Source: https://atsa-es.github.io/atsa-labs/sec-stan-ar1.html
13\.4 Autoregressive models
---------------------------
A variation of the random walk model described previously is the autoregressive time series model of order 1, AR(1\). This model is essentially the same as the random walk model, but it introduces an estimated coefficient, which we will call \\(\\phi\\). The parameter \\(\\phi\\) controls the degree to which the random walk reverts to the mean – when \\(\\phi \= 1\\), the model is identical to the random walk, but at smaller values the model will revert to the mean (which in this case is zero). Also, \\(\\phi\\) can take on negative values, which we’ll discuss more in future lectures. The math to describe the AR(1\) model is:
\\\[y\_t \= \\phi y\_{t\-1} \+ e\_{t}\\]
The `fit_stan()` function can fit higher order AR models, but for now we just want to fit an AR(1\) model and make a histogram of phi.
```
ar1 <- atsar::fit_stan(y = Temp, x = matrix(1, nrow = length(Temp),
ncol = 1), model_name = "ar", est_drift = FALSE, P = 1)
```
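The text above says to make a histogram of \\(\\phi\\). A minimal sketch of one way to do that, assuming `fit_stan()` returns a `stanfit`-like object that `rstan::extract()` can read and that the AR coefficient is stored under the name `phi` (both are assumptions here):
```
# extract posterior draws from the fitted object (assumes rstan::extract() works on it)
draws <- rstan::extract(ar1)
# histogram of the posterior draws of the AR(1) coefficient
# (assumes the coefficient is named 'phi' in the Stan model)
hist(draws$phi, breaks = 30, xlab = "phi", main = "Posterior of phi")
```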
Source: https://atsa-es.github.io/atsa-labs/sec-stan-uss.html
13\.5 Univariate state\-space models
------------------------------------
At this point, we’ve fit models with observation or process error, but we haven’t tried to estimate both simultaneously. We will do so here, and introduce some new notation to describe the process model and observation model. We use the notation \\({x\_t}\\) to denote the latent state or state of nature (which is unobserved) at time \\(t\\) and \\({y\_t}\\) to denote the observed data. For introductory purposes, we’ll make the process model autoregressive (similar to our AR(1\) model),
\\\[x\_{t} \= \\phi x\_{t\-1} \+ e\_{t}, e\_{t} \\sim N(0,q)\\]
For the process model, there are a number of ways to parameterize the first ‘state’, and we’ll talk about this more in class, but for this model we’ll place a weakly informative prior on \\(x\_0\\): \\(x\_0 \\sim N(0, 10\)\\). Second, we need to construct an observation model linking the estimated unseen states of nature \\(x\_t\\) to the data \\(y\_t\\). For simplicity, we’ll assume that the observation errors are independent and identically distributed. Mathematically, this model is
\\\[Y\_t \\sim N(x\_t, r)\\]
In the two models above, we’ll refer to \\(q\\) as the standard deviation of the process errors and \\(r\\) as the standard deviation of the observation errors.
We can fit the state\-space AR(1\) and random walk models using the `fit_stan()` function:
```
ss_ar <- atsar::fit_stan(y = Temp, est_drift = FALSE, model_name = "ss_ar")
ss_rw <- atsar::fit_stan(y = Temp, est_drift = FALSE, model_name = "ss_rw")
```
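To see what the fits look like, one option is to pull the posterior of the latent states and overlay their mean on the data. This is only a sketch, assuming the fitted object can be read by `rstan::extract()` and that the states are stored under the name `pred` (both assumptions; adjust the name to whatever the Stan model actually uses):
```
pars <- rstan::extract(ss_ar)
# posterior mean of the latent state at each time step (name 'pred' is an assumption)
state_mean <- apply(pars$pred, 2, mean)
plot(Temp, pch = 16, col = "grey", ylab = "Temp")
lines(state_mean, lwd = 2)
```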
Source: https://atsa-es.github.io/atsa-labs/sec-stan-dfa.html
13\.6 Dynamic factor analysis
-----------------------------
First load the plankton dataset from the **MARSS** package.
```
library(MARSS)
data(lakeWAplankton, package = "MARSS")
# we want lakeWAplanktonTrans, which has been transformed
# so the 0s are replaced with NAs and the data z-scored
dat <- lakeWAplanktonTrans
# use only the 10 years from 1980-1989
plankdat <- dat[dat[, "Year"] >= 1980 & dat[, "Year"] < 1990,
]
# create vector of phytoplankton group names
phytoplankton <- c("Cryptomonas", "Diatoms", "Greens", "Unicells",
"Other.algae")
# get only the phytoplankton
dat.spp.1980 <- t(plankdat[, phytoplankton])
# z-score the data since we subsetted time
dat.spp.1980 <- MARSS::zscore(dat.spp.1980)
# check our z-score
apply(dat.spp.1980, 1, mean, na.rm = TRUE)
```
```
Cryptomonas Diatoms Greens Unicells Other.algae
4.740855e-17 -5.592676e-18 -4.486354e-19 -2.699663e-18 6.517410e-18
```
```
apply(dat.spp.1980, 1, var, na.rm = TRUE)
```
```
Cryptomonas Diatoms Greens Unicells Other.algae
1 1 1 1 1
```
Plot the data.
```
# make into ts since easier to plot
dat.ts <- ts(t(dat.spp.1980), frequency = 12, start = c(1980,
1))
par(mfrow = c(3, 2), mar = c(2, 2, 2, 2))
for (i in 1:5) {
plot(dat.ts[, i], type = "b", main = colnames(dat.ts)[i],
col = "blue", pch = 16)
}
```
Figure 13\.3: Phytoplankton data.
Run a 3 trend model on these data.
```
mod_3 <- bayesdfa::fit_dfa(y = dat.spp.1980, num_trends = 3,
chains = 1, iter = 1000)
```
Rotate the estimated trends and look at what it produces.
```
rot <- bayesdfa::rotate_trends(mod_3)
names(rot)
```
```
[1] "Z_rot" "trends" "Z_rot_mean" "Z_rot_median"
[5] "trends_mean" "trends_median" "trends_lower" "trends_upper"
```
Plot the estimate of the trends.
```
matplot(t(rot$trends_mean), type = "l", lwd = 2, ylab = "mean trend")
```
Figure 13\.4: Trends.
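It can also be useful to look at the rotated loadings, which describe how each time series maps onto the trends. A minimal sketch, assuming `Z_rot_mean` is a matrix with one row per time series and one column per trend (the layout is an assumption based on the `names(rot)` output above):
```
# rotated loadings: rows = time series, columns = trends (assumed layout)
Z_mean <- rot$Z_rot_mean
rownames(Z_mean) <- rownames(dat.spp.1980)
barplot(t(Z_mean), beside = TRUE, las = 2, ylab = "loading",
    legend.text = paste("Trend", 1:3))
```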
### 13\.6\.1 Using leave one out cross\-validation to select models
We will fit multiple DFA with different numbers of trends and use leave one out (LOO) cross\-validation to choose the best model.
```
mod_1 <- bayesdfa::fit_dfa(y = dat.spp.1980, num_trends = 1,
iter = 1000, chains = 1)
mod_2 <- bayesdfa::fit_dfa(y = dat.spp.1980, num_trends = 2,
iter = 1000, chains = 1)
mod_3 <- bayesdfa::fit_dfa(y = dat.spp.1980, num_trends = 3,
iter = 1000, chains = 1)
mod_4 <- bayesdfa::fit_dfa(y = dat.spp.1980, num_trends = 4,
iter = 1000, chains = 1)
# mod_5 = bayesdfa::fit_dfa(y = dat.spp.1980, num_trends=5)
```
We will compute the Leave One Out Information Criterion (LOOIC) using the **loo** package. Like AIC, lower is better.
```
loo(mod_1)$estimates["looic", "Estimate"]
```
```
[1] 1613.758
```
Table of the LOOIC values:
```
looics <- c(loo(mod_1)$estimates["looic", "Estimate"], loo(mod_2)$estimates["looic",
"Estimate"], loo(mod_3)$estimates["looic", "Estimate"], loo(mod_4)$estimates["looic",
"Estimate"])
looic.table <- data.frame(trends = 1:4, LOOIC = looics)
looic.table
```
```
trends LOOIC
1 1 1613.758
2 2 1540.511
3 3 1477.859
4 4 1457.230
```
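With the table in hand, picking the model with the lowest LOOIC is a one-liner:
```
# model with the lowest LOOIC (lower is better, as with AIC)
looic.table[which.min(looic.table$LOOIC), ]
```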
Source: https://atsa-es.github.io/atsa-labs/sec-stan-state-uncertainty.html
13\.7 Uncertainty intervals on states
-------------------------------------
We will look at the effect of missing data on the uncertainty intervals of the estimated states, using a DFA on the harbor seal dataset.
```
data(harborSealWA, package = "MARSS")
# the first column is year
matplot(harborSealWA[, 1], harborSealWA[, -1], type = "l", ylab = "Log abundance",
xlab = "")
```
Assume they are all observing a single trend.
```
seal.mod <- bayesdfa::fit_dfa(y = t(harborSealWA[, -1]), num_trends = 1,
chains = 1, iter = 1000)
```
```
pars <- rstan::extract(seal.mod$model)
```
```
pred_mean <- c(apply(pars$x, c(2, 3), mean))
pred_lo <- c(apply(pars$x, c(2, 3), quantile, 0.025))
pred_hi <- c(apply(pars$x, c(2, 3), quantile, 0.975))
plot(pred_mean, type = "l", lwd = 3, ylim = range(c(pred_mean,
pred_lo, pred_hi)), main = "Trend")
lines(pred_lo)
lines(pred_hi)
```
Figure 13\.5: Estimated states and 95 percent credible intervals.
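To actually see the effect of missing data on the interval widths, one approach is to knock out some observations, refit, and redraw the intervals. A minimal sketch, assuming `fit_dfa()` accepts NAs in the response matrix (the proportion dropped and the seed are arbitrary choices):
```
# randomly drop roughly 30% of the observations
y_miss <- t(harborSealWA[, -1])
set.seed(42)
y_miss[sample(length(y_miss), size = round(0.3 * length(y_miss)))] <- NA
seal.miss <- bayesdfa::fit_dfa(y = y_miss, num_trends = 1, chains = 1, iter = 1000)
# same summaries as above
pars_miss <- rstan::extract(seal.miss$model)
pred_mean <- c(apply(pars_miss$x, c(2, 3), mean))
pred_lo <- c(apply(pars_miss$x, c(2, 3), quantile, 0.025))
pred_hi <- c(apply(pars_miss$x, c(2, 3), quantile, 0.975))
plot(pred_mean, type = "l", lwd = 3, ylim = range(c(pred_mean, pred_lo, pred_hi)),
    main = "Trend with extra missing data")
lines(pred_lo)
lines(pred_hi)
```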
Source: https://atsa-es.github.io/atsa-labs/sec-neon-data.html
13\.8 NEON EFI Aquatics Challenge
---------------------------------
The data for the aquatics challenge comes from a NEON site at Lake Barco (Florida). More about the data and challenge is [here](https://ecoforecast.org/efi-rcn-forecast-challenges/) and the Github repository for getting all the necessary data is [here](https://github.com/eco4cast/neon4cast-aquatics).
We pulled in the data with the code from the challenge and saved the data for Lake Barco in the atsalibrary package. See `?neon_barc` for more details.
```
data(neon_barc, package = "atsalibrary")
```
Familiarize yourself with the oxygen and temperature data from Lake Barco by looking at `neon_barc`.
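A quick way to do that is to plot the two series (the `date`, `oxygen`, and `temperature` column names are taken from the code used later in this chapter):
```
library(ggplot2)
# oxygen and temperature observations at Lake Barco
ggplot(neon_barc, aes(date, oxygen)) + geom_point(size = 0.5)
ggplot(neon_barc, aes(date, temperature)) + geom_point(size = 0.5)
```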
Question: what sort of models do you think would be good candidates for forecasting?
Source: https://atsa-es.github.io/atsa-labs/sec-neon-oxygen.html
13\.9 NEON EFI Aquatics Challenge
---------------------------------
Before modeling temperature and oxygen jointly, we’ll start working with just the oxygen data alone.
We’ll start with just an AR(p) state\-space model. The Lake Barco data include standard errors for the oxygen observations, so we’ll treat those as a known observation error (rather than estimating that variance parameter, as in our previous work).
A script that describes the model is called `model_01.stan`. You can download this file [here](./Rcode/model_01.stan).
This should be familiar, following from the `atsar` code with a couple of modifications. First, Stan doesn’t like NAs being passed in as data, so we’ve had to do some indexing to avoid that. Second, notice that the model is flexible and can be used to fit models with any number of lags.
To get the data prepped for Stan, we need to do a few things. First, specify the lag and the forecast horizon:
```
data <- neon_barc
data$indx <- seq(1, nrow(data))
n_forecast <- 7
n_lag <- 1
```
Next, we’ll drop observations with missing oxygen. If you wanted to do some validation, we could also split the data into a training and test set.
```
# As a first model, we'll just work with modeling oxygen
o2_dat <- dplyr::filter(data, !is.na(oxygen))
# split the test and training data
last_obs <- max(data$indx) - n_forecast
o2_train <- dplyr::filter(o2_dat, indx <= last_obs)
test <- dplyr::filter(data, indx > last_obs)
o2_x <- o2_train$indx
o2_y <- o2_train$oxygen
o2_sd <- o2_train$oxygen_sd
n_o2 <- nrow(o2_train)
```
Remember that the Stan data needs to be in a list,
```
stan_data <- list(n = last_obs, n_o2 = n_o2, n_lag = n_lag, n_forecast = n_forecast,
o2_x = o2_x, o2_y = o2_y, o2_sd = o2_sd)
```
Finally we can compile the model. If we wanted to do fully Bayesian estimates, we could do that with
```
fit <- stan(file = "model_01.stan", data = stan_data)
```
Try fitting the Bayesian model with a short chain length (maybe 1000 iterations) and 1 MCMC chain.
But because we’re interested in doing this quickly, and running the model a bunch of times, we’ll try Stan’s optimizing function for MAP estimation.
```
m <- stan_model(file = "model_01.stan")
o2_model <- rstan::optimizing(m, data = stan_data, hessian = TRUE)
```
Let’s extract predictions from the fitted object,
```
data$pred <- o2_model$par[grep("pred", names(o2_model$par))]
ggplot(data, aes(date, pred)) + geom_line() + geom_point(aes(date,
oxygen), col = "red", alpha = 0.5)
```
Question: how do we evaluate predictions? We’ll talk about that more next week, but for today we can think about it as RMSE(observations, predictions).
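For example, using the predictions we just attached to `data`, a rough overall RMSE is:
```
# root mean squared error between observed oxygen and the model predictions
sqrt(mean((data$oxygen - data$pred)^2, na.rm = TRUE))
```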
Question: modify the code above (just on the R side, not Stan) to fit a lag\-2 or lag\-3 model. Are the predictions similar?
Source: https://atsa-es.github.io/atsa-labs/sec-neon-oxygen-iterate.html
13\.10 NEON EFI Aquatics Challenge
----------------------------------
In generating forecasts, it’s often a good idea to use multiple training / test sets. One question with this challenge is what’s the best lag (or order of AR model) for generating forecasts? Starting with lag\-1, we can evaluate the effects of different lags by starting at some point in the dataset (e.g. using 50% of the observations), and generate and evaluate forecasts. We then iterate, adding a day of data, generating and evaluating forecasts, and repeating through the rest of the data. \[An alternative approach would be to adopt a moving window, where each prediction would be based on the same number of historical data points].
I’ve generalized the code above into a function that takes the Lake Barco dataset, the index of the last training observation, the forecast horizon, and the lag, and returns the training data, the Stan data list, and the test data.
```
create_stan_data <- function(data, last_obs, n_forecast, n_lag) {
o2_test <- dplyr::filter(data, indx %in% seq(last_obs + 1,
(last_obs + n_forecast)))
o2_train <- dplyr::filter(data, indx <= last_obs, !is.na(oxygen))
o2_x <- o2_train$indx
o2_y <- o2_train$oxygen
o2_sd <- o2_train$oxygen_sd
n_o2 <- nrow(o2_train)
stan_data <- list(n = last_obs, n_o2 = n_o2, n_lag = n_lag,
n_forecast = n_forecast, o2_x = o2_x, o2_y = o2_y, o2_sd = o2_sd)
return(list(train = o2_train, stan_data = stan_data, test = o2_test))
}
```
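For example, calling it once with the settings used earlier returns the training data, the Stan data list, and the test set:
```
# build the training/test split and the Stan data list for one cutoff
dat_list <- create_stan_data(data, last_obs = 500, n_forecast = 7, n_lag = 1)
names(dat_list)
str(dat_list$stan_data)
```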
Now we can try iterating over sets of data with a lag-1 model.
```
n_forecast <- 7
n_lag <- 1
rmse <- NA
for (i in 500:(nrow(data) - n_lag)) {
dat_list <- create_stan_data(data, last_obs = i, n_forecast = n_forecast,
n_lag = n_lag)
# fit the model. optimizing() can be sensitive to starting
# values, so try several starts and keep the best one
best_map <- -1e+100
for (j in 1:10) {
test_fit <- rstan::optimizing(m, data = dat_list$stan_data)
if (test_fit$value > best_map) {
fit <- test_fit
best_map <- test_fit$value
}
}
if (fit$return_code == 0) {
# extract forecasts
pred <- fit$par[grep("forecast", names(fit$par))]
# evaluate predictions
rmse[i] <- sqrt(mean((dat_list$test$oxygen - pred)^2,
na.rm = T))
}
}
```
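The vector `rmse` now holds one value per training-set endpoint, so a quick plot shows how the 7-day forecast error changes as more data are added:
```
# RMSE of the forecasts as a function of the last training observation
plot(rmse, xlab = "last training observation", ylab = "RMSE", pch = 16)
```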
Source: https://atsa-es.github.io/atsa-labs/sec-neon-oxygen-temp.html
13\.11 NEON EFI Aquatics Challenge
----------------------------------
So far, these models have only included oxygen. We can easily bring in the temperature data, which has its own standard error associated with it. We can set up a null model that treats temperature and oxygen independently, with different lags allowed for each.
First we have to modify our data function to also generate temperature data.
```
create_stan_data <- function(data, last_obs, n_forecast, n_lag_o2,
n_lag_temp) {
# create test data
o2_test <- dplyr::filter(data, indx %in% seq(last_obs + 1,
(last_obs + n_forecast)))
temp_test <- dplyr::filter(data, indx %in% seq(last_obs +
1, (last_obs + n_forecast)))
o2_train <- dplyr::filter(data, indx <= last_obs, !is.na(oxygen))
o2_x <- o2_train$indx
o2_y <- o2_train$oxygen
o2_sd <- o2_train$oxygen_sd
n_o2 <- nrow(o2_train)
temp_train <- dplyr::filter(data, indx <= last_obs, !is.na(temperature))
temp_x <- temp_train$indx
temp_y <- temp_train$temperature
temp_sd <- temp_train$temperature_sd
n_temp <- nrow(temp_train)
stan_data <- list(n = last_obs, n_lag_o2 = n_lag_o2, n_lag_temp = n_lag_temp,
n_forecast = n_forecast, n_o2 = n_o2, o2_x = o2_x, o2_y = o2_y,
o2_sd = o2_sd, n_temp = n_temp, temp_x = temp_x, temp_y = temp_y,
temp_sd = temp_sd)
return(list(o2_train = o2_train, temp_train = temp_train,
stan_data = stan_data, o2_test = o2_test, temp_test = temp_test))
}
```
```
m <- stan_model(file = "model_02.stan")
```
where `model_02.stan` is the Stan model file that you can [download here](./Rcode/model_02.stan).
```
# Now we can try iterating over sets of data with a lag 1
# model
n_forecast <- 7
n_lag <- 1
rmse <- NA
for (i in 500:(nrow(data) - n_lag)) {
dat_list <- create_stan_data(data, last_obs = i, n_forecast = n_forecast,
n_lag_o2 = n_lag, n_lag_temp = n_lag)
# fit the model. optimizing() can be sensitive to starting
# values, so try several starts and keep the best one
best_map <- -1e+100
for (j in 1:10) {
test_fit <- rstan::optimizing(m, data = dat_list$stan_data)
if (test_fit$value > best_map) {
fit <- test_fit
best_map <- test_fit$value
}
}
# extract forecasts
o2_pred <- fit$par[grep("o2_forecast", names(fit$par))]
temp_pred <- fit$par[grep("temp_forecast", names(fit$par))]
pred <- c(o2_pred, temp_pred)
obs <- c(dat_list$o2_test$oxygen, dat_list$temp_test$temperature)
# evaluate predictions
rmse[i] <- sqrt(mean((obs - pred)^2, na.rm = T))
print(rmse)
}
```
Question: why do the future predictions of either temp or oxygen sometimes fall to 0, and appear to do poorly?
Question: modify the script above to loop over various oxygen and temperature lags. Using a subset of data, say observations 500:800, which combination yields the best predictions?
Source: https://atsa-es.github.io/atsa-labs/prep.html
14\.2 Prep
----------
```
library(ggplot2)
library(MARSS)
library(stringr)
set.seed(1234)
```
Source: https://atsa-es.github.io/atsa-labs/create-data.html
14\.4 Create data
-----------------
Plot the data and the error added to the signal by each sensor. The data are the error (on the right) plus the signal. The signal is definitely not obvious in the data; the data look mostly like the error, which is autocorrelated and so has trends in it, unlike white noise.
```
p1 <- ggplot(subset(df, name != "signal"), aes(x = t, y = val)) +
geom_line() + facet_wrap(~name, ncol = 2)
p1
```
Source: https://atsa-es.github.io/atsa-labs/fit-the-model.html
14\.6 Fit the model
-------------------
We specify this one\-to\-one in R for `MARSS()`:
```
makemod <- function(n) {
B <- matrix(list(0), n + 1, n + 1)
diag(B)[2:(n + 1)] <- paste0("b", 1:n)
B[1, 1] <- 1
A <- "zero"
Z <- cbind(1, diag(1, n))
Q <- matrix(list(0), n + 1, n + 1)
Q[1, 1] <- 1
diag(Q)[2:(n + 1)] <- paste0("q", 1:n)
R <- "zero"
U <- "zero"
x0 <- "zero"
mod.list <- list(B = B, A = A, Z = Z, Q = Q, R = R, U = U,
x0 = x0, tinitx = 0)
return(mod.list)
}
mod.list1 <- makemod(3)
```
Demean the data.
```
dat2 <- dat - apply(dat, 1, mean) %*% matrix(1, 1, TT)
```
Fit to the demeaned data:
```
fit.mod1 <- MARSS(dat2, model = mod.list1)
```
```
Success! algorithm run for 15 iterations. abstol and log-log tests passed.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Algorithm ran 15 (=minit) iterations and convergence was reached.
Log-likelihood: -244.5957
AIC: 501.1914 AICc: 502.2035
Estimate
B.b1 0.790
B.b2 0.406
B.b3 0.905
Q.q1 1.004
Q.q2 23.042
Q.q3 55.892
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
Source: https://atsa-es.github.io/atsa-labs/show-the-fits.html
14\.7 Show the fits
-------------------
X1 is the estimate of the signal. The mean has been removed. X2, X3 and X4 are the AR\-1 errors for our sensors.
```
require(ggplot2)
autoplot(fit.mod1, plot.type = "xtT", conf.int = FALSE)
```
Source: https://atsa-es.github.io/atsa-labs/missing-data.html
14\.10 Missing data
-------------------
One nice feature of this approach is that it is robust to a fair bit of missing data. Here I delete a third of the data, randomly throughout the dataset. The data look pretty hopeless; no signal to be seen.
Fit as usual:
```
fit <- MARSS(dat2.miss, model = mod.list1, silent = TRUE)
```
But though we can’t see the signal in the data, it is there.
Averaging our sensors doesn’t work since there are so many missing values and we will have missing values in our average.
Another type of missing data is strings of missing values. Here I create a data set with random strings of missing values. Again the data look really hopeless, and we definitely cannot average across the sensors since at each time step we would be averaging across a different set of them.
We can fit as usual and see that it is possible to recover the signal.
```
fit <- MARSS(dat2.miss, model = mod.list1, silent = TRUE)
```
Source: https://atsa-es.github.io/atsa-labs/correlated-noise.html
14\.11 Correlated noise
-----------------------
In the simulated data, the AR\-1 errors were uncorrelated: each error time series was independent of the others. But we might want to test a model where the errors are correlated. The processes that drive variability in sensors can sometimes be factors that are common across all our sensors, such as average wind speed or rainfall.
Our AR\-1 errors would then look like so, with covariances \\(c\_i\\):
\\\[\\begin{bmatrix}e \\\\ w\_1 \\\\ w\_2 \\\\ w\_3\\end{bmatrix}\_t \\sim MVN\\left(0, \\begin{bmatrix}1\&0\&0\&0\\\\0\&q\_1\&c\_1\&c\_2 \\\\ 0\&c\_1\&q\_2\&c\_3 \\\\ 0\&c\_2\&c\_3\&q\_3\\end{bmatrix}\\right)\\]
To fit this model, we need to create a \\(Q\\) matrix that looks like the above. It’s a bit of a hassle.
```
Q <- matrix(list(0), n + 1, n + 1)
Q[1, 1] <- 1
Q2 <- matrix("q", n, n)
diag(Q2) <- paste0("q", 1:n)
Q2[upper.tri(Q2)] <- paste0("c", 1:n)
Q2[lower.tri(Q2)] <- paste0("c", 1:n)
Q[2:(n + 1), 2:(n + 1)] <- Q2
Q
```
```
[,1] [,2] [,3] [,4]
[1,] 1 0 0 0
[2,] 0 "q1" "c1" "c2"
[3,] 0 "c1" "q2" "c3"
[4,] 0 "c2" "c3" "q3"
```
Now we can fit as usual using this \\(Q\\) in our model list.
```
mod.list2 <- mod.list1
mod.list2$Q <- Q
fit <- MARSS(dat2, model = mod.list2)
```
```
Success! abstol and log-log tests passed at 127 iterations.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Estimation converged in 127 iterations.
Log-likelihood: -242.6383
AIC: 503.2765 AICc: 505.5265
Estimate
B.b1 0.745
B.b2 0.389
B.b3 0.907
Q.q1 0.800
Q.c1 -1.721
Q.c2 -2.130
Q.q2 21.680
Q.c3 4.210
Q.q3 54.143
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
The AIC is larger, indicating that this model is not more supported, which is not surprising given that the simulated errors are not correlated with each other.
```
c(fit$AIC, fit.mod1$AIC)
```
```
[1] 503.2765 501.1914
```
Source: https://atsa-es.github.io/atsa-labs/discussion.html
14\.12 Discussion
-----------------
This example worked because I had a sensor that was quite a bit better than the others, with much smaller observation error (sd \= 1 versus 28 and 41 for the others). I didn’t know which one it was, but I did have at least one good sensor. If I increase the observation error variance on the first (good) sensor, then my signal estimate is not so good. The variance of the signal estimate is better than the average, but it is still bad. There is only so much that can be done when the sensor adds so much error.
```
sd <- sqrt(c(10, 28, 41))
dat[1, ] <- signal + arima.sim(TT, model = list(ar = ar[1]),
sd = sd[1])
dat2 <- dat - apply(dat, 1, mean) %*% matrix(1, 1, TT)
fit <- MARSS(dat2, model = mod.list1, silent = TRUE)
```
One solution is to have more sensors. They can all be horrible but now that I have more, I can get a better estimate of the signal. In this example I have 12 bad sensors instead of 3\. The properties of the sensors are the same as in the example above. I will add the new data to the existing data.
```
set.seed(123)
datm <- dat
for (i in 1:2) {
tmp <- createdata(n, TT, ar, sd)
datm <- rbind(datm, tmp$dat)
}
datm2 <- datm - apply(datm, 1, mean) %*% matrix(1, 1, TT)
fit <- MARSS(datm2, model = makemod(dim(datm2)[1]), silent = TRUE)
```
Some more caveats: I simulated data from the same model that I fit, except for the signal. However, an AR-1 model with \\(b\\) and \\(q\\) (sd) estimated is quite flexible, and this will likely work for data that are roughly AR-1. A common exception is very smooth data that you get from sensors that record dense data (like every second). That kind of sensor data may need to be subsampled (every 10th, 20th, or 30th data point) to get AR-1 like data.
Lastly I set the seed to 1234 to have an example that looks *ok*. If you comment that out and rerun the code, you’ll quickly see that the example I used is not one of the bad ones. It’s not unusually good, just not unusually bad.
On the other hand, I posed a difficult problem with two quite awful sensors. A sensor with a random walk error would be really alarming and hopefully you would not have that type of error. But you might. It can happen when local conditions are undergoing a random walk with slow reversion to the mean. Many natural systems look like that. If you have that problem, subsampling that *random walk* sensor might be a good idea.
Source: https://atsa-es.github.io/atsa-labs/sec-seasonal-dlm-overview.html
15\.1 Overview
--------------
As discussed in Section [8\.6\.3](sec-msscov-season.html#sec-msscov-season-fourier) in the covariates chapter, we can model season with sine and cosine covariates.
\\\[y\_t \= x\_t \+ \\beta\_1 \\sin(2 \\pi t/p) \+ \\beta\_2 \\cos(2 \\pi t/p) \+ e\_t\\]
where \\(t\\) is the time step (1 to the length of the time series) and \\(p\\) is the frequency of the data (e.g. 12 for monthly data). \\(x\_t\\) is the mean level about which the data \\(y\_t\\) are fluctuating.
We can simulate data like this as follows:
```
set.seed(1234)
TT <- 100
q <- 0.1
r <- 0.1
beta1 <- 0.6
beta2 <- 0.4
cov1 <- sin(2 * pi * (1:TT)/12)
cov2 <- cos(2 * pi * (1:TT)/12)
xt <- cumsum(rnorm(TT, 0, q))
yt <- xt + beta1 * cov1 + beta2 * cov2 + rnorm(TT, 0, r)
plot(yt, type = "l", xlab = "t")
```
In this case, the seasonal cycle is constant over time since \\(\\beta\_1\\) and \\(\\beta\_2\\) are fixed (not varying in time).
The \\(\\beta\\)’s determine the shape and amplitude of the seasonal cycle, though in this case we will only have one peak per year.
Source: https://atsa-es.github.io/atsa-labs/using-dlms-to-estimate-changing-season.html
15\.3 Using DLMs to estimate changing season
--------------------------------------------
Here is a DLM model with the level and season modeled as a random walk.
\\\[\\begin{bmatrix}x \\\\ \\beta\_1 \\\\ \\beta\_2 \\end{bmatrix}\_t \= \\begin{bmatrix}x \\\\ \\beta\_1 \\\\ \\beta\_2 \\end{bmatrix}\_{t\-1} \+ \\begin{bmatrix}w\_1 \\\\ w\_2 \\\\ w\_3 \\end{bmatrix}\_t\\]
\\\[y\_t \= \\begin{bmatrix}1\& \\sin(2\\pi t/p)\&\\cos(2\\pi t/p)\\end{bmatrix} \\begin{bmatrix}x \\\\ \\beta\_1 \\\\ \\beta\_2 \\end{bmatrix}\_t \+ v\_t\\]
We can fit the model to the \\(y\_t\\) data and estimate the \\(\\beta\\)’s and the level \\(x\\). We specify this one\-to\-one in R for `MARSS()`.
`Z` is time\-varying and we set this up with an array with the 3rd dimension being time.
```
Z <- array(1, dim = c(1, 3, TT))
Z[1, 2, ] <- sin(2 * pi * (1:TT)/12)
Z[1, 3, ] <- cos(2 * pi * (1:TT)/12)
```
Then we make our model list. We need to set `A` since `MARSS()` doesn’t like the default value of `scaling` when `Z` is time\-varying.
```
mod.list <- list(U = "zero", Q = "diagonal and unequal", Z = Z,
A = "zero")
```
When we fit the model we need to give `MARSS()` initial values for `x0`. It cannot come up with default ones for this model. It doesn’t really matter what you pick.
```
require(MARSS)
fit <- MARSS(yt, model = mod.list, inits = list(x0 = matrix(0,
3, 1)))
```
```
Success! abstol and log-log tests passed at 45 iterations.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Estimation converged in 45 iterations.
Log-likelihood: 19.0091
AIC: -24.01821 AICc: -22.80082
Estimate
R.R 0.00469
Q.(X1,X1) 0.01476
Q.(X2,X2) 0.00638
Q.(X3,X3) 0.00580
x0.X1 -0.11002
x0.X2 -0.85340
x0.X3 0.92787
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
The \\(\\beta\_1\\) estimate is State X2 and \\(\\beta\_2\\) is State X3\. The estimates match what we put into the simulated data.
Figure: Estimated states (plot.type = "xtT").
We can compare the estimated cycles to the ones used in the simulation.
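A minimal way to do that with the MARSS output, where states X2 and X3 are the time-varying \\(\\beta\\)’s and `beta1` and `beta2` are the constants used in the simulation:
```
# estimated time-varying beta_1 (black) and beta_2 (blue)
plot(fit$states[2, ], type = "l", ylab = "beta", ylim = range(fit$states[2:3, ]))
lines(fit$states[3, ], col = "blue")
# dashed lines show the constant values used to simulate the data
abline(h = c(beta1, beta2), lty = 2, col = c("black", "blue"))
```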
We can make this a bit harder by imagining that our data have missing values. Let’s imagine that we only observe half the months.
```
yt.miss <- yt
yt.miss[sample(100, 50)] <- NA
plot(yt, type = "l")
points(yt.miss)
```
```
require(MARSS)
fit.miss <- MARSS(yt.miss, model = mod.list, inits = list(x0 = matrix(0,
3, 1)))
```
```
Success! abstol and log-log tests passed at 108 iterations.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Estimation converged in 108 iterations.
Log-likelihood: 0.1948933
AIC: 13.61021 AICc: 16.27688
Estimate
R.R 0.00243
Q.(X1,X1) 0.01342
Q.(X2,X2) 0.00794
Q.(X3,X3) 0.00580
x0.X1 -0.11497
x0.X2 -0.82921
x0.X3 0.92696
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
```
The model can still pick out the changing seasonal cycle.
Figure: Estimated states (plot.type = "xtT").
Source: https://atsa-es.github.io/atsa-labs/time-varying-amplitude.html
15\.4 Time\-varying amplitude
-----------------------------
Instead of a constant seasonality, we can imagine that it varies in time. The first way it might vary is in the amplitude of the seasonality. So the location of the peak is the same but the difference between the peak and valley changes.
\\\[z\_t\\left(\\beta\_1 \\sin(2 \\pi t/p) \+ \\beta\_2 \\cos(2 \\pi t/p)\\right)\\]
In this case, the \\(\\beta\\)’s remain constant but the sum of the sines and cosines is multiplied by a time\-varying scaling factor.
Here we simulate some data where \\(z\_t\\) is sinusoidal and is largest in the beginning of the time\-series. Note we want \\(z\_t\\) to stay positive otherwise our peak will become a valley when \\(z\_t\\) goes negative.
```
set.seed(1234)
TT <- 100
q <- 0.1
r <- 0.1
beta1 <- 0.6
beta2 <- 0.4
zt <- 0.5 * sin(2 * pi * (1:TT)/TT) + 0.75
cov1 <- sin(2 * pi * (1:TT)/12)
cov2 <- cos(2 * pi * (1:TT)/12)
xt <- cumsum(rnorm(TT, 0, q))
yt <- xt + zt * beta1 * cov1 + zt * beta2 * cov2 + rnorm(TT,
0, r)
plot(yt, type = "l", xlab = "t")
```
### 15\.4\.1 Fitting the model
When the seasonality is written as
\\\[z\_t\\left(\\beta\_1 \\sin(2 \\pi t/p) \+ \\beta\_2 \\cos(2 \\pi t/p)\\right)\\]
our model is under\-determined because we only ever estimate the products \\(z\_t \\beta\_1\\) and \\(z\_t \\beta\_2\\): we can scale \\(z\_t\\) up and the \\(\\beta\\)’s correspondingly down and get the same values (multiply \\(z\_t\\) by 2 and divide the \\(\\beta\\)’s by 2, say). *Recognizing when your model is under\-determined takes some experience. If you work in a Bayesian framework, it is a bit easier because you can look at the posterior distributions and look for ridges.* We can fix this by multiplying \\(z\_t\\) by \\(\\beta\_1\\) and dividing the seasonal part by \\(\\beta\_1\\). Then our seasonal model becomes
\\\[(z\_t \\beta\_1\) \\left(\\sin(2 \\pi t/p) \+ (\\beta\_2/\\beta\_1\) \\cos(2 \\pi t/p)\\right) \= x\_{2,t} \\left(\\sin(2 \\pi t/p) \+ \\beta \\cos(2 \\pi t/p)\\right)\\]
where \\(x\_{2,t} \= z\_t \\beta\_1\\) and \\(\\beta \= \\beta\_2/\\beta\_1\\).
The seasonality (peak location) will be the same for \\((\\sin(2 \\pi t/p) \+ \\beta \\cos(2 \\pi t/p))\\) and \\((\\beta\_1 \\sin(2 \\pi t/p) \+ \\beta\_2 \\cos(2 \\pi t/p))\\). The only thing that is different is the amplitude and we are using \\(x\_{2,t}\\) to determine the amplitude.
Now our \\(x\\) and \\(y\\) models look like this. Notice that the \\(\\mathbf{Z}\\) is \\(1 \\times 2\\) instead of \\(1 \\times 3\\).
\\\[\\begin{bmatrix}x\_1 \\\\ x\_2 \\end{bmatrix}\_t \= \\begin{bmatrix}x\_1 \\\\ x\_2 \\end{bmatrix}\_{t\-1} \+ \\begin{bmatrix}w\_1 \\\\ w\_2 \\end{bmatrix}\_t\\]
\\\[y\_t \= \\begin{bmatrix}1\& \\sin(2\\pi t/p) \+ \\beta \\cos(2\\pi t/p)\\end{bmatrix} \\begin{bmatrix}x\_1 \\\\ x\_2 \\end{bmatrix}\_t \+ v\_t\\]
To set up the `Z` matrix, we can pass in values like `"1 + 0.5*beta"`. `MARSS()` will translate that to \\(1\+0\.5\\beta\\).
```
Z <- array(list(1), dim = c(1, 2, TT))
Z[1, 2, ] <- paste0(sin(2 * pi * (1:TT)/12), " + ", cos(2 * pi *
(1:TT)/12), "*beta")
```
Then we make our model list. We need to set `A` since `MARSS()` doesn’t like the default value of `scaling` when `Z` is time\-varying.
```
mod.list <- list(U = "zero", Q = "diagonal and unequal", Z = Z,
A = "zero")
```
```
require(MARSS)
fit <- MARSS(yt, model = mod.list, inits = list(x0 = matrix(0,
2, 1)))
```
We are able to recover the level, seasonality and changing amplitude of the seasonality.
Source: https://atsa-es.github.io/atsa-labs/chap-cyclic-sockeye.html
Chapter 16 Modeling cyclic sockeye
==================================
A script with all the R code in the chapter can be downloaded [here](./Rcode/cyclic-sockeye.R). The Rmd for this chapter can be downloaded [here](./Rmds/cyclic-sockeye.Rmd)
### Data and packages
```
library(atsalibrary)
library(ggplot2)
library(MARSS)
```
Source: https://atsa-es.github.io/atsa-labs/modeling-the-cycle.html
16\.3 Modeling the cycle
------------------------
As discussed in Chapter [15](chap-seasonal-dlm.html#chap-seasonal-dlm), we can model changes in seasonality with a DLM with sine and cosine covariates. Here is a DLM model with the level and season modeled as a random walk.
\\\[\\begin{bmatrix}x \\\\ \\beta\_1 \\\\ \\beta\_2 \\end{bmatrix}\_t \= \\begin{bmatrix}x \\\\ \\beta\_1 \\\\ \\beta\_2 \\end{bmatrix}\_{t\-1} \+ \\begin{bmatrix}w\_1 \\\\ w\_2 \\\\ w\_3 \\end{bmatrix}\_t\\]
\\\[y\_t \= \\begin{bmatrix}1\& \\sin(2\\pi t/p)\&\\cos(2\\pi t/p)\\end{bmatrix} \\begin{bmatrix}x \\\\ \\beta\_1 \\\\ \\beta\_2 \\end{bmatrix}\_t \+ v\_t\\]
We can fit the model to the Kvichak River log spawner data and estimate the \\(\\beta\\)’s and the stochastic level (\\(x\\)). These are annual data, so what does \\(p\\) mean? \\(p\\) is the number of time steps between peaks. For sockeye that is 5 years, so we set \\(p\=5\\). If \\(p\\) were changing, that would cause problems, but it is not for these data (which you can confirm by looking at the ACF for different parts of the time series).
### 16\.3\.1 Set up the data
```
river <- "KVICHAK"
df <- subset(sockeye, region == river)
yt <- log(df$spawners)
TT <- length(yt)
p <- 5
```
### 16\.3\.2 Specify the \\(\\mathbf{Z}\\) matrix
\\(\\mathbf{Z}\\) is time\-varying and we set this up with an array with the 3rd dimension being time.
```
Z <- array(1, dim = c(1, 3, TT))
Z[1, 2, ] <- sin(2 * pi * (1:TT)/p)
Z[1, 3, ] <- cos(2 * pi * (1:TT)/p)
```
### 16\.3\.3 Specify the model list
Then we make our model list. We need to set \\(\\mathbf{A}\\) since `MARSS()` doesn’t like the default value of `scaling` when \\(\\mathbf{Z}\\) is time\-varying.
```
mod.list <- list(U = "zero", Q = "diagonal and unequal", Z = Z,
A = "zero")
```
### 16\.3\.4 Fit the model
When we fit the model we need to give `MARSS()` initial values for `x0`. It cannot come up with default ones for this model. It doesn’t really matter what you pick.
```
m <- dim(Z)[2]
fit <- MARSS(yt, model = mod.list, inits = list(x0 = matrix(0,
m, 1)))
```
```
Warning! Abstol convergence only. Maxit (=500) reached before log-log convergence.
MARSS fit is
Estimation method: kem
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
WARNING: Abstol convergence only no log-log convergence.
maxit (=500) reached before log-log convergence.
The likelihood and params might not be at the ML values.
Try setting control$maxit higher.
Log-likelihood: -58.61614
AIC: 131.2323 AICc: 133.899
Estimate
R.R 0.415426
Q.(X1,X1) 0.012584
Q.(X2,X2) 0.000547
Q.(X3,X3) 0.037008
x0.X1 7.897380
x0.X2 -0.855359
x0.X3 1.300854
Initial states (x0) defined at t=0
Standard errors have not been calculated.
Use MARSSparamCIs to compute CIs and bias estimates.
Convergence warnings
Warning: the Q.(X2,X2) parameter value has not converged.
Type MARSSinfo("convergence") for more info on this warning.
```
### 16\.3\.5 Plot the output
The \\(\\beta\_1\\) estimate is State X2 and \\(\\beta\_2\\) is State X3\.
Figure: Estimated states (plot.type = "xtT").
We can plot our cycle estimates and see that the peak has shifted over time. The peak has not occurred regularly every 5 years.
Let’s look at the other rivers. Write a function to do the fit.
```
fitriver <- function(river, p = 5) {
df <- subset(sockeye, region == river)
yt <- log(df$spawners)
TT <- length(yt)
Z <- array(1, dim = c(1, 3, TT))
Z[1, 2, ] <- sin(2 * pi * (1:TT)/p)
Z[1, 3, ] <- cos(2 * pi * (1:TT)/p)
mod.list <- list(U = "zero", Q = "diagonal and unequal",
Z = Z, A = "zero")
fit <- MARSS(yt, model = mod.list, inits = list(x0 = matrix(0,
3, 1)), silent = TRUE)
return(fit)
}
```
Then make a list with all the fits.
```
fits <- list()
for (river in names(a)) {
fits[[river]] <- fitriver(river)
}
```
Create a data frame of the amplitude of the cycle (\\(\\sqrt{\\beta\_1^2\+\\beta\_2^2}\\)) and the stochastic level (\\(x\\)).
```
dfz <- data.frame()
for (river in names(a)) {
fit <- fits[[river]]
tmp <- data.frame(amplitude = sqrt(fit$states[2, ]^2 + fit$states[3,
]^2), trend = fit$states[1, ], river = river, brood_year = subset(sockeye,
region == river)$brood_year)
dfz <- rbind(dfz, tmp)
}
```
Source: https://atsa-es.github.io/atsa-labs/univariate-results.html
16\.4 Univariate results
------------------------
Plot of the amplitude of the cycles. All the rivers were analyzed independently. It certainly looks like there are common patterns in the amplitude of the cycles with many showing a steady decline in amplitude. Note the counts were not decreasing so this is not due to fewer spawners.
```
ggplot(dfz, aes(x = brood_year, y = amplitude)) + geom_line() +
facet_wrap(~river, scales = "free_y") + ggtitle("Cycle Amplitude")
```
Plot of the stochastic level. Again all the rivers were analyzed independently. It certainly looks like there are common patterns in the trends. In the next step, we can test this.
```
ggplot(dfz, aes(x = brood_year, y = trend)) + geom_line() + facet_wrap(~river,
scales = "free_y") + ggtitle("Stochastic Level")
```
Source: https://atsa-es.github.io/atsa-labs/multivariate-dlm-1-synchrony-in-levels.html
16\.5 Multivariate DLM 1: Synchrony in levels
---------------------------------------------
In the first analysis, we will look at whether the stochastic levels (underlying trends) are correlated. We will analyze all the rivers together but in the equations, we will show just two rivers to keep the equations concise.
### 16\.5\.1 State model
The hidden states model will have the following components:
* Each trend \\(x\\) will be modeled as separate but allowed to be correlated. This means either an unconstrained \\(\\mathbf{Q}\\) or an equal variance and equal covariance matrix.
* Each seasonal trend, the \\(\\beta\\)’s, will also be treated as separate but independent. This means either a diagonal \\(\\mathbf{Q}\\) with equal variances or a diagonal \\(\\mathbf{Q}\\) with unequal variances.
The \\(\\mathbf{x}\\) equation is then:
\\\[\\begin{bmatrix}x\_a \\\\ x\_b \\\\ \\beta\_{1a} \\\\ \\beta\_{1b} \\\\ \\beta\_{2a} \\\\ \\beta\_{2b} \\end{bmatrix}\_t \= \\begin{bmatrix}x\_a \\\\ x\_b \\\\ \\beta\_{1a} \\\\ \\beta\_{1b} \\\\ \\beta\_{2a} \\\\ \\beta\_{2b}\\end{bmatrix}\_{t\-1} \+ \\begin{bmatrix}w\_1 \\\\ w\_2 \\\\ w\_3 \\\\ w\_4 \\\\ w\_5 \\\\ w\_6 \\end{bmatrix}\_t\\]
\\\[\\begin{bmatrix}w\_1 \\\\ w\_2 \\\\ w\_3 \\\\ w\_4 \\\\ w\_5 \\\\ w\_6 \\end{bmatrix}\_t \\sim \\text{MVN}\\left(0, \\begin{bmatrix}
q\_a \& c \& 0 \& 0 \& 0 \& 0\\\\
c \& q\_b \& 0 \& 0 \& 0 \& 0 \\\\
0 \& 0 \& q\_1 \& 0 \& 0 \& 0 \\\\
0 \& 0 \& 0 \& q\_2 \& 0 \& 0 \\\\
0 \& 0 \& 0 \& 0 \& q\_3 \& 0 \\\\
0 \& 0 \& 0 \& 0 \& 0 \& q\_4 \\end{bmatrix}\\right)\\]
### 16\.5\.2 Observation model
The observation model will have the following components:
* Each spawner count time series will be treated as independent with independent error (equal or unequal variance).
\\\[\\begin{bmatrix}y\_a \\\\ y\_b\\end{bmatrix}\_t \=
\\begin{bmatrix}
1 \& 0 \& \\sin(2\\pi t/p) \& 0 \& \\cos(2\\pi t/p) \& 0\\\\
0 \& 1 \& 0\&\\sin(2\\pi t/p) \& 0\&\\cos(2\\pi t/p)
\\end{bmatrix} \\begin{bmatrix}
x\_a \\\\ x\_b \\\\
\\beta\_{1a} \\\\ \\beta\_{1b} \\\\
\\beta\_{2a} \\\\ \\beta\_{2b}
\\end{bmatrix}\_t \+ \\mathbf{v}\_t\\]
### 16\.5\.3 Fit model
Set the number of rivers.
```
n <- 2
```
The following code will create the \\(\\mathbf{Z}\\) for a model with \\(n\\) rivers. The first \\(\\mathbf{Z}\\) is shown.
```
Z <- array(1, dim = c(n, n * 3, TT))
Z[1:n, 1:n, ] <- diag(1, n)
for (t in 1:TT) {
Z[, (n + 1):(2 * n), t] <- diag(sin(2 * pi * t/p), n)
Z[, (2 * n + 1):(3 * n), t] <- diag(cos(2 * pi * t/p), n)
}
Z[, , 1]
```
```
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 1 0 0.9510565 0.0000000 0.309017 0.000000
[2,] 0 1 0.0000000 0.9510565 0.000000 0.309017
```
And this code will make the \\(\\mathbf{Q}\\) matrix:
```
Q <- matrix(list(0), 3 * n, 3 * n)
Q[1:n, 1:n] <- "c"
diag(Q) <- c(paste0("q", letters[1:n]), paste0("q", 1:(2 * n)))
Q
```
```
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] "qa" "c" 0 0 0 0
[2,] "c" "qb" 0 0 0 0
[3,] 0 0 "q1" 0 0 0
[4,] 0 0 0 "q2" 0 0
[5,] 0 0 0 0 "q3" 0
[6,] 0 0 0 0 0 "q4"
```
We will write a function to prepare the model matrices and fit. It takes the names of the rivers.
```
fitriver.m <- function(river, p = 5) {
require(tidyr)
require(dplyr)
require(MARSS)
df <- subset(sockeye, region %in% river)
df <- df %>%
pivot_wider(id_cols = brood_year, names_from = "region",
values_from = spawners) %>%
ungroup() %>%
select(-brood_year)
yt <- t(log(df))
TT <- ncol(yt)
n <- nrow(yt)
Z <- array(1, dim = c(n, n * 3, TT))
Z[1:n, 1:n, ] <- diag(1, n)
for (t in 1:TT) {
Z[, (n + 1):(2 * n), t] <- diag(sin(2 * pi * t/p), n)
Z[, (2 * n + 1):(3 * n), t] <- diag(cos(2 * pi * t/p),
n)
}
Q <- matrix(list(0), 3 * n, 3 * n)
Q[1:n, 1:n] <- paste0("c", 1:(n^2))
diag(Q) <- c(paste0("q", letters[1:n]), paste0("q", 1:(2 *
n)))
Q[lower.tri(Q)] <- t(Q)[lower.tri(Q)]
mod.list <- list(U = "zero", Q = Q, Z = Z, A = "zero")
fit <- MARSS(yt, model = mod.list, inits = list(x0 = matrix(0,
3 * n, 1)), silent = TRUE)
return(fit)
}
```
Now we can fit for two (or more) rivers. Note it didn’t quite converge as some of the variances for the \\(\\beta\\)’s are going to 0 (constant \\(\\beta\\) value).
```
river <- unique(sockeye$region)
n <- length(river)
fit <- fitriver.m(river)
```
### 16\.5\.4 Look at the results
We will look at the correlation plot for the trends.
```
require(corrplot)
Qmat <- coef(fit, type = "matrix")$Q[1:n, 1:n]
rownames(Qmat) <- colnames(Qmat) <- river
M <- cov2cor(Qmat)
corrplot(M, order = "hclust", addrect = 4)
```
We can compare to the locations and see that this suggests that there is small scale regional correlation in the spawner counts.
Source: https://fish-forecast.github.io/Fish-Forecast-Bookdown/1-2-the-landings-data-and-covariates.html
1\.2 The landings data and covariates
-------------------------------------
The **FishForecast** package has the following data objects:
* **greeklandings** The 1964 to 2007 total landings data for multiple species. It is stored as a data frame, not a ts object, with a year column, a species column, and columns for landings in metric tons and log metric tons.
* **anchovy** and **sardine** A data frame for the landings (in log metric tons) of each of these species. These are the example catch time series used in the chapters. The data are 1964\-2007; however, Stergiou and Christou used 1964\-1989, and the time series are subsetted to this time period for the examples. These data frames have only year and log.metric.tons columns.
* **anchovyts** and **sardinets** A ts object for the yearly landings (in log metric tons) of these species.
* **anchovy87** and **sardine87** A subsetted data frame with Year \<\= 1987\. This is the training data used in Stergiou and Christou.
* **anchovy87ts** and **sardine87ts** A ts object for the yearly landings (in log metric tons) of these species for 1964\-1987\.
* **ecovsmean.mon** and **ecovsmean.year** The environmental covariates air temperature, pressure, sea surface temperature, vertical wind, and wind speed cubed, averaged monthly and yearly over three 1\-degree boxes in the study area. See the chapter on covariates for details.
* **greekfish.cov** The fisheries covariates on number of boats, horsepower, and fishers.
Load the data by loading the **FishForecast** package and use only the 1964\-1989 landings. We use `subset()` to subset the landings data frame, not `window()`, since that is a function for subsetting ts objects.
```
require(FishForecast)
```
```
## Loading required package: FishForecast
```
```
landings89 = subset(greeklandings, Year <= 1989)
ggplot(landings89, aes(x=Year, y=log.metric.tons)) +
geom_line() + facet_wrap(~Species)
```
Source: https://fish-forecast.github.io/Fish-Forecast-Bookdown/1-3-ts-objects.html
1\.3 ts objects
---------------
A ts object in R is a time series, univariate or multivariate, that has information on the major time step value (e.g. year) and the period of the minor time step, if any. For example, if your data are monthly then the major time step is year, the minor time step is month and the period is 12 (12 months a year). If you have daily data collected hourly then your major time step is day, minor time step is hour and period is 24 (24 hours per day). If you have yearly data collected yearly, your major time step is year, your minor time step is also year, and the period is 1 (1 year per year). You cannot have multiple minor time steps, for example monthly data collected hourly with both daily and hourly periods specified.
The data in a ts object cannot have any missing time steps. For example, if your data were in a data frame with a column for year, you could have a missing year, say no row for year 1988, and the data would still ‘make sense’. The data in a ts object cannot have any missing ‘rows’. If there is no data for a particular year or year/month (if your data are monthly), then that data point must be entered as an NA. You do not need a time step (e.g. year/month) column(s) for a ts object. You only need the starting major time step, the starting minor time step (if not 1\), and the period. All the time values for each data point can be computed from those pieces of information if there are no gaps in your time series. Missing data are fine; they just have to be entered with an NA.
All the non\-seasonal examples shown will work on a plain vector of numbers, and it is not necessary to convert a non\-seasonal time series into a ts object. That said, if you do not convert to a ts object, you will miss out on all the plotting and subsetting functions that are written for ts objects. Also, when you do multivariate regression with covariates, having your data and covariates stored as a ts object will make regressing against lagged covariates (covariate values in the past) easier.
### 1\.3\.1 `ts()` function
To convert a vector of numbers to a ts object, we use the `ts()` function.
```
ts(data = NA, start = 1, end = numeric(), frequency = 1)
```
`start` is a two\-number vector with the first major time step and the first minor time step. If you only pass in one number, then it will use 1 (first minor time step) as the 2nd number in `start`. `end` is specified in exactly the same way, and you only need to specify `start` or `end`, not both. `frequency` is the number of minor time steps per major time step. If you do not pass this in, it will assume that `frequency=1`, i.e. no periods or seasons in your data.
If you specify `frequency=4`, it will assume that the period is quarterly. If you specify that `frequency=12`, it will assume that period is monthly. This just affects the labeling of the minor time step columns and will print your data with 4 or 12 columns. For other frequencies, the data will not be printed with columns for the minor time steps, but the information is there and plotting will use the major steps.
#### Examples
Quarterly data
```
aa <- ts(1:24, start=c(1960,1), frequency=4)
aa
```
```
## Qtr1 Qtr2 Qtr3 Qtr4
## 1960 1 2 3 4
## 1961 5 6 7 8
## 1962 9 10 11 12
## 1963 13 14 15 16
## 1964 17 18 19 20
## 1965 21 22 23 24
```
```
plot(aa, type="p")
```
Monthly data
```
aa <- ts(1:24, start=c(1960,1), frequency=12)
aa
```
```
## Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
## 1960 1 2 3 4 5 6 7 8 9 10 11 12
## 1961 13 14 15 16 17 18 19 20 21 22 23 24
```
```
plot(aa, type="p")
```
Twice\-yearly (semiannual) data
```
aa <- ts(1:24, start=c(1960,1), frequency=2)
aa
```
```
## Time Series:
## Start = c(1960, 1)
## End = c(1971, 2)
## Frequency = 2
## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
```
```
plot(aa, type="p")
```
### 1\.3\.2 ggplot and ts objects
In some ways, plotting a ts object is easy. Just use `plot()` or `autoplot()` and it takes care of the time axis. In other ways, it can be frustrating if you want to alter the defaults.
#### `autoplot()`
`autoplot()` is a ggplot of the ts object.
```
aa <- ts(1:24, start=c(1960,1), frequency=12)
autoplot(aa)
```
and you have access to the usual ggplot functions.
```
autoplot(aa) +
geom_point() +
ylab("landings") + xlab("") +
ggtitle("Anchovy landings")
```
Adding minor tick marks in ggplot is tedious (google if you want that) but adding vertical lines at your minor ticks is easy.
```
aa <- ts(1:24, start=c(1960,1), frequency=12)
vline_breaks <- seq(1960, 1962, by=1/12)
autoplot(aa) +
geom_vline(xintercept = vline_breaks, color ="blue") +
geom_point()
```
### 1\.3\.3 Plotting using a data frame
Often it is easier to work with a data frame (or a tibble) with columns for your major and minor time steps. That way you are not locked into whatever choices the plotting and printing functions use for ts objects. Many plotting functions work nicely with this type of data frame and you have full control over plotting and summarizing your data.
To get dates on the x\-axis, we need to add a date column in date format. Knowing the right format to use for `as.Date()` will take some sleuthing on the internet. The default is `1960-12-31`, so if you get stuck you can always write your dates in that format and use the default. Here I use `1960Jan01` and specify the format for that. I have used the `date_format()` function in the scales package to help format the dates on the x\-axis.
```
aa <- data.frame(
year=rep(1960:1961,each=12),
month = rep(month.abb,2),
val=1:24)
aa$date <- as.Date(paste0(aa$year,aa$month,"01"),"%Y%b%d")
ggplot(aa, aes(x=date, y=val)) + geom_point() +
scale_x_date(labels=scales::date_format("%b-%Y")) +
ylab("landings") + xlab("")
```
| Time Series Analysis and Forecasting |
fish-forecast.github.io | https://fish-forecast.github.io/Fish-Forecast-Bookdown/1-4-packages.html |
1\.4 Packages
-------------
We will mainly be using the forecast (Hyndman et al. [2020](references.html#ref-R-forecast)) and tseries (Trapletti and Hornik [2019](references.html#ref-R-tseries)) packages, with the MARSS (Holmes et al. [2020](references.html#ref-R-MARSS)) package to implement ARMAX models. However, we will also use a variety of other packages, especially for the multivariate regression chapter. So that you can keep track of what package a function comes from, I will use the `::` notation for functions that are *not* from the following standard packages:
* base R
* stats
* ggplot2
Thus, to call function `fun1` from package `pack1`, I will use `pack1::fun1()`. This will make the code more verbose but you will be able to keep track of which function comes from what package.
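For example, a hypothetical call (assuming the **forecast** package is installed and the `anchovyts` object from **FishForecast** is loaded) would look like this:
```
# fit an ARIMA model with the forecast package, called via :: notation
fit <- forecast::auto.arima(anchovyts)
fit
```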
To install the data used in this book along with all the needed packages, install the **FishForecast** package from GitHub. If you are on a Windows machine, you will need to install [Rtools](https://cran.rstudio.com/bin/windows/Rtools/) in order to install packages from GitHub.
To install a package from GitHub, install the **devtools** package and then run
```
library(devtools)
devtools::install_github("Fish-Forecast/FishForecast")
```
Calling
```
library(FishForecast)
```
will then make the data objects available.
#### tidyverse and piping
I will minimize the use of tidyverse and piping. Although the latter can create much more concise code, for beginner R users and programmers, I think it will interfere with learning. I may add the piped versions of the code later. I am not going to be doing much ‘data\-wrangling’. I will assume that your data are in the tidyverse format, though I will not be using tibbles explicitly. Our data are quite simple, so this is not hard. See the chapter on inputting your data.
#### plotting packages
I will use a combination of base plotting and ggplot2 (Wickham et al. [2020](references.html#ref-R-ggplot2)) plotting. Doing a tutorial on basic plotting with ggplot2 may be helpful for the material.
| Time Series Analysis and Forecasting |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/get-up-and-running-with-r-and-rstudio.html |
Get up and running with R and RStudio
=====================================
What is R?
----------
The answer to this question very much depends on who we ask. The geeky answer is something like this…
R is a dialect of the S language, which was developed by John Chambers and colleagues at Bell Laboratories in the mid 1970s. It was designed to offer an interactive computing environment for statisticians and scientists to carry out data analysis. There are essentially two widely used versions of S (though others have started to appear), a commercial one called S\-Plus, and the open source implementation known as R. S\-Plus came first, and although it is still around, it is used less each year. Development of R was begun in the late 1990s by two academics, Ross Ihaka and Robert Gentleman, at the University of Auckland. Their motivation was to create an open source language to enable researchers in computational statistics to explore new ideas. That language quickly evolved into something that looked more and more S\-like, which we now know as R (GNU R, to be overly precise).
We could go on and on about the various features that R possesses. R is a functional programming language, it supports object orientation, etc etc… but these kinds of explanations are only helpful to someone who already knows about computer languages. It is useful to understand why so many people have turned to R to meet their data analysis needs. When a typical R user talks about “R” they are often referring to two things at once, the GNU R language and the ecosystem that exists around the language:
* R is all about data analysis. We can carry out any standard statistical analysis in R, as well as access a huge array of more sophisticated tools with impressive names like “structural equation model”, “random forests” and “penalized regression”. These days, when statisticians and computer scientists develop a new analysis tool, they often implement it in R first. This means a competent R user can always access the latest, cutting edge analysis tools. R also has the best graphics and plotting facilities of any platform. With sufficient expertise, we can make pretty much any type of figure we need (e.g. scatter plots, phylogenetic trees, spatial maps, or even [volcanoes](http://www.r-project.org/screenshots/volcano-image.jpg)). In short, R is a very productive environment for doing data analysis.
* Because R is such a good environment for data analysis, a very large community of users has grown up around it. The size of this community has increased steadily since R was created, but this growth has really picked up in the last 5\-10 years or so. In the early 2000s there were very few books about R and the main way to access help online was through the widely\-feared R mailing lists. Now, there are probably hundreds of books about different aspects of R, online tutorials written by enthusiasts, and many websites that exist solely to help people learn R. The resulting ecosystem is vast, and though it can be difficult to navigate at times, when we run into an R\-related problem the chances are that the answer is already written down somewhere[1](#fn1).
R is not just about data analysis—though we will mostly use it this way. It is a fully\-fledged programming language, meaning that once you become moderately proficient with it you can do things such as construct numerical simulation models, solve equations, query websites, send emails, [access the foaas web service](http://cran.r-project.org/web/packages/rfoaas/), and carry out many other tasks we don’t have time to write down. We won’t do any of this year or next but it is worth noting that R can do much more than just analyse data if we need it to.
### Getting and installing R
R is open source, meaning anyone can download the source code – the collection of computer instructions that define R – and assuming they have enough time, energy and expertise, they are free to alter it as they please. Open source does not *necessarily* mean free, as in it costs £0 to download and use, but luckily R *is* free in this sense. If you’re working on the University managed desktops it should already have been installed and is ready for you to use. We encourage you to install a copy on your own laptop so that you can work at home, in the library, at a café, or wherever else you find you are productive. Do not use R on its own though. Use it in combination with the RStudio IDE discussed in the next section.
In order to install R you need to download the appropriate installer from the Comprehensive R Archive Network ([CRAN](http://cran.r-project.org)). We are going to use the “base distribution” as this contains everything you need to use R under normal circumstances. There is a single [installer](http://cran.r-project.org/bin/windows/base/) for Windows. On a Mac, it’s important to match the [installer](http://cran.r-project.org/bin/macosx/) to the version of OS X. In either case, R uses the standard install mechanism that should be familiar to anyone who has installed an application on their machine. There is no need to change the default settings—doing so will probably lead to problems later on.
Go ahead and install R on your own computer now. You won’t be able to make much use of this book without it.
After installing R it should be visible in the Programs menu on a Windows computer or in the Applications folder on a Mac. However, it would be a good idea to read the next section before launching R…
What is RStudio (and why use it)?
---------------------------------
R and RStudio are not the same thing. We can run R without RStudio if we need to, but we cannot run RStudio without R. Remember that! R is essentially just a computer program that sits there and waits for instructions in the form of text. Those instructions can be typed in by a user like you or me, or they can be sent to it from another program. This means you can run R in a variety of different environments. The job of RStudio is to provide an environment that makes R a more pleasant and productive tool. One way to get a sense of why RStudio is a Very Good Thing is to look at what running R without it is like. The simplest way to run it on a Linux or Unix\-based machine (like a Mac) is to use something called the Terminal. It’s well beyond the scope of this book to get into what this is, but in a nutshell, the Terminal provides a low\-level, text\-based way to interact with a computer. Here is what R looks like running inside a Terminal on a Mac:
We can run R in much the same way on Windows using the “Command Prompt” if we need to. The key thing you need to take away from that screenshot is that running R like this is very “bare bones”. We typed the letter “R” in the Terminal and hit Enter to start R. It printed a little information as it started up and then presented us with “the prompt” (`>`), waiting for input. This is where we type or paste in instructions telling R what to do. There is no other way to interact with it when we run R like this – no menus or buttons, just a lonely prompt.
The developers of R on Windows PCs and Macs provide a slightly nicer way to work with R. When we download and install R for either of these two operating systems, in addition to the basic R program that we just saw running in a Terminal, we also get another program that acts as a [Graphical User Interface](http://en.wikipedia.org/wiki/Graphical_user_interface) (GUI) for R. This is the thing labelled “R” in the Programs menu on a Windows computer or the Applications folder on a Mac. If you launch the R GUI on your computer you will be presented with roughly the same thing on either a Windows PC or a Mac. There will be something called the Console, which is where you interact directly with R by typing things at the prompt (which looks like this: `>`), and a few buttons and menus for managing common tasks. We will not go through these two GUIs in any more detail because we are not going to use them. We just need to know they exist so we don’t confuse them with RStudio.
So what is RStudio? The first thing to note is that it is a different program from R. Remember that! RStudio is installed separately from R and occupies its own place in the Programs menu (Windows PC) or Applications folder (Mac). In one sense RStudio is just another Graphical User Interface for R which improves on the “bare bones” experience. However, it is a GUI on steroids. It is more accurate to describe it as an [Integrated Development Environment](http://en.wikipedia.org/wiki/Integrated_development_environment) (IDE). There is no all\-encompassing definition of an IDE, but they all exist to make programmers’ lives easier by integrating various useful tools into a single piece of software. From the perspective of this book, there are four key features that we care about:
* The R interpreter—the thing that was running in the Terminal above—runs inside RStudio. It’s accessed via a window labelled Console. This is where we type in instructions we want to execute when we are working directly with R. The Console also shows us any output that R prints in response to these instructions. So if we just want the “bare bones” experience, we can still have it.
* RStudio provides facilities for working with R programs using something called a Source Code Editor. An R program (also called a “script”) is just a collection of instructions in the R language that have been saved to a text file. Nothing more! However, it is much easier to work with a script using a proper Source Code Editor than an ordinary text editor like Notepad.
* A good IDE like RStudio also gives you a visual, point\-and\-click means of accessing various language\-specific features. This is a bit difficult to explain until we have actually used some of these, but trust us, being able to do things like manage packages, set working directories, or inspect objects we’ve made simplifies day\-to\-day use of R. This is especially true for new users.
* RStudio is cross\-platform—it will run on a Windows PC, a Linux PC or a Mac. In terms of the appearance and the functionality it provides, RStudio is exactly the same on each of these platforms. If we learn to work with R via RStudio on a Windows PC, it’s no problem migrating to a Mac or Linux PC later on if we need to. This is a big advantage for those of us who work on multiple platforms.
We’re only going to scratch the surface of what RStudio can do and there are certainly alternative bits of software that could meet our immediate needs. The reason for introducing a powerful tool like RStudio is that one day you may need to access things like debugging facilities, package\-building tools, or repository management. RStudio makes it easy to use these advanced tools.
### Getting and installing RStudio
RStudio is developed and maintained by a for\-profit company called… RStudio. They make their money by selling software tools and services related to R and RStudio. The basic desktop version of RStudio is free to download and use though. It can be downloaded from the RStudio [download page](http://www.rstudio.com/products/RStudio/#Desk). The one to go for is the Open Source Edition of RStudio Desktop, **not** the commercial version of RStudio Desktop. RStudio installs like any other piece of software, so there’s nothing to configure after installation.
If you haven’t already done it, go ahead and install RStudio Desktop on your own computer. You are going to need it.
### The anatomy of RStudio
Once it’s installed, RStudio is run like any other stand\-alone application, via the Programs menu or the Applications folder on a Windows PC or Mac, respectively[2](#fn2). We’ll say this one last time—RStudio only works if we’ve also installed R. Here is how RStudio appears the first time it runs:
There are three panes inside a single window, which we have labelled with red numbers. Each of these has a well\-defined purpose. Let’s take a quick look at these:
1. The large window on the left is the Console. We have already told you what this is for—the Console lets you know what R is doing and provides a mechanism to interact with R by typing instructions. All this happens at the prompt, `>`. We will be working in the Console in a moment so won’t say any more about this here.
2. The window at the top right contains two tabs. The first of these, labelled **Environment**, allows us to see all the different R objects we can access. There are also some buttons that help us to get data into and out of R. The second, labelled **History**, allows us to see a list of instructions we’ve previously sent to R. The buttons in this tab allow us to reuse or save these instructions.
3. The window at the bottom right contains five tabs. The first, labelled **Files**, gives us a way to interact with the files and folders. The next tab, labelled **Plots**, is where any figures we produce are displayed. This tab also allows you to save your figures to file. The **Packages** tab is where we view, install and update packages used to extend the functionality of R. The **Help** tab is where you can access and display various different help pages. The **Viewer** is essentially an embedded web browser for working with interactive output—we won’t be using it in this course.
Don’t be alarmed if RStudio looks different on your computer. There are a couple of reasons why this might be the case. First, the appearance of RStudio is highly customisable. Take a quick look at the `Tools > Global Options...` window to see what we mean. Second, there is a fourth window that is sometimes visible when we work with RStudio—the Source Code Editor we mentioned above. RStudio saves its state between different sessions, so if we have already messed about with RStudio’s appearance or left a script open last time we used it you will see these changes.
#### RStudio will change over time
Keep in mind that RStudio is very actively developed, which means features tend to appear or change over time. Consequently, if you update it regularly expect the odd thing to change here and there. This is generally a good thing—it usually means new features have been added—but it does require you to occasionally adjust to new additions.
Working at the Console
----------------------
R was designed to be used interactively—it is what is known as an **interpreted language**, which we can interact with via something called a Command Line Interface (CLI). This is just a fancy way of saying that we can type instructions to “do something” directly into the Console and those instructions will then be interpreted when we hit the Enter key. If our R expression does not contain any errors, R will then do something like read in some data, perform a calculation, make a figure, and so on. What actually happens obviously depends on what we ask it to do.
Let’s briefly see what all this means by doing something very simple with R. Type `1 + 3` at the Console and hit the Enter key:
```
1+3
```
```
## [1] 4
```
The first line above just reminds us what we typed into the Console. The line after that beginning with `##` shows us what R printed to the Console after reading and evaluating our instructions.
What just happened? We can ignore the `[1]` bit for now (the meaning of this will become clear later in the course). What are we left with? The number 4\. The instruction we gave R was in effect “evaluate the expression `1 + 3`”. R read this in, decided it was a valid R expression, evaluated the expression, and then printed the result to the Console for us. Unsurprisingly, the expression `1 + 3` is a request to add the numbers 1 and 3, and so R prints the number 4 to the Console.
OK, that was not very exciting. In the next chapter we will start learning to use R to carry out more useful calculations. The important take\-away from this is that this sequence of events—reading instructions, evaluating those instructions and printing their output—happens every time we type or paste something into the Console and hit Enter. The printing bit is optional by the way. Whether or not it happens depends on whether you decide to capture the output or not. Just remember, if R does not print anything to the Console it does not necessarily mean nothing has happened.
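As a small illustration of my own (it uses the assignment arrow `<-`, which is covered properly in the next chapter), capturing a result suppresses the printing:
```
x <- 1 + 3   # the result is stored in x; nothing is printed
x            # typing the name prints the stored value
```
```
## [1] 4
```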
Why do we keep using that word *expression*? It has a very specific meaning in computer science. The [Wikipedia page](http://en.wikipedia.org/wiki/Expression_(computer_science)) says:
> An expression in a programming language is a combination of explicit values, constants, variables, operators, and functions that are interpreted according to the particular rules of precedence and of association for a particular programming language, which computes and then produces another value.
That probably doesn’t make much sense, but it at least demonstrates why we don’t let computer scientists teach biologists about programming. In simple terms, an R expression is a small set of instructions written in human readable(ish) text that tell R to do something. That’s it. We could write “instructions” instead of “expressions” throughout this book but we may as well use the correct word. Whatever we call them, our aim is to learn how to combine sequences of expressions to Get Things Done in R. That’s what this book is about.
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/a-quick-introduction-to-r.html |
Chapter 1 A quick introduction to R
===================================
1\.1 Using R as a big calculator
--------------------------------
### 1\.1\.1 Basic arithmetic
The end of the [Get up and running with R and RStudio](get-up-and-running-with-r-and-rstudio.html#get-up-and-running-with-r-and-rstudio) chapter demonstrated that R can handle familiar arithmetic operations: addition, subtraction, multiplication, division. If we want to add or subtract a pair of numbers just place the `+` or `-` symbol in between two numbers, hit Enter, and R will read the expression, evaluate it, and print the result to the Console. This works exactly as we expect it to:
```
3 + 2
```
```
## [1] 5
```
```
5 - 1
```
```
## [1] 4
```
Multiplication and division are no different, though we don’t use `x` or `÷` for these operations. Instead, we use `*` and `/` to multiply and divide:
```
7 * 2
```
```
## [1] 14
```
```
3 / 2
```
```
## [1] 1.5
```
We can also exponentiate numbers: raise one number to the power of another. We use the `^` operator to do this:
```
4^2
```
```
## [1] 16
```
This raises 4 to the power of 2 (i.e. we squared it). In general, we can raise a number `x` to the power of `y` using `x^y`. Neither `x` nor `y` needs to be a whole number either.
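For instance (a quick illustration of my own), a fractional exponent gives a root:
```
2^0.5
```
```
## [1] 1.414214
```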
Arithmetic operations can also be combined into one expression. Assume we want to subtract 6 from \\(2^3\\). The expression to perform this calculation is:
```
2^3 - 6
```
```
## [1] 2
```
\\(2^3\=8\\) and \\(8\-6\=2\\). Simple enough, but what if we had wanted to carry out a slightly longer calculation that required the last answer to then be divided by 2? This is the **wrong** way to do it:
```
2^3 - 6 / 2
```
```
## [1] 5
```
The answer we were looking for is \\(1\\). So what happened? R evaluated \\(6/2\\) first and then subtracted this answer from \\(2^3\\).
If that’s obvious, great. If not, it’s time to learn a bit about the **order of precedence** used by R. R uses a standard set of rules to decide the order in which arithmetic calculations feed into one another so that it can unambiguously evaluate any expression. It uses the same order as every other computer language, which thankfully is the same one we all learned in mathematics class at school. The order of precedence used is:
1. exponents and roots (“taking powers”)
2. multiplication and division
3\. addition and subtraction
#### BODMAS and friends
If you find it difficult to remember the order of precedence used by R, there are a load of [mnemonics](http://en.wikipedia.org/wiki/Order_of_operations#Mnemonics) that can help. Pick one you like and remember that instead.
In order to get the answer we were looking for we need to take control of the order of evaluation. We do this by grouping the necessary bits of the calculation inside parentheses (“round brackets”). That is, we place `(` and `)` either side of them. The order in which expressions inside different pairs of parentheses are evaluated follows the rules we all had to learn at school. The R expression we should have used is therefore:
```
(2^3 - 6) / 2
```
```
## [1] 1
```
We can use more than one pair of parentheses to control the order of evaluation in more complex calculations. For example, if we want to find the cube root of 2 (i.e. \\(2^{1/3}\\)) rather than \\(2^3\\) in that last calculation we would instead write:
```
(2^(1/3) - 6) / 2
```
```
## [1] -2.370039
```
The parentheses around the `1/3` in the exponent are needed to ensure this is evaluated prior to being used as the exponent.
### 1\.1\.2 Problematic calculations
Now is a good time to highlight how R handles certain kinds of awkward numerical calculations. One of these involves division of a number by 0\. Some programming languages will respond to an attempt to do this with an error. R is a bit more forgiving:
```
1/0
```
```
## [1] Inf
```
Mathematically, division of a finite non\-zero number by `0` equals A Very Large Number: infinity. R has a special built\-in data value that allows it to handle this kind of thing. This is `Inf`, which of course stands for “infinity”. The other special kind of value we sometimes run into can be generated by numerical calculations that don’t have a well\-defined result. For example, it arises when we try to divide 0 or infinity by themselves:
```
0/0
```
```
## [1] NaN
```
The `NaN` in this result stands for Not a Number. R produces `NaN` because \\(0/0\\) is not defined mathematically: it produces something that is Not a Number. The reason we are pointing out `Inf` and `NaN` is not because we expect to use them. It’s important to know what they represent because they often arise as a result of a mistake somewhere in a program. It’s hard to track down such mistakes if we don’t know how `Inf` and `NaN` arise.
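As a minimal sketch of my own (all base R functions), here is how to test for these special values when tracking down such mistakes:
```
x <- 1/0
y <- 0/0
is.infinite(x)   # TRUE -- x is Inf
is.nan(y)        # TRUE -- y is NaN
is.na(y)         # also TRUE: NaN counts as a missing value
```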
That is enough about using R as a calculator for now. What we’ve seen—even though we haven’t said it yet—is that R functions as a REPL: a read\-eval\-print loop (there’s no need to remember this term). R takes user input, evaluates it, prints the results, and then waits for the next input. This is handy, because it means we can use it interactively, working through an analysis line\-by\-line. However, to use R to solve complex problems we need to learn how to store and reuse results. We’ll look at this in the next section.
#### Working efficiently at the Console
Working at the Console soon gets tedious if we have to retype similar things over and over again. There is no need to do this though. Place the cursor at the prompt and hit the up arrow. What happens? This brings back the last expression sent to R’s interpreter. Hit the up arrow again to see the last\-but\-one expression, and so on. We go back down the list using the down arrow. Once we’re at the line we need, we use the left and right arrows to move around the expression and the delete key to remove the parts we want to change. Once an expression has been edited like this we hit Enter to send it to R again. Try it!
1\.2 Storing and reusing results
--------------------------------
So far we’ve not tried to do anything remotely complicated or interesting, though we now know how to construct longer calculations using parentheses to control the order of evaluation. This approach is fine if the calculation is very simple. It quickly becomes unwieldy for dealing with anything more. The best way to see what we mean is by working through a simple example—solving a quadratic equation. A quadratic equation looks like this: \\(ax^2 \+ bx \+ c \= 0\\). If we know the values of \\(a\\), \\(b\\) and \\(c\\) then we can solve this equation to find the values of \\(x\\) that ensure the left hand side equals the right hand side. Here’s the well\-known formula for these solutions: \\\[
x \= \\frac{\-b\\pm\\sqrt{b^2\-4ac}}{2a}
\\] Let’s use R to calculate these solutions for us. Say that we want to find the solutions to the quadratic equation when \\(a\=1\\), \\(b\=6\\) and \\(c\=5\\). We just have to turn the above equation into a pair of R expressions:
```
(-6 + (6^2 -4 * 1 * 5)^(1/2)) / (2 * 1)
```
```
## [1] -1
```
```
(-6 - (6^2 -4 * 1 * 5)^(1/2)) / (2 * 1)
```
```
## [1] -5
```
The output tells us that the two values of \\(x\\) that satisfy this particular quadratic equation are \-1 and \-5\. What should we do if we now need to solve a different quadratic equation? Working at the Console, we could bring up the expressions we typed (using the up arrow) and then go through each of these, changing the numbers to match the new values of \\(a\\), \\(b\\) and \\(c\\). Editing individual expressions like this is fairly tedious, and more importantly, it’s fairly error prone because we have to make sure we substitute the new numbers at exactly the right positions.
A partial solution to this problem is to store the values of \\(a\\), \\(b\\) and \\(c\\). We’ll see precisely why this is useful in a moment. First, we need to learn how to store results in R. The key to this is to use the **assignment operator**, written as a left arrow `<-`. Sticking with our original example, we need to store the numbers 1, 6 and 5\. We do this using three expressions, one after the other:
```
a <- 1
```
```
b <- 6
```
```
c <- 5
```
Notice that we don’t put a space between `<` and `-`—if we do, R reads it as a ‘less than’ comparison with a negative number rather than an assignment. R didn’t print anything to screen, so what actually happened? We asked R to first evaluate the expression on the right hand side of each `<-` (just a number in this case) and then **assign the result** of that evaluation instead of printing it. Each result has a name associated with it, which appears on the left hand side of the `<-`.
#### RStudio shortcut
We use the assignment operator `<-` all the time when working with R, and because it’s inefficient to have to type the `<` and `-` characters over and over again, RStudio has a built\-in shortcut for typing the assignment operator: Alt \+ `-` . Try it. Move the cursor to the Console, hold down the Alt key (‘Option’ on a Mac), and press the `-` sign key. RStudio will auto\-magically insert `<-`.
The net result of all this is that we have stored the numbers 1, 6 and 5 somewhere in R, associating them with the letters `a`, `b` and `c`, respectively. What does this mean? Here’s what happens if we type the letter `a` into the Console and hit Enter:
```
a
```
```
## [1] 1
```
It looks the same as if we had typed the number `1` directly into the Console. The result of typing `b` or `c` is hopefully obvious. What we just did was to store the output that results from evaluating three separate R expressions, associating each with a name so that we can access them again[3](#fn3).
Whenever we use the assignment operator `<-` we are telling R to keep whatever kind of value results from the calculation on the right hand side of `<-`, giving it the name on the left hand side so that we can access it later. Why is this useful? Let’s imagine we want to do more than one thing with our three numbers. If we want to know their sum or their product we can now use:
```
a + b + c
```
```
## [1] 12
```
```
a * b * c
```
```
## [1] 30
```
So once we’ve stored a result and associated it with a name we can reuse it wherever it’s needed. Returning to our motivating example, we can now calculate the solutions to the quadratic equation by typing these two expressions into the Console:
```
(-b + (b^2 -4 * a * c)^(1/2)) / (2 * a)
```
```
## [1] -1
```
```
(-b - (b^2 -4 * a * c)^(1/2)) / (2 * a)
```
```
## [1] -5
```
Imagine we’d like to find the solutions to a different quadratic equation where \\(a\=1\\), \\(b\=5\\) and \\(c\=5\\). We just changed the value of \\(b\\) here to keep things simple. To find our new solutions we have to do two things. First we change the value of the number associated with `b`…
```
b <- 5
```
…then we bring up those lines that calculate the solutions to the quadratic equation and run them, one after the other:
```
(-b + (b^2 -4 * a * c)^(1/2)) / (2 * a)
```
```
## [1] -1.381966
```
```
(-b - (b^2 -4 * a * c)^(1/2)) / (2 * a)
```
```
## [1] -3.618034
```
We didn’t have to retype those two expressions. We could just use the up arrow to bring each one back to the prompt and hit Enter. This is much simpler than editing the expressions. More importantly, we are beginning to see the benefits of using something like R: we can break down complex calculations into a series of steps, storing and reusing intermediate results as required.
1\.3 How does assignment work?
------------------------------
It’s important to understand, at least roughly, how assignment works. The first thing to note is that when we use the assignment operator `<-` to associate names and values, we informally refer to this as creating (or modifying) **a variable**. This is much less tedious than using words like “bind”, “associate”, “value”, and “name” all the time. Why is it called a variable? What happens when we run these lines:
```
myvar <- 1
myvar <- 7
```
The first time we used `<-` with `myvar` on the left hand side we **created** a variable `myvar` associated with the value 1\. The second line `myvar <- 7` **modified** the value of `myvar` to be 7\. This is why we refer to `myvar` as a variable: we can change its value as we please. What happened to the old value associated with `myvar`? In short, it is gone, kaput, lost… forever. The moment we assign a new value to `myvar` the old one is destroyed and can no longer be accessed. Remember this.
Keep in mind that the expression on the right hand side of `<-` can be any kind of calculation, not just a number. For example, if I want to store the number 1, associating it with `answer`, I could do this:
```
answer <- (1 + 2^3) / (2 + 7)
```
That is a strange way to assign the number 1, but it illustrates the point. More generally, as long as the expression on the right hand side generates an output it can be used with the assignment operator. For example, we can create new variables from old variables:
```
newvar <- 2 * answer
```
What happened here? Start at the right hand side of `<-`. The expression on this side contained the variable `answer` so R went to see if `answer` actually exists in the global environment. It does, so it then substituted the value associated with `answer` into the requested calculation, and then assigned the resulting value of 2 to `newvar`. We created a new variable `newvar` using information associated with `answer`.
Now look at what happens if we just copy a variable using the assignment operator:
```
myvar <- 7
mycopy <- myvar
```
At this point we have two variables, `myvar` and `mycopy`, each associated with the number 7\. There is something very important going on here: each of these is associated with a **different copy** of this number. If we change the value associated with one of these variables it does not change the value of the other, as this shows:
```
myvar <- 10
```
```
myvar
```
```
## [1] 10
```
```
mycopy
```
```
## [1] 7
```
R always behaves like this unless we work hard to alter this behaviour (we never do this in this book). So remember, every time we assign one variable to another, we actually make a completely new, independent copy of its associated value. For our purposes this is a good thing because it makes it much easier to understand what a long sequence of R expressions will do. That probably doesn’t seem like an obvious or important point, but trust us, it is.
1\.4 Global environment
-----------------------
Whenever we associate a name with a value we create a copy of both these things somewhere in the computer’s memory. In R the “somewhere” is called an environment. We aren’t going to get into a discussion of R’s many different kinds of environments—that’s an advanced topic well beyond the scope of this book. The one environment we do need to be aware of though is the **Global Environment**.
Whenever we perform an assignment in the Console the name\-value pair we create (i.e. the variable) is placed into the Global Environment. The current set of variables are all listed in the **Environment** tab in RStudio. Take a look. Assuming that at least one variable has been made, there will be two columns in the **Environment** tab. The first shows us the names of all the variables, while the second summarises their values.
#### The Global Environment is temporary
By default, R will save the Global Environment whenever we close R down and then restore it in the next R session. It does this by writing a copy of the Global Environment to disk. In theory this means we can close down R, reopen it, and pick things up from where we left off. Don’t do this—it only increases the risk of making a serious mistake. Assume that when R and RStudio are shut down, everything in the Global Environment will be lost.
1\.5 Naming rules and conventions
---------------------------------
We don’t have to use a single letter to name things in R. The words `tom`, `dick` and `harry` could be used in place of `a`, `b` and `c`. It might be confusing to use them, but `tom`, `dick` and `harry` are all legal names as far as R is concerned:
* A legal name in R is any sequence of letters, numbers, `.`, or `_`, but the sequence of characters we use must begin with a letter. Both upper and lower case letters are allowed. For example, `num_1`, `num.1`, `num1`, `NUM1`, `myNum1` are all legal names, but `1num` and `_num1` are not because they begin with `1` and `_`.
* R is case sensitive—it treats upper and lower case letters as different characters. This means that `num` and `Num` are treated as distinct names. Forgetting about case sensitivity is a good way to create errors when using R (see the short example after this list). Try to remember that.
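Here is the short example promised above (our own, not from the original text), showing how case sensitivity trips us up:
```
num <- 5   # create a variable with an all lower case name
Num        # error: object 'Num' not found, because 'Num' is a different name
```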
#### Don’t begin a name with `.`
We are allowed to begin a name with a `.`, but this usually is A Bad Idea. Why? Because variable names that begin with `.` are hidden from view in the Global Environment—the value it refers to exists but it’s invisible. This behaviour exists to allow R to create invisible variables that control how it behaves. This is useful, but it isn’t really meant to be used by the average user.
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/scripts.html |
Chapter 2 Building scripts
==========================
2\.1 Introduction
-----------------
We’ve seen that using variables is useful because it enables us to break down a problem into a series of simpler steps. However, so far we’ve only been working in the Console. If we want to reuse a calculation when we’re working like this, we have to change a variable or two and then evaluate the expressions that do the job of solving an equation, making a graph, whatever. We also have to do all of this in the correct order, or things will not work as intended.
We can see that working in the Console is not going to be practical most of the time. So what should we do? The answer is: put our sequence of R expressions into a text file, called a **script**. Calling it a script makes it sound a bit fancy and clever—“I spent all day debugging my script”. It is not. It is a boring text file that could be opened up in something like Notepad.exe. We just call it a script to signify the fact that the text contained in the file is a series of instructions telling our computer to do something.
Working directly at the Console is the simplest way to use R, but we do not recommend working this way unless you only need to do something very simple that involves a handful of steps. For more complicated activities you should always store your instructions in a script.
2\.2 Writing scripts in RStudio
-------------------------------
To open a new script in RStudio navigate to `File > New File > R Script`. This will open the new file in a fourth pane. This pane is the Source Code Editor we mentioned in the [Get up and running with R and RStudio](get-up-and-running-with-r-and-rstudio.html#get-up-and-running-with-r-and-rstudio) chapter. The name of the tab where this new file lives will be set to **Untitled1** if we haven’t opened any other new R Scripts. Here is what RStudio looks like after we do this (we’ve highlighted the new pane with a red asterisk):
When we work with a script we type the required sequence of R expressions into the Editor pane, **not directly into the Console**. This is important—if we mix and match, mistakes happen. The worst of these is that we write a script that seems to work, only to find it is broken when we open it up and use it again later. This usually happens because we typed something into the Console that was needed to make the whole script run while we were preparing it, but then forgot to put it into the script. Just don’t switch between using the Console and the Editor to avoid this.
The easiest way to appreciate the benefits of using a script is to work with one. Here are a few lines of R code to copy\-paste into the new Editor pane…
```
# coefficients of the quadratic equation a*x^2 + b*x + c = 0
a <- 1
b <- 6
c <- 5
# square root of the discriminant, used by both solutions
sqrt.b2ac <- (b^2 -4 * a * c)^(1/2)
# the two solutions
(-b + sqrt.b2ac) / (2 * a)
(-b - sqrt.b2ac) / (2 * a)
```
…and here’s a partial screenshot of how the Editor pane might look (the details depend on how RStudio is set up):
Notice that parts of the R code are formatted by colour. This is called **syntax highlighting**. Syntax highlighting is a must\-have feature of any Editor. In a nutshell, syntax is a bit like the grammar of a computer language. It is the set of rules that determine how we form valid expressions, assign variables, and so on. The purpose of syntax highlighting is to draw attention to different components of syntax. We can see that when we use the Cobalt highlighting option, RStudio sets the background to black and displays variables in white, parentheses and arithmetic operators in orange, and numbers in red. It doesn’t matter so much what the colours are. What matters is that we have a visual means to distinguish these different kinds of elements, making it much easier to read a script.
#### Choose your own colour scheme
The first thing you will probably notice is that this Editor looks a little different from yours. We said earlier that RStudio was highly customisable. What we did above was change the way it does something called **syntax highlighting**. You can do this by navigating to `Tools > Global Options…`, selecting the `Appearance` button, and picking the `Cobalt` option under `Editor theme` (try it!).
The other elements RStudio has highlighted are shown in blue. We added these ourselves. They are called comments. Comments in R always start with a `#` symbol—this is called the “hash” symbol (also known as the ‘pound’ symbol to North Americans of a certain age). **Lines that start with `#` are completely ignored by R**. They exist only to allow us, the developers of a script, to add notes and explanations that remind us how it all works.
#### Comments are important
At this point we just want to emphasise that **you should always use comments in your scripts** to remind yourself what your R code is supposed to be doing. Use them liberally to help you understand the logic of each script you write. This is another “take our word for it” situation – if you do not use comments, then when you come back to your precious script in a few weeks/months/years time you will have no idea what it does.
2\.3 Running scripts in RStudio
-------------------------------
The whole point of writing a script is ultimately to run it. The phrase “run our code” is shorthand for “send a number of R expressions to the R interpreter to be read and evaluated”. The latter is tedious to write (and read) over and over again, so we will just write “run your/my/our code”. We could run the code in the above script by copying and pasting it into the Console, but this is inefficient. Instead of relying on cut and paste, RStudio gives us different ways to run our code:
* There is a `Run` button at the top right of the Editor pane. As we might imagine, clicking on this will run some code. If we haven’t highlighted anything in the Editor, this runs whichever line the cursor is at, i.e. it runs just that one line. If we had highlighted a region inside the Editor, this button will run all of that in sequence.
* No one likes clicking buttons. Luckily, pressing Control\+Enter (or Command\+Enter on a Mac) does exactly the same thing as the `Run` button. It also uses the same rules to decide which bits of code to run or not[4](#fn4).
Now that we know how to ‘run our code’, we can run every line in the script we just started. Here’s what should happen at the Console when we do this:
```
a <- 1
b <- 6
c <- 5
sqrt.b2ac <- (b^2 -4 * a * c)^(1/2)
```
```
(-b + sqrt.b2ac) / (2 * a)
```
```
## [1] -1
```
```
(-b - sqrt.b2ac) / (2 * a)
```
```
## [1] -5
```
This works exactly as though we typed or pasted the sequence of R expressions into the Console, hitting Enter each time we get to the end of a line. What this means is that we can use this script to find the solutions to any quadratic equation with ‘real roots’. All we have to do is edit the values assigned to `a`, `b` and `c` and then rerun **the whole script**. We can’t just rerun bits of it because everything is designed to work together, in sequence.
Now that we have a script that does something a little bit useful we might wish to reuse it at some point. It’s just a text file, so we can save the script as we would any other file. We can do this using the familiar menu\-based approach (`File > Save As...`) or via the keyboard shortcut `Control+S` (or `Command+S` on a Mac). The only thing to keep in mind is that we must use the file extension `.R` or `.r` when we name the file, e.g. `my_great_script.R`. This is because RStudio uses the file extension to detect the fact that a file is an R script and not an ordinary text file. If we don’t do this, then next time we open up the file in RStudio we won’t be able to access the fancy Editor features like syntax highlighting, nor will we be able to send lines to the Console without using copy\-paste.
From now on always work with scripts. No more typing into the Console!
2\.4 Spotting problems
----------------------
We may as well get something out of the way early on. It’s painfully easy to accidentally ask R to do something that contains an error of some kind. Mistakes happen all the time when writing R code—everyone does it. It’s not a problem when this happens. When it does though, it’s important to step back and work out what went wrong.
### 2\.4\.1 The dreaded `+`
Be careful when highlighting code to run. RStudio will run exactly the text that is highlighted. If we start or finish the highlighted region in the middle of an expression then one of three things will usually happen. If we’re lucky we’ll generate an error because we ran an invalid partial expression. We say this is lucky because the error will at least be easy to spot. If we’re unlucky, we might end up running part of an expression that *is itself* a valid expression, but does not do what we had intended. This is harder to spot because it won’t generate an error, and it will probably create problems further down the line.
The third outcome is that the Console will look something like this:
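Something along these lines (an illustration of our own, shown with the Console prompts, standing in for the screenshot in the original):
```
> (2^3 - 6
+
```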
What happened? Look carefully at the little snippet of R code we sent to the Console. It’s not a complete R expression, because it is missing a closing parenthesis: `)`. When R receives only part of an expression like this, which has correct syntax but is not complete, it sits and waits for the rest of the expression. This is what the `+` at the Console signifies. When we see this we have two options. We can manually type in the missing part of the expression and hit Enter, or (better) we can hit the Escape key to return to the prompt `>` and start again. The first option is rather error prone so we would generally prefer the latter.
### 2\.4\.2 Errors
Here is an example of what happens at the Console when we generate an error:
```
xyz + 2
```
```
## Error in eval(expr, envir, enclos): object 'xyz' not found
```
In general terms, what happened is that R read in the instruction `xyz + 2`, tried to evaluate it, and found it could not. This is because the variable `xyz` does not exist, i.e. we never made a variable called `xyz`. Upon running into the error, R printed something to the screen to tell us we’ve made a mistake (“Error: object ‘xyz’ not found”).
When this happens we say R has ‘thrown an error’. We know it’s an error because the message is in a warning colour (probably red or orange—it depends how RStudio is set up) and contains the word `Error`. The bit after the `:` is an attempt by R to tell us what went wrong. Always **read the error messages**. They will be incomprehensible at first, but they will eventually start to make more sense and become helpful (usually—sometimes they make no sense whatsoever, even to experienced users). This way of learning only works if we read the error messages in the first place though.
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/using-functions.html |
Chapter 3 Using functions
=========================
3\.1 Introduction
-----------------
Functions are a basic building block of any programming language. To use R effectively—even if our needs are very simple—we need to understand how to use functions. We are not aiming to unpick the inner workings of functions in this course[5](#fn5). The aim of this chapter is to explain what functions are for, how to use them, and how to avoid mistakes when doing so, without worrying too much about how they actually work.
3\.2 Functions and arguments
----------------------------
The job of each function in R is to carry out some kind of calculation or computation that would typically require many lines of R code to do “from scratch”. Functions allow us to reuse common computations while offering some control over the precise details of what actually happens. The best way to see what we mean by this is to see one in action. The `round` function is used to round one or more number(s) to a chosen number of decimal places. To use it, we could type this into the Console and hit Enter:
```
round(x = 3.141593, digits = 2)
```
We have suppressed the output for now so that we can unpack things a bit first. Every time we use a function we always have to work with the same basic construct (there are a few exceptions, but we can ignore these for now). We start with the name of the function, that is, we use the name of the function as the prefix. In this case, the function name is `round`. After the function name, we need a pair of opening and closing parentheses. It is this combination of name and parentheses that alerts R to the fact that we are trying to use a function. Whenever we see a name followed by opening and closing parentheses we’re seeing a function in action.
What about the bits inside the parentheses? These are called the **arguments** of the function. That is a horrible name, but it is the one that everyone uses so we have to get used to it. Depending on how it was defined, a function can take zero, one, or more arguments. We will discuss this idea in more detail later in this section.
In the simple example above, we used the `round` function with two arguments. Each of these was supplied as a name\-value pair, separated by a comma. When working with arguments, name\-value pairs occur either side of the equals (`=`) sign, with the **name** of the argument on the left hand side and the **value** it takes on the right hand side (notice that the syntax highlighter we used to make this website helpfully colours the argument names differently from the values). The name serves to identify which argument we are working with, and the value is the thing that controls what that argument does in the function.
We refer to the process of associating argument names and values as “supplying the arguments” of the function (sometimes we also say “setting the arguments”). Notice the similarity between supplying function arguments and the assignment operation discussed in the last topic. The difference here is that name\-value pairs are associated with the `=` symbol. This association is also temporary: it only lasts as long as it takes for the function to do whatever it does.
#### Use `=` to assign arguments
**Do not** use the assignment operator `<-` inside the parentheses when working with functions. This is a “trust us” situation: you will end up in all kinds of difficulty if you do this.
The names of the arguments that we are allowed to use are typically determined for us by the function. That is, we are not free to choose whatever name we like. We say “typically”, because R is a very flexible language and so there are certain exceptions to this simple rule of thumb. For now it is simpler to think of the names as constrained by the particular function we’re using. The arguments control the behaviour of a function. Our job as users is to set the values of these to get the behaviour we want. By now it is probably fairly obvious what is going to happen when we use the `round` function like this at the Console:
```
round(x = 3.141593, digits = 2)
```
```
## [1] 3.14
```
Remember, we said the `round` function rounds one or more numbers to a chosen number of decimal places. The argument that specifies the focal number(s) is `x`; the second argument, `digits`, specifies the number of decimal places we require. Based on the supplied values of these arguments, `3.141593` and `2`, respectively, the `round` function spits out a value of `3.14`, which is then printed to the Console. If we had wanted the answer to 3 decimal places we would use `digits = 3`. This is what we mean when we say the values of the supplied arguments control the behaviour of the function.
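To convince ourselves of that last point, here is the same call again with `digits = 3` (this extra example is ours, not part of the original text):

```
round(x = 3.141593, digits = 3)
```
```
## [1] 3.142
```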
3\.3 Evaluating arguments and returning results
-----------------------------------------------
Whenever R evaluates a function we refer to this action as “calling the function”. In our simple example, we called the `round` function with arguments `x` and `digits` (in this course we treat the phrases “use the function” and “call the function” as synonyms, as the former is more natural to new users). What we have just seen—although it may not be obvious—is that when we call functions they first **evaluate** their arguments, then perform some kind of action, and finally (optionally) **return** a value to us when they finish doing whatever it is they do.
We will discuss that word “return” in a moment. What do we mean by the word “evaluate” in this context? Take a look at this second example which uses `round` again:
```
round(x = 2.3 + 1.4, digits = 0)
```
```
## [1] 4
```
When we call a function, what typically happens is that everything on the right hand side of an `=` is first evaluated, the result of this evaluation becomes associated with the corresponding argument name, and then the function does its calculations using the resulting name\-value pairs. We say “typically” because other kinds of behaviours are possible—remember, R is a very flexible language—though for the purposes of this course we can assume that what we just wrote is always true. What happened above is that R evaluated `2.3 + 1.4`, resulting in the number `3.7`, which was then associated with the argument `x`. We set `digits` to `0` this time so that `round` just returns a whole number, `4`.
The important thing to realise is that the expression(s) on the right hand side of the `=` can be anything we like. This third example is essentially equivalent to the last one:
```
myvar <- 2.3 + 1.4
round(x = myvar, digits = 0)
```
```
## [1] 4
```
This time we created a new variable called `myvar` and then supplied this as the value of the `x` argument. When we call the `round` function like this, the R interpreter spots the fact that something on the right hand side of an `=` is a variable and associates the value of this variable with the `x` argument. As long as we have actually defined the numeric variable `myvar` at some point we can use it as the value of an argument.
Keeping in mind what we’ve just learned, take a careful look at this example:
```
x <- 0
round(x = 3.7, digits = x)
```
```
## [1] 4
```
What is going on here? The key to understanding this is to realise that the symbol `x` is used in two different ways here. When it appears on the left hand side of the `=` it represents an argument name. When it appears on the right hand side it is treated as a variable name, which must have a value associated with it for the above to be valid. This is admittedly a slightly confusing way to use this function, but it is perfectly valid. The message here is that what matters is where things appear relative to the `=`, not the symbols used to represent them.
We said at the beginning of this section that a function may optionally **return** a value to us when it completes its task. That word “return” is just jargon that refers to the process by which a function outputs a value. If we use a function at the Console this will be the value printed at the end. We can use this value in other ways too. For example, there is nothing to stop us combining function calls with the arithmetic operations:
```
2 * round(x = 2.64, digits = 0)
```
```
## [1] 6
```
Here the R interpreter first evaluates the function call, and then multiplies the value it returns by 2\. If we want to reuse this value we have to assign the result of the function call, for example:
```
roundnum <- 2 * round(x = 2.64, digits = 0)
```
Using a function with `<-` is really no different from the examples using multiple arithmetic operations in the last topic. The R interpreter starts on the right hand side of the `<-`, evaluates the function call there, and only then assigns the value to `roundnum`.
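Nothing is printed when we do this, because the result was assigned rather than returned to the Console. If we want to see the value, we can print `roundnum` ourselves (a quick check we have added here):

```
roundnum
```
```
## [1] 6
```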
3\.4 Functions do not have “side effects”
-----------------------------------------
There is one more idea about functions and arguments that we really need to understand in order to avoid confusion later on. It relates to how functions modify their arguments, or more accurately, how they **do not** modify their arguments. Take a look at this example:
```
myvar <- 3.7
round(x = myvar, digits = 0)
```
```
## [1] 4
```
```
myvar
```
```
## [1] 3.7
```
We created a variable `myvar` with the value 3\.7, rounded this to a whole number with `round`, and then printed the value of `myvar`. Notice that **the value of `myvar` has not changed** after using it as an argument to `round`. This is important. R functions typically do not alter the values of their arguments. Again, we say “typically” because there are ways to alter this behaviour if we really want to (yes, R is a very flexible language), but we will never ever do this. The standard behaviour—that functions do not alter their arguments—is what is meant by the phrase “functions do not have side effects”.
If we had meant to round the value of `myvar` so that we can use this new value later on, we have to assign the result of function evaluation, like this:
```
myvar <- 3.7
myvar <- round(x = myvar, digits = 0)
```
In this example, we just overwrote the old value, but we could just as easily have created a new variable. The reason this is worth pointing out is that new users sometimes assume certain types of functions will alter their arguments. Specifically, when working with functions that manipulate something called a `data.frame`, there is a tendency to assume that the function changes the `data.frame` argument. It will not. If we want to make use of the changes, rather than just see them printed to the Console, we need to assign the results. We can do this by creating a new variable or overwriting the old one. We’ll gain first hand experience of this in the Data Wrangling block.
Remember, functions do not have side effects! New R users sometimes forget this and create all kinds of headaches for themselves. Don’t be that person.
3\.5 Combining functions
------------------------
Up until now we have not tried to do anything very complicated in our examples. Using R to actually get useful work almost always involves multiple steps, very often facilitated by a number of different functions. There is more than one way to do this. Here’s a simple example that takes an approach we already know about:
```
myvar <- sqrt(x = 10)
round(x = myvar, digits = 0)
```
```
## [1] 3
```
We calculated the square root of the number 10 and assigned the result to `myvar`, then we rounded this to a whole number and printed the result to the Console. So one way to use a series of functions in sequence is to assign a name to the result at each step and use this as an argument to the function in the next step.
Here is another way to replicate the calculation in the previous example:
```
round(x = sqrt(x = 10), digits = 0)
```
```
## [1] 3
```
The technical name for this is **function composition**. Another way of referring to this kind of expression is as a **nested function** call: we say that the `sqrt` function is nested inside the `round` function. The way to read these constructs is **from the inside out**. The `sqrt(x = 10)` expression is on the right hand side of an `=` symbol, so this is evaluated first, the result is associated with the `x` argument of the `round` function, and only then does the `round` function do its job.
There aren’t really any new ideas here. We have already seen that the R interpreter evaluates whatever is on the right hand side of the `=` symbol first before associating the resulting value with the appropriate argument name. However, nested function calls can be confusing at first so we need to see them in action. There’s nothing to stop us using multiple levels of nesting either. Take a look at this example:
```
round(x = sqrt(x = abs(x = -10)), digits = 0)
```
```
## [1] 3
```
The `abs` function takes the absolute value of a number, i.e. removes the `-` sign if it is there. Remember, read nested calls from the inside out. In this example, first we took the absolute value of \-10, then we took the square root of the resulting number (10\), and then we rounded this to a whole number.
Nested function calls are useful because they make our R code less verbose (we have to write less), but this comes at the cost of reduced readability. We aim to keep function nesting to a minimum in this book, but we will occasionally have to work with the nesting construct, so we have to understand it even if we don’t like using it. We’ll see a much\-easier\-to\-read method for applying a series of functions in the Data Wrangling block.
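For comparison, here is the same calculation written out step by step with named intermediate variables (our own rewrite of the nested example above). It involves more typing, but many people find it easier to follow:

```
# take the absolute value of -10
absval <- abs(x = -10)
# take the square root of the result
rootval <- sqrt(x = absval)
# round the square root to a whole number
round(x = rootval, digits = 0)
```
```
## [1] 3
```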
3\.6 Specifying function arguments
----------------------------------
We have only been working with functions that carry out mathematical calculations with numbers so far. We will see many more in this course as it unfolds. Some functions are designed to extract information about other functions for us. For example, take a look at the `args` function:
```
args(name = round)
```
```
## function (x, digits = 0)
## NULL
```
We can see what `args` does: it prints a summary of the main arguments of a function to the Console (though it doesn’t always print all the available arguments). What can we learn from the summary of the `round` arguments? Notice that the first argument, `x`, is shown without an associated value, whereas the `digits` part of the summary is printed as `digits = 0`. **The significance of this is that `digits` has a default value**. This means that we can leave out `digits` when using the round function:
```
round(x = 3.7)
```
```
## [1] 4
```
This is obviously the same result as we would get using `round(x = 3.7, digits = 0)`. This is a very useful feature of R, as it allows us to keep our R code concise. Some functions take a large number of arguments, many of which are defined with sensible defaults. Unless we need to change these default arguments, we can just ignore them when we call such functions. The `x` argument of `round` does not have a default, which means we have to supply a value. This is sensible, as the whole purpose of `round` is to round any number we give it.
There is another way to simplify our use of functions. Take a look at this example:
```
round(3.72, digits = 1)
```
```
## [1] 3.7
```
What does this demonstrate? We do not always have to specify argument names. In the absence of an argument name the R interpreter uses the position of the supplied argument to work out which name to associate it with. In this example we left out the name of the argument at position 1\. This is where `x` belongs, so we end up rounding 3\.72 to 1 decimal place. R is even more flexible than this, as it carries out partial matching on argument names:
```
round(3.72, dig = 1)
```
```
## [1] 3.7
```
This also works because R can unambiguously match the argument we named `dig` to `digits`. Take note, if there were another argument to `round` that started with the letters `dig` this would have caused an error. We have to know our function arguments if we want to rely on partial matching.
#### Be careful with your arguments
Here is some advice. Do not rely on partial matching of argument names. It just leads to confusion and the odd error. If you use it a lot you end up forgetting the true names of arguments, and if you abbreviate too much you create name matching conflicts. For example, if a function has arguments `arg1` and `arg2` and you use the partial name `a` for an argument, there is no way to know which argument you meant. We are pointing out partial matching so that you are aware of the behaviour. It is not worth the hassle of getting it wrong just to save on a little typing, so do not use it.
What about position matching? This can also cause problems if we’re not paying attention. For example, if you forget the order of a function’s arguments and then supply them in the wrong positions, you will either generate an error or produce a nonsensical result. It is nice not to have to type out the `name = value` construct all the time though, so our advice is to rely on positional matching only for the first argument. This is a common convention in R, and it makes sense because it is often obvious what kind of information or data the first argument should carry, so its name is redundant.
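To see how positional matching can trip us up, imagine we meant to round 3.72 to 1 decimal place but accidentally swapped the two arguments (a made up example, not from the original text):

```
round(1, 3.72)
```
```
## [1] 1
```

R quietly rounds the number 1 using `digits = 3.72`, so we get `1` rather than the `3.7` we wanted, and there is no error to warn us.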
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/numeric-vectors.html |
Chapter 4 Numeric vectors
=========================
4\.1 Introduction
-----------------
The term “data structure” is computer science jargon for a particular way of organising data on our computers, so that it can be accessed easily and efficiently. Computer languages use many different kinds of data structures, but fortunately, we only need to learn about a couple of relatively simple ones to use R for data analysis. In fact, in this book only two kinds of data structure really matter: “vectors” and “data frames”. We’ll learn how to work with data frames in the next section of the book.
The next three chapters will consider vectors. This chapter has two goals. First, we want to learn the basics of how to work with vectors. For example, we’ll see how “vectorised operations” may be used to express a repeated calculation. Second, we’ll learn how to construct and use **numeric vectors** to perform various calculations. Keep in mind that although we’re going to focus on numeric vectors, many of the principles we learn here can be applied to the other kinds of vectors considered later.
4\.2 Atomic vectors
-------------------
A vector is a 1\-dimensional object that contains a set of data values, which are accessible by their position: position 1, position 2, position 3, and so on. When people talk about vectors in R they’re often referring to **atomic vectors**[6](#fn6). An atomic vector is the simplest kind of data structure in R. There are a few different kinds of atomic vector, but the defining feature of each one is that it can only contain data of one type. An atomic vector might contain all integers (e.g. 2, 4, 6, …) or all characters (e.g. “A”, “B”, “C”), but it can’t mix and match integers and characters (e.g. “A”, 2, “C”, 5\).
The word “atomic” in the name refers to the fact that an atomic vector can’t be broken down into anything simpler—they are the simplest kind of data structure R knows about. Even when working with a single number we’re actually dealing with an atomic vector. It just happens to be of length one. Here’s the very first expression we evaluated in the first chapter:
```
1 + 1
```
```
## [1] 2
```
Look at the output. What is that `[1]` at the beginning? We ignored it before because we weren’t in a position to understand its significance. The `[1]` is a clue that the output resulting from `1 + 1` is an atomic vector. We can verify this with the `is.vector` and `is.atomic` functions:
```
x <- 1 + 1
# what value is associated with x?
x
```
```
## [1] 2
```
```
# is it a vector?
is.vector(x)
```
```
## [1] TRUE
```
```
# is it atomic?
is.atomic(x)
```
```
## [1] TRUE
```
This little exercise demonstrates an important point about R. Atomic vectors really are the simplest kind of data structure in R. Unlike many other languages there is simply no way to represent just a number. Instead, a single number must be stored as a vector of length one[7](#fn7).
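We can take this one step further (an extra check, not shown in the original) and ask R how long that vector is, using the `length` function:

```
length(x)
```
```
## [1] 1
```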
4\.3 Numeric vectors
--------------------
A lot of work in R involves **numeric vectors**. After all, data analysis is all about numbers. Here’s a simple way to construct a numeric vector:
```
numeric(length = 50)
```
```
## [1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## [36] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
```
What happened? We made a numeric vector with 50 **elements**, each of which is the number 0\. The word “element” is used to reference any object (a number in this case) that resides at a particular position in a vector.
When we create a vector but don’t assign it to a name using `<-` R just prints it to the Console for us. Notice what happens when the vector is printed to the screen. Since the length\-50 vector can’t fit on one line, it was printed over two. At the beginning of each line there is a `[X]`: the `X` is a number that describes the position of the element shown at the beginning of a particular line.
#### Different kinds of numbers
Roughly speaking, R stores numbers in two different ways depending on whether they are whole numbers (“integers”) or numbers containing decimal points (“doubles” – don’t ask). We’re not going to worry about this distinction. Most of the time the distinction is fairly invisible to users so it is easier to just think in terms of numeric vectors. We can mix and match integers and doubles in R without having to worry too much about how R is storing the numbers.
If we need to check that we really have made a numeric vector we can use the `is.numeric` function to do this:
```
# let's create a variable that is a numeric vector
numvec <- numeric(length = 50)
# check it really is a numeric vector
is.numeric(numvec)
```
```
## [1] TRUE
```
This returns `TRUE` as expected. A value of `FALSE` would imply that `numvec` is some other kind of object. This may not look like the most useful function in the world, but sometimes we need functions like `is.numeric` to understand what R is doing or root out mistakes in our scripts.
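For contrast, here is what happens when we give `is.numeric` something that is not a numeric vector, such as a piece of text (an extra example of ours):

```
is.numeric("hello")
```
```
## [1] FALSE
```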
Keep in mind that when we print a numeric vector to the Console R only prints the elements to 7 significant figures by default. We can see this by printing the built in constant `pi` to the Console:
```
pi
```
```
## [1] 3.141593
```
The value stored in `pi` is actually much more precise than this. We can see this by printing `pi` again using the `print` function:
```
print(pi, digits = 16)
```
```
## [1] 3.141592653589793
```
4\.4 Constructing numeric vectors
---------------------------------
We just saw how to use the `numeric` function to make a numeric vector of zeros. The size of the vector is controlled by the `length` argument—we used `length = 50` above to make a vector with 50 elements. This is arguably not a particularly useful skill, as we usually need to work with vectors of particular values (not just 0\). A very useful function for creating custom vectors is the `c` function. Take a look at this example:
```
c(1.1, 2.3, 4.0, 5.7)
```
```
## [1] 1.1 2.3 4.0 5.7
```
The “c” in the function name stands for “combine”. The `c` function takes a variable number of arguments, each of which must be a vector of some kind, and combines these into a single, new vector. We supplied the `c` function with four arguments, each of which was a vector of length 1 (remember: a single number is treated as a length\-one vector). The `c` function combines these to generate a vector of length 4\. Simple. Now look at this example:
```
vec1 <- c(1.1, 2.3)
vec2 <- c(4.0, 5.7, 3.6)
c(vec1, vec2)
```
```
## [1] 1.1 2.3 4.0 5.7 3.6
```
This shows that we can use the `c` function to concatenate (“stick together”) two or more vectors, even if they are not of length 1\. We combined a length\-2 vector with a length\-3 vector to produce a new length\-5 vector.
Notice that we did not have to name the arguments in those two examples—there were no `=` involved. The `c` function is an example of one of those flexible functions that breaks the simple rules of thumb for using arguments that we set out earlier: it can take a variable number of arguments, and these arguments do not have predefined names. This behaviour is necessary for `c` to be of any use: in order to be useful it needs to be flexible enough to take any combination of arguments.
#### Finding out about a vector in R
Sometimes it is useful to be able to find out a little extra information about a vector you are working with, especially if it is very large. Three functions that can extract some useful information about a vector for us are `length`, `head` and `tail`. Using a variety of different vectors, experiment with these to find out what they do. Make sure you use vectors that aren’t too short (e.g. with a length of at least 10\). Hint: `head` and `tail` can be used with a second argument, `n`.
4\.5 Named vectors
------------------
What happens if we name the arguments to `c` when constructing a vector? Take a look at this:
```
namedv <- c(a = 1, b = 2, c = 3)
namedv
```
```
## a b c
## 1 2 3
```
What happened here is that the argument names were used to define the names of elements in the vector we made. The resulting vector is still a 1\-dimensional data structure. When it is printed to the Console the value of each element is printed, along with the associated name above it. We can extract the names from a named vector using the `names` function:
```
names(namedv)
```
```
## [1] "a" "b" "c"
```
Being able to name the elements of a vector is very useful because it enables us to more easily identify relevant information and extract the bits we need—we’ll see how this works in the next chapter.
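As a small aside (not covered in the original text), the `names` function can also be used on the left hand side of `<-` to change the names of an existing vector:

```
names(namedv) <- c("x", "y", "z")
namedv
```
```
## x y z
## 1 2 3
```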
4\.6 Generating sequences of numbers
------------------------------------
The main limitation of the `c` function is that we have to manually construct vectors from their elements. This isn’t very practical if we need to construct very long vectors of numeric values. There are various functions that can help with this kind of thing though. These are useful when the elements of the target vector need to follow a sequence or repeating pattern. This may not appear all that useful at first, but repeating sequences are used a lot in R.
### 4\.6\.1 Generating sequences of numbers
The `seq` function allows us to generate sequences of numbers. It needs at least two arguments, but there are several different ways to control the sequence produced by `seq`. The method used is determined by our choice of arguments: `from`, `to`, `by`, `length.out` and `along.with`. We don’t need to use all of these though—setting 2\-3 of these arguments will often work:
1. Using the `by` argument generates a sequence of numbers that increase or decrease by the requested step size:
```
seq(from = 0, to = 12, by = 2)
```
```
## [1] 0 2 4 6 8 10 12
```
This is fairly self\-explanatory. The `seq` function constructed a numeric vector with elements that started at 0 and ended at 12, with successive elements increasing in steps of 2\. Be careful when using `seq` like this. If the `by` argument does not lead to a sequence that ends exactly on the value of `to` then that value won’t appear in the vector. For example:
```
seq(from = 0, to = 11, by = 2)
```
```
## [1] 0 2 4 6 8 10
```
We can generate a descending sequence by using a negative `by` value, like this:
```
seq(from = 12, to = 0, by = -2)
```
```
## [1] 12 10 8 6 4 2 0
```
2. Using the `length.out` argument generates a sequence of numbers where the resulting vector has the length specified by `length.out`:
```
seq(from = 0, to = 12, length.out = 6)
```
```
## [1] 0.0 2.4 4.8 7.2 9.6 12.0
```
Using the `length.out` argument will always produce a sequence that starts and ends exactly on the values specified by `from` and `to` (if we need a descending sequence we just have to make sure `from` is larger than `to`).
3. Using the `along.with` argument allows us to use another vector to determine the length of the numeric sequence we want to produce:
```
# make any any vector we like
my_vec <- c(1, 3, 7, 2, 4, 2)
# use it to make a sequence of the same length
seq(from = 0, to = 12, along.with = my_vec)
```
```
## [1] 0.0 2.4 4.8 7.2 9.6 12.0
```
It doesn’t matter what the elements of `my_vec` are. The behaviour of `seq` is controlled by the length of `my_vec` when we use `along.with`.
It’s conventional to leave out the `from` and `to` argument names when using the `seq` function. For example, we could rewrite the first example above as:
```
seq(0, 12, by = 2)
```
```
## [1] 0 2 4 6 8 10 12
```
When we leave out the name of the third argument its value is position matched to the `by` argument:
```
seq(0, 12, 2)
```
```
## [1] 0 2 4 6 8 10 12
```
However, our advice is to explicitly name the `by` argument instead of relying on position matching, because this reminds us what we’re doing and will stop us forgetting about the different control arguments.
If we do not specify a value for any of `by`, `length.out` or `along.with` when using `seq`, the default behaviour is to assume we meant `by = 1`:
```
seq(from = 1, to = 12)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10 11 12
```
Generating sequences of integers that go up or down in steps of 1 is something we do a lot in R. Because of this, there is a special operator to generate these simple sequences—the colon, `:`. For example, to produce the last sequence again we use:
```
1:12
```
```
## [1] 1 2 3 4 5 6 7 8 9 10 11 12
```
When we use the `:` operator the convention is to **not** leave spaces either side of it.
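As an extra aside of our own, the `:` operator also produces descending sequences when the first number is larger than the second:

```
5:1
```
```
## [1] 5 4 3 2 1
```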
### 4\.6\.2 Generating repeated sequences of numbers
The `rep` function is designed to replicate the values inside a vector, i.e. it’s short for “replicate”. The first argument (called `x`) is the vector we want to replicate. There are two other arguments that control how this is done. The `times` argument specifies the number of times to replicate the whole vector:
```
# make a simple sequence of integers
seqvec <- 1:4
seqvec
```
```
## [1] 1 2 3 4
```
```
# now replicate *the whole vector* 3 times
rep(seqvec, times = 3)
```
```
## [1] 1 2 3 4 1 2 3 4 1 2 3 4
```
All we did here was take a sequence of integers from 1 to 4 and replicate this end\-to\-end three times, resulting in a length\-12 vector. Alternatively, we can use the `each` argument to replicate each element of a vector while retaining the original order:
```
# make a simple sequence of integers
seqvec <- 1:4
seqvec
```
```
## [1] 1 2 3 4
```
```
# now replicate *each element* vector 3 times
rep(seqvec, each = 3)
```
```
## [1] 1 1 1 2 2 2 3 3 3 4 4 4
```
This example produced a similar vector to the previous one. It contains the same elements and has the same length, but now the order is different. All the 1’s appear first, then the 2’s, and so on.
Predictably, we can also use both the `times` and `each` arguments together if we want to:
```
seqvec <- 1:4
rep(seqvec, times = 3, each = 2)
```
```
## [1] 1 1 2 2 3 3 4 4 1 1 2 2 3 3 4 4 1 1 2 2 3 3 4 4
```
The way to think about how this works is to imagine that we apply `rep` twice, first with `each = 2`, then with `times = 3` (or vice versa). We can achieve the same thing using nested function calls, though it is much uglier:
```
seqvec <- 1:4
rep(rep(seqvec, each = 2), times = 3)
```
```
## [1] 1 1 2 2 3 3 4 4 1 1 2 2 3 3 4 4 1 1 2 2 3 3 4 4
```
4\.7 Vectorised operations
--------------------------
All the simple arithmetic operators (e.g. `+` and `-`) and many mathematical functions are **vectorised** in R. What this means is that when we use a vectorised function it operates on vectors on an element\-by\-element basis. Take a look at this example to see what we mean by this:
```
# make a simple sequence
x <- 1:10
x
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
# make another simple sequence *of the same length*
y <- seq(0.1, 1, length = 10)
y
```
```
## [1] 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
```
```
# add them
x + y
```
```
## [1] 1.1 2.2 3.3 4.4 5.5 6.6 7.7 8.8 9.9 11.0
```
We constructed two length\-10 numeric vectors, called `x` and `y`, where `x` is a sequence from 1 to 10 and `y` is a sequence from 0\.1 to 1\. When R evaluates the expression `x + y` it does this by adding the first element of `x` to the first element of `y`, the second element of `x` to the second element of `y`, and so on, working through all 10 elements of `x` and `y`.
Vectorisation is also implemented in the standard mathematical functions. For example, our friend the `round` function will round each element of a numeric vector:
```
# make a simple sequence of non-integer values
my_nums <- seq(0, 1, length = 13)
my_nums
```
```
## [1] 0.00000000 0.08333333 0.16666667 0.25000000 0.33333333 0.41666667
## [7] 0.50000000 0.58333333 0.66666667 0.75000000 0.83333333 0.91666667
## [13] 1.00000000
```
```
# now round the values
round(my_nums, digits = 2)
```
```
## [1] 0.00 0.08 0.17 0.25 0.33 0.42 0.50 0.58 0.67 0.75 0.83 0.92 1.00
```
The same behaviour is seen with other mathematical functions like `sin`, `cos`, `exp`, and `log`. Each of these will apply the relevant function to each element of a numeric vector.
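Here is one more of these in action (our example). The `sqrt` function is also vectorised, so it takes the square root of each element in turn:

```
sqrt(c(1, 4, 9, 16))
```
```
## [1] 1 2 3 4
```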
Not all functions are vectorised. For example, the `sum` function takes a vector of numbers and adds them up:
```
sum(my_nums)
```
```
## [1] 6.5
```
Although `sum` obviously works on a numeric vector, it is not “vectorised” in the sense used here: it does not work element\-by\-element to return an output vector of the same length as its main argument.
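One quick way to see the difference (a check we have added) is to compare the length of what each function returns:

```
# vectorised: the output is as long as the input
length(round(my_nums, digits = 2))
```
```
## [1] 13
```
```
# not vectorised: sum collapses the input to a single number
length(sum(my_nums))
```
```
## [1] 1
```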
Many functions apply the vectorisation principle to more than one argument. Take a look at this example to see what we mean by this:
```
# make a repeated set of non-integer values
my_nums <- rep(2 / 7, times = 6)
my_nums
```
```
## [1] 0.2857143 0.2857143 0.2857143 0.2857143 0.2857143 0.2857143
```
```
# round each one to a different number of decimal places
round(my_nums, digits = 1:6)
```
```
## [1] 0.300000 0.290000 0.286000 0.285700 0.285710 0.285714
```
We constructed a length\-6 vector containing the number 2/7 (\~ 0\.285714\) and then used the `round` function to round each element to a different number of decimal places. The first element was rounded to 1 decimal place, the second to 2 decimal places, and so on. We get this behaviour because instead of using a single number for the `digits` argument, we used a vector that is an integer sequence from 1 to 6\.
#### Vectorisation is not the norm
R’s vectorised behaviour may seem like the “obvious” thing to do, but most computer languages do not work like this. In other languages we typically have to write a much more complicated expression to do something so simple. This is one of the reasons R is such an effective data analysis language: vectorisation allows us to express repetitious calculations in a simple, intuitive way. This behaviour can save us a lot of time. However, not every function treats its arguments in a vectorised way, so we always need to check (most easily, by experimenting) whether this behaviour is available before relying on it.
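To get a feel for what vectorisation is saving us, here is a rough sketch of the kind of code we would have to write if `+` were not vectorised (this loop is ours, written only for illustration; in practice we would simply write `x + y`), using the `x` and `y` vectors from above:

```
# make an empty numeric vector to hold the result
z <- numeric(length = length(x))
# fill it in one element at a time
for (i in 1:length(x)) {
  z[i] <- x[i] + y[i]
}
z
```
```
## [1] 1.1 2.2 3.3 4.4 5.5 6.6 7.7 8.8 9.9 11.0
```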
4\.1 Introduction
-----------------
The term “data structure” is computer science jargon for a particular way of organising data on our computers, so that it can be accessed easily and efficiently. Computer languages use many different kinds of data structures, but fortunately, we only need to learn about a couple of relatively simple ones to use R for data analysis. In fact, in this book only two kinds of data structure really matter: “vectors” and “data frames”. We’ll learn how to work with data frames in the next section of the book.
The next three chapters will consider vectors. This chapter has two goals. First, we want to learn the basics of how to work with vectors. For example, we’ll see how “vectorised operations” may be used to express a repeated calculation. Second, we’ll learn how to construct and use **numeric vectors** to perform various calculations. Keep in mind that although we’re going to focus on numeric vectors, many of the principles we learn here can be applied to the other kinds of vectors considered later.
4\.2 Atomic vectors
-------------------
A vector is a 1\-dimensional object that contains a set of data values, which are accessible by their position: position 1, position 2, position 3, and so one. When people talk about vectors in R they’re often referring to **atomic vectors**[6](#fn6). An atomic vector is the simplest kind of data structure in R. There are a few different kinds of atomic vector, but the defining feature of each one is that it can only contain data of one type. An atomic vector might contain all integers (e.g. 2, 4, 6, …) or all characters (e.g. “A”, “B”, “C”), but it can’t mix and match integers and characters (e.g. “A”, 2, “C”, 5\).
The word “atomic” in the name refers to the fact that an atomic vector can’t be broken down into anything simpler—they are the simplest kind of data structure R knows about. Even when working with a single number we’re actually dealing with an atomic vector. It just happens to be of length one. Here’s the very first expression we evaluated in the first chapter:
```
1 + 1
```
```
## [1] 2
```
Look at the output. What is that `[1]` at the beginning? We ignored it before because we weren’t in a position to understand its significance. The `[1]` is a clue that the output resulting from `1 + 1` is a atomic vector. We can verify this with the `is.vector` and `is.atomic` functions:
```
x <- 1 + 1
# what value is associated with x?
x
```
```
## [1] 2
```
```
# is it a vector?
is.vector(x)
```
```
## [1] TRUE
```
```
# is it atomic?
is.atomic(x)
```
```
## [1] TRUE
```
This little exercise demonstrates an important point about R. Atomic vectors really are the simplest kind of data structure in R. Unlike many other languages there is simply no way to represent just a number. Instead, a single number must be stored as a vector of length one[7](#fn7).
4\.3 Numeric vectors
--------------------
A lot of work in R involves **numeric vectors**. After all, data analysis is all about numbers. Here’s a simple way to construct a numeric vector:
```
numeric(length = 50)
```
```
## [1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## [36] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
```
What happened? We made a numeric vector with 50 **elements**, each of which is the number 0\. The word “element” is used to reference any object (a number in this case) that resides at a particular position in a vector.
When we create a vector but don’t assign it to a name using `<-` R just prints it to the Console for us. Notice what happens when the vector is printed to the screen. Since the length\-50 vector can’t fit on one line, it was printed over two. At the beginning of each line there is a `[X]`: the `X` is a number that describes the position of the element shown at the beginning of a particular line.
#### Different kinds of numbers
Roughly speaking, R stores numbers in two different ways depending on whether they are whole numbers (“integers”) or numbers containing decimal points (“doubles” – don’t ask). We’re not going to worry about this distinction. Most of the time the distinction is fairly invisible to users so it is easier to just think in terms of numeric vectors. We can mix and match integers and doubles in R without having to worry too much about how R is storing the numbers.
If we need to check that we really have made a numeric vector we can use the `is.numeric` function to do this:
```
# let's create a variable that is a numeric vector
numvec <- numeric(length = 50)
# check it really is a numeric vector
is.numeric(numvec)
```
```
## [1] TRUE
```
This returns `TRUE` as expected. A value of `FALSE` would imply that `numvec` is some other kind of object. This may not look like the most useful function in the world, but sometimes we need functions like `is.numeric` to understand what R is doing or root out mistakes in our scripts.
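For example, if a variable has accidentally ended up holding character strings rather than numbers, `is.numeric` will reveal the problem. The vector below is made up purely for illustration:
```
# these values look like numbers but are actually character strings
notnum <- c("1", "2", "3")
is.numeric(notnum)
## [1] FALSE
```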
Keep in mind that when we print a numeric vector to the Console, R only prints the elements to 7 significant figures by default. We can see this by printing the built\-in constant `pi` to the Console:
```
pi
```
```
## [1] 3.141593
```
The value stored in `pi` is actually much more precise than this. We can see this by printing `pi` again using the `print` function and asking for more digits:
```
print(pi, digits = 16)
```
```
## [1] 3.141592653589793
```
4\.4 Constructing numeric vectors
---------------------------------
We just saw how to use the `numeric` function to make a numeric vector of zeros. The size of the vector is controlled by the `length` argument—we used `length = 50` above to make a vector with 50 elements. This is arguably not a particularly useful skill, as we usually need to work with vectors of particular values (not just 0\). A very useful function for creating custom vectors is the `c` function. Take a look at this example:
```
c(1.1, 2.3, 4.0, 5.7)
```
```
## [1] 1.1 2.3 4.0 5.7
```
The “c” in the function name stands for “combine”. The `c` function takes a variable number of arguments, each of which must be a vector of some kind, and combines these into a single, new vector. We supplied the `c` function with four arguments, each of which was a vector of length 1 (remember: a single number is treated as a length\-one vector). The `c` function combines these to generate a vector of length 4\. Simple. Now look at this example:
```
vec1 <- c(1.1, 2.3)
vec2 <- c(4.0, 5.7, 3.6)
c(vec1, vec2)
```
```
## [1] 1.1 2.3 4.0 5.7 3.6
```
This shows that we can use the `c` function to concatenate (“stick together”) two or more vectors, even if they are not of length 1\. We combined a length\-2 vector with a length\-3 vector to produce a new length\-5 vector.
Notice that we did not have to name the arguments in those two examples—there were no `=` involved. The `c` function is an example of one of those flexible functions that breaks the simple rules of thumb for using arguments that we set out earlier: it can take a variable number of arguments, and these arguments do not have predefined names. This behaviour is necessary for `c` to be of any use: in order to be useful it needs to be flexible enough to take any combination of arguments.
#### Finding out about a vector
Sometimes it is useful to be able to find out a little extra information about a vector you are working with, especially if it is very large. Three functions that can extract some useful information about a vector for us are `length`, `head` and `tail`. Using a variety of different vectors, experiment with these to find out what they do. Make sure you use vectors that aren’t too short (e.g. with a length of at least 10\). Hint: `head` and `tail` can be used with a second argument, `n`.
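If you want to check what you find, here is one possible set of experiments. The vector used (and the choice of `n`) is arbitrary; the expected output is shown as `##` comments:
```
# an arbitrary numeric vector with 12 elements
expt_vec <- seq(2, 24, by = 2)
# how many elements does it contain?
length(expt_vec)
## [1] 12
# show just the first three elements
head(expt_vec, n = 3)
## [1] 2 4 6
# show just the last three elements
tail(expt_vec, n = 3)
## [1] 20 22 24
```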
4\.5 Named vectors
------------------
What happens if we name the arguments to `c` when constructing a vector? Take a look at this:
```
namedv <- c(a = 1, b = 2, c = 3)
namedv
```
```
## a b c
## 1 2 3
```
What happened here is that the argument names were used to define the names of elements in the vector we made. The resulting vector is still a 1\-dimensional data structure. When it is printed to the Console the value of each element is printed, along with the associated name above it. We can extract the names from a named vector using the `names` function:
```
names(namedv)
```
```
## [1] "a" "b" "c"
```
Being able to name the elements of a vector is very useful because it enables us to more easily identify relevant information and extract the bits we need—we’ll see how this works in the next chapter.
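As a small taste of what is coming, element names can be used in place of positions when we pull out parts of a vector. This is only a preview (the square bracket notation is explained properly in a later chapter):
```
# extract the element named "b" from the named vector made above
namedv["b"]
## b
## 2
```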
4\.6 Generating sequences of numbers
------------------------------------
The main limitation of the `c` function is that we have to manually construct vectors from their elements. This isn’t very practical if we need to construct very long vectors of numeric values. There are various functions that can help with this kind of thing though. These are useful when the elements of the target vector need to follow a sequence or repeating pattern. This may not appear all that useful at first, but repeating sequences are used a lot in R.
### 4\.6\.1 Generating sequences of numbers
The `seq` function allows us to generate sequences of numbers. It needs at least two arguments, but there are several different ways to control the sequence produced by `seq`. The method used is determined by our choice of arguments: `from`, `to`, `by`, `length.out` and `along.with`. We don’t need to use all of these though—setting 2\-3 of these arguments will often work:
1. Using the `by` argument generates a sequence of numbers that increase or decrease by the requested step size:
```
seq(from = 0, to = 12, by = 2)
```
```
## [1] 0 2 4 6 8 10 12
```
This is fairly self\-explanatory. The `seq` function constructed a numeric vector with elements that started at 0 and ended at 12, with successive elements increasing in steps of 2\. Be careful when using `seq` like this. If the `by` argument does not lead to a sequence that ends exactly on the value of `to` then that value won’t appear in the vector. For example:
```
seq(from = 0, to = 11, by = 2)
```
```
## [1] 0 2 4 6 8 10
```
We can generate a descending sequence by using a negative `by` value, like this:
```
seq(from = 12, to = 0, by = -2)
```
```
## [1] 12 10 8 6 4 2 0
```
2. Using the `length.out` argument generates a sequence of numbers where the resulting vector has the length specified by `length.out`:
```
seq(from = 0, to = 12, length.out = 6)
```
```
## [1] 0.0 2.4 4.8 7.2 9.6 12.0
```
Using the `length.out` argument will always produce a sequence that starts and ends exactly on the values specified by `from` and `to` (if we need a descending sequence we just have to make sure `from` is larger than `to`).
3. Using the `along.with` argument allows us to use another vector to determine the length of the numeric sequence we want to produce:
```
# make any any vector we like
my_vec <- c(1, 3, 7, 2, 4, 2)
# use it to make a sequence of the same length
seq(from = 0, to = 12, along.with = my_vec)
```
```
## [1] 0.0 2.4 4.8 7.2 9.6 12.0
```
It doesn’t matter what the elements of `my_vec` are. The behaviour of `seq` is controlled by the length of `my_vec` when we use `along.with`.
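A simple way to convince ourselves that only the length of the `along.with` vector matters is to swap in a completely different vector of the same length. The values in `other_vec` below are arbitrary:
```
# a different length-6 vector...
other_vec <- c(100, 200, 300, 400, 500, 600)
# ...produces exactly the same sequence as before
seq(from = 0, to = 12, along.with = other_vec)
## [1] 0.0 2.4 4.8 7.2 9.6 12.0
```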
It’s conventional to leave out the `from` and `to` argument names when using the `seq` function. For example, we could rewrite the first example above as:
```
seq(0, 12, by = 2)
```
```
## [1] 0 2 4 6 8 10 12
```
When we leave out the name of the third argument its value is position matched to the `by` argument:
```
seq(0, 12, 2)
```
```
## [1] 0 2 4 6 8 10 12
```
However, our advice is to explicitly name the `by` argument instead of relying on position matching, because this reminds us what we’re doing and will stop us forgetting about the different control arguments.
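To see the kind of slip this guards against, compare the two calls below. In the first, the unnamed `6` is position matched to `by`, which may not be what was intended; this is a purely hypothetical example of the mistake:
```
# the unnamed third argument is matched to `by`, giving steps of 6
seq(0, 12, 6)
## [1] 0 6 12
# if a six-element sequence was intended, `length.out` must be named
seq(0, 12, length.out = 6)
## [1] 0.0 2.4 4.8 7.2 9.6 12.0
```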
If we do not specify a value for any of `by`, `length.out` or `along.with` when using `seq`, the default behaviour is to assume we meant `by = 1`:
```
seq(from = 1, to = 12)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10 11 12
```
Generating sequences of integers that go up or down in steps of 1 is something we do a lot in R. Because of this, there is a special operator to generate these simple sequences—the colon, `:`. For example, to produce the last sequence again we use:
```
1:12
```
```
## [1] 1 2 3 4 5 6 7 8 9 10 11 12
```
When we use the `:` operator the convention is to **not** leave spaces either side of it.
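The `:` operator will also count downwards when the first number is larger than the second. A quick illustration, with the expected output as a `##` comment:
```
# descending sequence of integers in steps of 1
12:1
## [1] 12 11 10 9 8 7 6 5 4 3 2 1
```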
### 4\.6\.2 Generating repeated sequences of numbers
The `rep` function is designed to replicate the values inside a vector, i.e. it’s short for “replicate”. The first argument (called `x`) is the vector we want to replicate. There are two other arguments that control how this is done. The `times` argument specifies the number of times to replicate the whole vector:
```
# make a simple sequence of integers
seqvec <- 1:4
seqvec
```
```
## [1] 1 2 3 4
```
```
# now replicate *the whole vector* 3 times
rep(seqvec, times = 3)
```
```
## [1] 1 2 3 4 1 2 3 4 1 2 3 4
```
All we did here was take a sequence of integers from 1 to 4 and replicate this end\-to\-end three times, resulting in a length\-12 vector. Alternatively, we can use the `each` argument to replicate each element of a vector while retaining the original order:
```
# make a simple sequence of integers
seqvec <- 1:4
seqvec
```
```
## [1] 1 2 3 4
```
```
# now replicate *each element* of the vector 3 times
rep(seqvec, each = 3)
```
```
## [1] 1 1 1 2 2 2 3 3 3 4 4 4
```
This example produced a similar vector to the previous one. It contains the same elements and has the same length, but now the order is different. All the 1’s appear first, then the 2’s, and so on.
Predictably, we can also use both the `times` and `each` arguments together if we want to:
```
seqvec <- 1:4
rep(seqvec, times = 3, each = 2)
```
```
## [1] 1 1 2 2 3 3 4 4 1 1 2 2 3 3 4 4 1 1 2 2 3 3 4 4
```
The way to think about how this works is to imagine that we apply `rep` twice, first with `each = 2`, then with `times = 3` (or vice versa). We can achieve the same thing using nested function calls, though it is much uglier:
```
seqvec <- 1:4
rep(rep(seqvec, each = 2), times = 3)
```
```
## [1] 1 1 2 2 3 3 4 4 1 1 2 2 3 3 4 4 1 1 2 2 3 3 4 4
```
4\.7 Vectorised operations
--------------------------
All the simple arithmetic operators (e.g. `+` and `-`) and many mathematical functions are **vectorised** in R. What this means is that when we use a vectorised function it operates on vectors on an element\-by\-element basis. Take a look at this example to see what we mean by this:
```
# make a simple sequence
x <- 1:10
x
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
```
# make another simple sequence *of the same length*
y <- seq(0.1, 1, length = 10)
y
```
```
## [1] 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
```
```
# add them
x + y
```
```
## [1] 1.1 2.2 3.3 4.4 5.5 6.6 7.7 8.8 9.9 11.0
```
We constructed two length\-10 numeric vectors, called `x` and `y`, where `x` is a sequence from 1 to 10 and `y` is a sequence from 0\.1 to 1\. When R evaluates the expression `x + y` it does this by adding the first element of `x` to the first element of `y`, the second element of `x` to the second element of `y`, and so on, working through all 10 elements of `x` and `y`.
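The same element\-by\-element logic applies to the other arithmetic operators. For example, multiplying the two vectors defined above pairs up their elements in exactly the same way (expected output shown as a `##` comment):
```
# element-by-element multiplication of x and y
x * y
## [1] 0.1 0.4 0.9 1.6 2.5 3.6 4.9 6.4 8.1 10.0
```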
Vectorisation is also implemented in the standard mathematical functions. For example, our friend the `round` function will round each element of a numeric vector:
```
# make a simple sequence of non-integer values
my_nums <- seq(0, 1, length = 13)
my_nums
```
```
## [1] 0.00000000 0.08333333 0.16666667 0.25000000 0.33333333 0.41666667
## [7] 0.50000000 0.58333333 0.66666667 0.75000000 0.83333333 0.91666667
## [13] 1.00000000
```
```
# now round the values
round(my_nums, digits = 2)
```
```
## [1] 0.00 0.08 0.17 0.25 0.33 0.42 0.50 0.58 0.67 0.75 0.83 0.92 1.00
```
The same behaviour is seen with other mathematical functions like `sin`, `cos`, `exp`, and `log`. Each of these will apply the relevant function to each element of a numeric vector.
Not all functions are vectorised. For example, the `sum` function takes a vector of numbers and adds them up:
```
sum(my_nums)
```
```
## [1] 6.5
```
Although `sum` obviously works on a numeric vector, it is not “vectorised” in the sense used here: it does not work element\-by\-element to return an output vector of the same length as its main argument.
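One way to see the difference is to compare the lengths of the results: a vectorised function like `round` returns one element per input element, whereas `sum` collapses the whole vector down to a single value. This is just a quick check using the `my_nums` vector from above:
```
# round is vectorised: one output element per input element
length(round(my_nums, digits = 2))
## [1] 13
# sum is not: it returns a single value
length(sum(my_nums))
## [1] 1
```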
Many functions apply the vectorisation principle to more than one argument. Take a look at this example to see what we mean by this:
```
# make a repeated set of non-integer values
my_nums <- rep(2 / 7, times = 6)
my_nums
```
```
## [1] 0.2857143 0.2857143 0.2857143 0.2857143 0.2857143 0.2857143
```
```
# round each one to a different number of decimal places
round(my_nums, digits = 1:6)
```
```
## [1] 0.300000 0.290000 0.286000 0.285700 0.285710 0.285714
```
We constructed a length\-6 vector containing the number 2/7 (\~ 0\.285714\) and then used the `round` function to round each element to a different number of decimal places. The first element was rounded to one decimal place, the second to two decimal places, and so on. We get this behaviour because instead of using a single number for the `digits` argument, we used a vector that is an integer sequence from 1 to 6\.
#### Vectorisation is not the norm
R’s vectorised behaviour may seem like the “obvious” thing to do, but most computer languages do not work like this. In other languages we typically have to write a much more complicated expression to do something so simple. This is one of the reasons R is such an effective data analysis language: vectorisation allows us to express repetitious calculations in a simple, intuitive way. This behaviour can save us a lot of time. However, not every function treats its arguments in a vectorised way, so we always need to check (most easily, by experimenting) whether this behaviour is available before relying on it.
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/other-kinds-of-vectors.html |
Chapter 5 Other kinds of vectors
================================
5\.1 Introduction
-----------------
The data we collect and analyse are often in the form of numbers, so it comes as no surprise that we work with numeric vectors a lot in R. Nonetheless, we also sometimes need to represent other kinds of vectors, either to represent different types of data, or to help us manipulate our data. This chapter introduces two new types of atomic vector to help us do this: character vectors and logical vectors.
5\.2 Character vectors
----------------------
The elements of **character vectors** are what are known as “character strings” (or just “strings” if we are feeling lazy). The term “character string” refers to a sequence of characters, such as “Treatment 1”, “University of Sheffield”, or “Population Density”. A character vector is an atomic vector that stores an ordered collection of one or more character strings.
If we want to construct a character vector in R we have to place double (`"`) or single (`'`) quotation marks around the characters. For example, we can print the name “Dylan” to the Console like this:
```
"Dylan"
```
```
## [1] "Dylan"
```
Notice the `[1]`. This shows that what we just printed is an atomic vector of some kind. We know it’s a character vector because the output is printed with double quotes around the value. We often need to make simple character vectors containing only one value—for example, to set the values of arguments to a function.
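It is worth being clear that a string counts as a single element no matter how many characters it contains. The check below uses `is.character`, a close cousin of the `is.numeric` function we met earlier:
```
# one string, many characters, but still a length-one vector
length("University of Sheffield")
## [1] 1
is.character("University of Sheffield")
## [1] TRUE
```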
The quotation marks are not optional—they tell R we want to treat whatever is inside them as a literal value. The quoting is important. If we try to do the same thing as above without the quotes we end up with an error:
```
Dylan
```
```
## Error in eval(expr, envir, enclos): object 'Dylan' not found
```
What happened? When the interpreter sees the word `Dylan` without quotes it assumes that this must be the name of a variable, so it goes in search of it in the global environment. We haven’t made a variable called Dylan, so there is no way to evaluate the expression and R spits out an error to let us know there’s a problem.
Longer character vectors are typically constructed to represent data of some kind. The `c` function is often a good starting point for this kind of thing:
```
# make a length-3 character vector
my_name <- c("Dylan", "Zachary", "Childs")
my_name
```
```
## [1] "Dylan" "Zachary" "Childs"
```
Here we made a length\-3 character vector, with elements corresponding to a first name, middle name, and last name. We’ll see how to extract one or more elements from a character vector by their position in the next chapter.
Take note, this is **not** equivalent to the above :
```
my_name <- c("Dylan Zachary Childs")
my_name
```
```
## [1] "Dylan Zachary Childs"
```
The only element of this character vector is a single character string containing the first, middle and last name separated by spaces. We didn’t need to use the `c` function here because we were only ever working with a length\-1 character vector, i.e. we could have typed `"Dylan Zachary Childs"` and we would have ended up with exactly the same text printed at the Console.
We can construct more complicated, repeating character vectors with `rep`. This works on character vectors in exactly the same way as it does on numeric vectors:
```
c_vec <- c("Dylan", "Zachary", "Childs")
rep(c_vec, each = 2, times = 3)
```
```
## [1] "Dylan" "Dylan" "Zachary" "Zachary" "Childs" "Childs" "Dylan"
## [8] "Dylan" "Zachary" "Zachary" "Childs" "Childs" "Dylan" "Dylan"
## [15] "Zachary" "Zachary" "Childs" "Childs"
```
Each element was replicated twice (`each = 2`), and then the whole vector was replicated three times (`times = 3`), end to end.
What about the `seq` function? Hopefully it is fairly obvious that we can’t use this function to build a character vector. The `seq` function is designed to make sequences of numbers from one value to another. The notion of a sequence of character strings – for example, from `"Dylan"` to `"Childs"` – is meaningless.
5\.3 Logical vectors
--------------------
The elements of **logical vectors** only take two values: `TRUE` or `FALSE`. Don’t let the simplicity of logical vectors fool you. They’re very useful. As with other kinds of atomic vectors the `c` and `rep` functions can be used to construct a logical vector:
```
l_vec <- c(TRUE, FALSE)
rep(l_vec, times = 2)
```
```
## [1] TRUE FALSE TRUE FALSE
```
Hopefully nothing about that output is surprising by this point.
So why are logical vectors useful? They allow us to represent the results of questions such as, “is x greater than y” or “is x equal to y”. The results of such comparisons may then be used to carry out various kinds of subsetting operations.
Let’s first look at how we use logical vectors to evaluate comparisons. Before we can do that though we need to introduce **relational operators**. These sound fancy, but they are very simple: we use relational operators to evaluate the relative value of vector elements. Six are available in R:
* `x < y`: is x less than y?
* `x > y`: is x greater than y?
* `x <= y`: is x less than or equal to y?
* `x >= y`: is x greater than or equal to y?
* `x == y`: is x equal to y?
* `x != y`: is x not equal to y?
The easiest way to understand how these work is to simply use them. We need a couple of numeric variables first:
```
x <- 11:20
y <- seq(3, 30, by = 3)
x
```
```
## [1] 11 12 13 14 15 16 17 18 19 20
```
```
y
```
```
## [1] 3 6 9 12 15 18 21 24 27 30
```
Now, if we need to evaluate and represent a question like, “is x greater than y”, we can use either `<` or `>`:
```
x > y
```
```
## [1] TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE FALSE FALSE
```
The `x > y` expression produces a logical vector, with a `TRUE` wherever the corresponding element of `x` is greater than that of `y`, and a `FALSE` otherwise. In this example, x is greater than y until we reach the value of 15 in each sequence. Notice that relational operators are vectorised: they work on an element\-by\-element basis. They wouldn’t be much use if this were not the case.
What does the `==` operator do? It compares the elements of two vectors to determine if they are exactly equal:
```
x == y
```
```
## [1] FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE
```
The output of this comparison is true only for one element, the number 15, which is at the 5th position in both `x` and `y`. The `!=` operator is essentially the opposite of `==`. It identifies cases where two elements are not exactly equal. We could step through each of the other relational operators, but hopefully they are self\-explanatory at this point (if not, experiment with them).
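For anyone who wants to check their intuition, here are two of the remaining operators applied to the same `x` and `y`, with the expected output shown as `##` comments:
```
# which elements of x are *not* equal to the corresponding element of y?
x != y
## [1] TRUE TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE
# which elements of x are less than or equal to y?
x <= y
## [1] FALSE FALSE FALSE FALSE TRUE TRUE TRUE TRUE TRUE TRUE
```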
#### `=` and `==` are not the same
If we want to test for equivalence between the elements of two vectors we must use double equals (`==`), not single equals (`=`). The `=` symbol already has a use in R—assigning name\-value pairs—so it can’t also be used to compare vectors, because this would lead to ambiguity in our R scripts. Using `=` when you meant to use `==` is a very common mistake, and if you make it, it will lead to all kinds of difficult\-to\-comprehend problems with your scripts. Try to remember the difference!
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/extracting-subsets-of-vectors.html |
Chapter 6 Extracting subsets of vectors
=======================================
6\.1 Introduction
-----------------
At the beginning of the last chapter we said that an atomic vector is a 1\-dimensional object that contains an **ordered** collection of data values. Why is this view of a vector useful? It means that we can extract a subset of elements from a vector once we know their position in that vector. There are two main ways to subset atomic vectors, both of which we’ll cover in this chapter. Whatever the method we use, subsetting involves a pair of opening and closing square brackets (`[` and `]`). These are always used together.
6\.2 Subsetting by position
---------------------------
We can use the `[` construct with a vector to subset its elements directly using their position. Take a look at this example:
```
numvec <- c(7.2, 3.6, 2.9)
numvec[2]
```
```
## [1] 3.6
```
The `numvec` variable is a length 3 numeric vector. In order to subset it and retain only the second element we used the `[ ]` construct with the number 2 inside, placing `[2]` next to the name of the vector. Notice that we do not place a space anywhere in this construct. We could, but this is not conventional.
Remember, “the number 2” is in fact a numeric vector of length 1\. This suggests we might be able to use longer vectors to extract more than one element:
```
# make a numeric sequence
my_vec <- seq(0, 0.5, by = 0.1)
my_vec
```
```
## [1] 0.0 0.1 0.2 0.3 0.4 0.5
```
```
# make an indexing vector
i <- c(1, 3)
# extract a subset of values
my_vec[i]
```
```
## [1] 0.0 0.2
```
We first constructed a numeric vector of length 2 called `i`, which has elements `1` and `3`. We then used this to extract the first and third elements of `my_vec` by placing `i` inside the `[ ]`. We didn’t have to carry out the subsetting operation in two steps. This achieves the same result:
```
my_vec[c(1, 3)]
```
```
## [1] 0.0 0.2
```
Notice that when we subset a vector of a particular type, we get a vector of the same type back, e.g. subsetting a numeric vector produces another numeric vector.
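We can verify this with the `is.numeric` function from earlier. The subset extracted above is still a numeric vector:
```
# subsetting a numeric vector gives back a numeric vector
is.numeric(my_vec[i])
## [1] TRUE
```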
We can also subset a vector by removing certain elements. We use the `-` operator to do this. Here’s an example that produces the same result as the last example, but in a different way:
```
my_vec[-c(2, 4, 5, 6)]
```
```
## [1] 0.0 0.2
```
The `my_vec[-c(2, 4, 5, 6)]` expression indicates that we want all the elements of `my_vec` **except** those at positions 2, 4, 5, and 6\.
We have learned how to use the `[` construct with a numeric vector of integer(s) to subset the elements of a vector by their position. This works exactly the same way with character vectors:
```
c_vec <- c("Dylan", "Zachary", "Childs")
c_vec[1]
```
```
## [1] "Dylan"
```
The `c_vec` variable is a length 3 character vector, with elements corresponding to a first, middle and last name. We used the `[ ]` construct with the number 1 inside, to extract the first element (i.e. the first name). Longer vectors can be used to extract more than one element, and we can use negative numbers to remove elements:
```
# extract the first and third value
c_vec[c(1, 3)]
```
```
## [1] "Dylan" "Childs"
```
```
# drop the second value (equivalent)
c_vec[-2]
```
```
## [1] "Dylan" "Childs"
```
6\.3 Subsetting with logical operators
--------------------------------------
Subsetting vectors by position suffers from one major drawback—we have to know where the elements we want sit in the vector. A second way to subset a vector makes use of logical vectors alongside `[ ]`. This is usually done using two vectors of the same length: the focal vector we wish to subset, and a logical vector that specifies which elements to keep. Here is a very simple example:
```
i <- c(TRUE, FALSE, TRUE)
c_vec[i]
```
```
## [1] "Dylan" "Childs"
```
This should be fairly self\-explanatory. The logical vector method of subsetting works element\-by\-element, and an element in `c_vec` is retained wherever the corresponding element in `i` contains a `TRUE`; otherwise it is discarded.
Why is this better than using position indexing? After all, using a vector of positions to subset a vector is much more direct. The reason is that we can use relational operators to help us select elements according to particular criteria. This is best illustrated with an example. We’ll start by defining two vectors of the same length:
```
name <- c("cat", "dog", "wren", "pig", "owl")
name
```
```
## [1] "cat" "dog" "wren" "pig" "owl"
```
```
type <- c("mammal", "mammal", "bird", "mammal", "bird")
type
```
```
## [1] "mammal" "mammal" "bird" "mammal" "bird"
```
The first, `name`, is a character vector containing the common names of a few animals. The second, `type`, is another character vector whose elements denote the type of animal, in this case, a bird or a mammal. The vectors are arranged such that the information is associated via the position of elements in each vector (cats and dogs are mammals, a wren is a bird, and so on).
Let’s assume that we want to create a subset of `name` that only contains the names of mammals. We can do this by creating a logical vector from `type`, where the values are `TRUE` when an element is equal to `"mammal"` and `FALSE` otherwise. We know how to do this using the `==` operator:
```
i <- type == "mammal"
i
```
```
## [1] TRUE TRUE FALSE TRUE FALSE
```
We stored the result in a variable called `i`. Now all we need to do is use `i` inside the `[ ]` construct to subset `name`:
```
name[i]
```
```
## [1] "cat" "dog" "pig"
```
We did this the long way to understand the logic of subsetting vectors with logical operators. This is quite verbose though, and we usually combine the two steps into a single R expression:
```
name[type == "mammal"]
```
```
## [1] "cat" "dog" "pig"
```
We can use any of the relational operators to subset vectors like this. If we define a numeric variable that contains the mean mass (in grams) of each animal, we can use this to subset `name` according to the associated mean mass. For example, if we want a subset that contains only those animals where the mean mass is greater than 1 kg we use:
```
mass <- c(2900, 9000, 10, 18000, 2000)
name[mass > 1000]
```
```
## [1] "cat" "dog" "pig" "owl"
```
Just remember, this way of using information in one vector to create subsets of a second vector only works if the information in each is associated via the position of their respective elements. Keeping a bunch of different vectors organised like this is difficult and error prone. In the next block we’ll learn how to use something called a data frame and the `dplyr` package to make working with a collection of related vectors much easier.
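The same idea works in any direction across these vectors, so long as their elements line up by position. For example, we can pull out the masses of just the birds, or the names of the animals lighter than 100 grams (expected output shown as `##` comments):
```
# mean mass (in grams) of the birds only
mass[type == "bird"]
## [1] 10 2000
# names of the animals with a mean mass below 100 grams
name[mass < 100]
## [1] "wren"
```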
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/getting-help-1.html |
Chapter 7 Getting help
======================
7\.1 Introduction
-----------------
R has a comprehensive built\-in help system. This system is orientated around the base R functions and packages. Every good package comes with a set of **help files**. At a minimum these should provide information about the individual package functions and summaries of the data included with the package. They sometimes give descriptions of how different parts of the package should be used, and if we’re lucky, one or more “vignettes” that offer a practical demonstration of how to use a package. Other files are sometimes shipped with packages. For example, these might give an overview of the mathematical or computational theory a package relies on. We will not worry about these in this course.
We may as well get something out of the way early on. The word “help” in the phrase “help file” is a bit of a misnomer. It is probably more accurate to say R has an extensive **documentation** system. The reason we say this is that the majority of help files are associated with functions, and these kinds of files are designed first and foremost to document how a particular function or group of functions are meant to be used. For example, they describe what kinds of arguments a function can take and what kind of objects it will return to us. Help files are also written in a very precise, painful\-to\-read manner. They contain a lot of jargon which can be hard for new R users to decipher.
The take\-home message is that R help files are aimed more at experienced users than novices. Their primary purpose is to carefully document the different elements of a package, rather than explain how a particular function or the package as whole should be used to achieve a given end. That said, help files often contain useful examples, and many package authors do try to make our life easier by providing functional demonstrations of their package (those “vignettes” we mentioned above are a vehicle for doing this). It’s important to try to get to grips with the built in help system. It contains a great deal of useful information which we need to really start using R effectively. The road to enlightenment is bumpy though.
7\.2 Browsing the help system
-----------------------------
How do we access the help system? Help files are a little like mini web pages, which means we can navigate among them using hyperlinks. This makes it very easy to explore the help system. One way to begin browsing the help system uses the `help.start` function:
```
help.start()
```
If we type this now at the Console we should see the **Package Index** page open up in the **Help** tab of the bottom right pane in RStudio. This lists all the packages currently installed on a computer. We can view all the help files associated with a package by clicking on the appropriate link. For example, the functions that come with the base installation of R have a help file associated with them—click on the link to the R base package (`base`) to see these. Though we know about a few of these already, there are **a lot** of functions listed here. R is huge.
The packages that come with the base R installation and those that we install separately from base R have their own set of associated help files. These can be viewed by following the appropriate link on the **Package Index** page. We will learn how to navigate these in a moment. Take note: it is up to the developer of a package to produce usable help files. Well\-designed packages like **dplyr** and **ggplot2** have an extensive help system that covers almost everything the package can do. This isn’t always the case though, particularly with new packages or packages that are not widely used. We will only ever use well\-documented packages.
Notice that the help browser has Forward, Back, and Home buttons, just like a normal web browser. If we get lost in the mire of help pages we can always navigate backward until we get back to a familiar page. However, for some reason the Home button does not take us to the same page as `help.start`. Clicking on the home button takes us to a page with three sections:
1. The **Manuals** section looks like it might be useful for novice users. Unfortunately, it isn’t really. Even the “Introduction to R” manual is only helpful for someone with a bit of programming experience because it assumes we understand what terms like “data structure” and “data type” mean. It is worth reading this manual after gaining a bit of experience with R. The other manuals are more or less impenetrable unless the reader already knows quite a bit about computing in general.
2. The **Reference** section is a little more helpful. The “Packages” link just takes us to the same page opened by `help.start`. From here we can browse the help pages on a package\-specific basis. The “Search Engine \& Keywords” link takes us to a search engine page (no surprises there). We can use this to search for specific help pages, either by supplying a search term or by navigating through the different keywords. We’ll discuss the built\-in search engine in the next subsection.
3. The **Miscellaneous Material** section has a couple of potentially useful links. The “User Manuals” link lists any user manuals supplied by package authors. These tend to be aimed at more experienced users and the packages we will learn to use in this course do not provide them. However, it is worth knowing these exist as they are occasionally useful. The “Frequently Asked Questions” link is definitely worth reviewing at some point, but again, most of the FAQs are a little difficult for novice users to fully understand.
7\.3 Searching for help files
-----------------------------
After browsing help files via `help.start` for a bit it quickly becomes apparent that this way of searching for help is not very efficient. Quite often we know the name of the function we need to use and all we want to do is open that particular help file. We can do this with the `help` function:
```
help(topic = Trig)
```
After we run this line RStudio opens up the help file for the trigonometry topic in the **Help** tab. This file provides information about the various trigonometric functions such as `sin` or `cos`. We’ll learn how to make sense of such help pages in the next subsection. For now, we just want to see how to use `help`.
The `help` function needs a minimum of one argument: the name of the topic or function of interest. When we use it like this the help function searches across packages, looking for a help file whose name gives **an exact match** to the name we supplied. In this case, we opened the help file associated with the `Trig` topic. Most of the time we use the `help` function to find the help page for a specific function, rather than a general topic. This is fine if we can remember the name of the topic associated with different functions. Most of us cannot. Luckily, the help function will also match help pages by the name of the function(s) they cover:
```
help(topic = sin)
```
Here we searched for help on the `sin` function. This is part of the `Trig` topic so `help(topic = sin)` brings up the same page as the `help(topic = Trig)`.
There are several arguments of `help` that we can set to alter its behaviour. We will just consider one of these. By default, the `help` function only searches for files associated with the base functions or with packages that we have loaded in the current session with the `library` function. If we want to search for help on the `mutate` function—part of the `dplyr` package—but we haven’t run `library(dplyr)` in the current session this will fail:
```
help(mutate)
```
```
## Help on topic 'mutate' was found in the following packages:
##
## Package Library
## plyr /Library/Frameworks/R.framework/Versions/3.5/Resources/library
## dplyr /Library/Frameworks/R.framework/Versions/3.5/Resources/library
##
##
## Using the first match ...
```
Instead, we need tell `help` where to look by setting the `package` argument:
```
help(mutate, package = dplyr)
```
It’s good practice to use `help` every time we’re struggling with a particular function. Even very experienced R users regularly forget how to use the odd function and have to dive into the help. It’s for this reason that R has a built in shortcut for `help`. This is accessed via `?`. For example, instead of typing `help(topic = sin)` into the Console we can bring up the help page for the `sin` function by using `?` like this:
```
?sin
```
This is just a convenient shortcut that does the same thing as `help`. The only difference is that `?` does not allow us to set arguments such as `package`.
7\.4 Navigating help files
--------------------------
Navigating a typical help file is a little daunting at first. These files can be quite long and they contain a lot of jargon. The help files associated with functions – the most common type – have a consistent structure though. There are a number of distinct sections, whose order is always the same. Wrestling with a help file is much easier if we at least understand what each section is for. After the title, there are eight sections we need to know about:
1. **Description** gives us a short overview of what the function is meant to be used for. If the help page covers a family of related functions it gives a collective overview of all the functions. Always read this before diving into the rest of the help file.
2. **Usage** shows how the function(s) are meant to be used. It lists each member of the family as well as their common arguments. The argument names are listed on their own if they have no default, or in name\-value pairs, where the value gives the default used should we choose not to set it ourselves when we call the function.
3. **Arguments** lists each of the allowed arguments, along with a short description of what they influence. This also tells us what kind of data we are allowed to use with each argument, along with the allowable values (if this is relevant). Always read this section.
4. **Details** describes precisely how the function(s) behave, often in painful, jargon\-laden detail. It is usually the longest and hardest\-to\-comprehend section but is worth reading as it flags up common “gotchas”. We can sometimes get away with ignoring this section, but when we really want to understand a function we need to wade through it.
5. **Value** explains what kind of data structure or object a function returns to us when it finishes doing whatever it does. It’s often possible to guess what this will be from the type of function, but nonetheless, it is usually a good idea to check whether our reasoning is correct. If it isn’t, we probably don’t understand the function yet.
6. **References** just lists the key reference(s). These are worth digging out if we really need to know the ‘hows’ and ‘whys’ of a function. We can often skip this information. The one exception is if the function implements a particular statistical tool. In that case it’s sensible to go away and read the literature before trying to use it.
7. **See Also** gives links to the help pages of related functions. These are usually functions that do something similar to the function of interest or are meant to be used in conjunction with it. We can often learn quite a bit about packages or related functions by following the links in this section.
8. **Examples** provides one or more examples of how to use the function. These are stand\-alone examples, so there’s nothing to stop us running them (the short code block after this list shows one way to do that). This is often the most useful section of all. Seeing a function in action is a very good way to cut through the jargon and understand how it works.
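Base R also ships with a small helper function called `example` that looks up a help page and runs the code in its **Examples** section for us. Treat this as an optional shortcut rather than something we need to memorise:
```
# Run the code from the 'Examples' section of the help page covering `sin`
example(topic = sin)
```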
7\.5 Vignettes and demos
------------------------
The Oxford English Dictionary defines a vignette as, “A brief evocative description, account, or episode.” The purpose of a package vignette in R is to provide a relatively brief, practical account of one or more of its features. Not all packages come with vignettes, though many of the best thought out packages do. We use the `vignette` function to view all the available vignettes in RStudio. This will open up a tab that lists each vignette under the name of its associated package, along with a brief description. A package will often have more than one vignette. If we just want to see the vignettes associated with a particular package, we have to set the `package` argument. For example, to see the vignettes associated with **dplyr** we use:
```
vignette(package = "dplyr")
```
Each vignette has a name (the “topic”) and is available either as a PDF or HTML file (or both). We can view a particular vignette by passing the `vignette` function the `package` and `topic` arguments. For example, to view the “data\_frames” vignette in the **`dplyr`** package we would use:
```
vignette(topic = "data_frames", package = "dplyr")
```
The `vignette` function is fine, though it is usually more convenient to browse the list of vignettes inside a web browser. This allows us to open a particular vignette directly by clicking on its link, rather than working at the Console. We can use the `browseVignettes` function to do this:
```
browseVignettes()
```
This will open a page in our browser showing the vignettes we can view. As one should expect by now, we can narrow our options to a specific package by setting the `package` argument.
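For example, to restrict the browser page to just the vignettes supplied with **dplyr** we could use:
```
# Only show the vignettes that ship with the dplyr package
browseVignettes(package = "dplyr")
```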
In addition to vignettes, some packages also include one or more ‘demos’ (demonstrations). Demos are a little like vignettes, but instead of just opening a file for us to read, the `demo` function actually runs a demonstration R script. We use the `demo` function (without any arguments) to list the available demos:
```
demo()
```
When we use the `demo` function like this it only lists the demos associated with packages that have been loaded in the current session (via `library`). If we want to see all the demos we can run we need to use the somewhat cryptic `demo(package = .packages(all.available = TRUE))`.
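Written out as a code block so it is easier to copy, that command is:
```
# List every demo from every installed package, not just the loaded ones
demo(package = .packages(all.available = TRUE))
```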
In order to actually run a demo we use the `demo` function, setting the `topic` and `package` arguments. For example, to run the “colors” demo in the **grDevices** package we would use:
```
demo(colors, package = "grDevices", ask = FALSE)
```
This particular demo shows off some of the pre\-defined colours we might use to customise the appearance of a plot. We’ve suppressed the output though because so much is produced.
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/packages.html |
Chapter 8 Packages
==================
8\.1 The R package system
-------------------------
The R package system is probably the most important single factor driving increased adoption of R among quantitatively\-minded scientists. Packages make it very easy to extend the basic capabilities of R. In [his book](http://r-pkgs.had.co.nz) about R packages, Hadley Wickham says,
> Packages are the fundamental units of reproducible R code. They include reusable R functions, the documentation that describes how to use them, and sample data.
An R package is just a collection of folders and files in a standard, well\-defined format. They bundle together computer code, data, and documentation in a way that is easy to use and share with other users. The computer code might all be R code, but it can also include code written in other languages. Packages provide an R\-friendly interface to use this “foreign” code without the need to understand how it works.
The base R distribution comes with quite a few pre\-installed packages. These are “mature” packages that implement widely used statistical and plotting functionality. These base R packages represent a very small subset of the available R packages. The majority of R packages are hosted on a network of web servers around the world collectively known as [CRAN](http://cran.r-project.org). This network—known as a repository—is the same one we used to download the base R distribution in the [Get up and running with R and RStudio](get-up-and-running-with-r-and-rstudio.html#get-up-and-running-with-r-and-rstudio) chapter. CRAN stands for the Comprehensive R Archive Network, pronounced either “see\-ran” or “kran”. CRAN is a fairly spartan web site, so it’s easy to navigate.
When we [navigate to CRAN](http://cran.r-project.org) we see about a dozen links on the right hand side of the home page. Under the *Software* section there is a link called [Packages](http://cran.r-project.org/web/packages/). Near the top of this page there is a link called [Table of available packages, sorted by name](http://cran.r-project.org/web/packages/available_packages_by_name.html) that points to a very long list of all the packages on CRAN. The column on the left shows each package name, followed by a brief description of what the package does on the right. There are a huge number of packages here (over 12000 at the time of writing).
8\.2 Task views
---------------
A big list of packages presented like this is overwhelming. Unless we already know the name of the package we want to investigate, it’s very hard to find anything useful by scanning the “all packages” table. A more user\-friendly view of many R packages can be found on the [Task Views](http://cran.r-project.org/web/views/) page (the link is on the left hand side, under the section labelled *CRAN*). A Task View is basically a curated guide to the packages and functions that are useful for certain disciplines. The Task Views page shows a list of these discipline\-specific topics, along with a brief description.
The [Environmetrics](http://cran.r-project.org/web/views/Environmetrics.html) Task View, maintained by Gavin Simpson, contains information about using R to analyse ecological and environmental data. It is not surprising this Task View exists. Ecologists and environmental scientists are among the most enthusiastic R users. This view is a good place to start looking for a new package to support a particular analysis in a future project. The [Experimental Design](http://cran.r-project.org/web/views/ExperimentalDesign.html), [Graphics](http://cran.r-project.org/web/views/Graphics.html), [Multivariate](http://cran.r-project.org/web/views/Multivariate.html), [Phylogenetics](http://cran.r-project.org/web/views/Phylogenetics.html), [Spatial](http://cran.r-project.org/web/views/Spatial.html), [Survival](http://cran.r-project.org/web/views/Survival.html) and [Time Series](http://cran.r-project.org/web/views/TimeSeries.html) Task Views all contain many useful packages for biologists and environmental scientists.
8\.3 Using packages
-------------------
Two things need to happen in order for us to use a package. First, we need to ensure that a copy of the folders and files that make up the package are copied to an appropriate folder on our computer. This process of putting the package files into the correct location is called **installing** the package. Second, we need to **load and attach** the package for use in a particular R session. As always, the word “session” refers to the time between when we start up R and close it down again. It’s worth unpacking these two ideas a bit, because packages are a frequent source of confusion for new users:
* If we don’t have a copy of a package’s folders and files in the right format and the right place on our computer we can’t use it. This is probably fairly obvious. The process of making this copy is called **installing** the package. It is possible to manually install packages by going to the CRAN website, downloading the package, and then using various tools to install it. We won’t be using this approach though because it’s both inefficient and error prone. Instead, we’ll use built\-in R functions to grab the package from CRAN and install it for us, all in one step.
* We don’t need to re\-install a package we plan to use every time we start a new R session. It is worth saying that again, **there is no need to install a package every time we start up R / RStudio**. Once we have a copy of the package on our hard drive it will remain there for us to use. The only exception to this rule is that a major update to R (not RStudio!) will sometimes require a complete re\-install of the packages. This is because the R installer will not copy installed packages to the major new version of R. These major updates are fairly infrequent though, occurring perhaps every 1\-2 years.
* Installing a package does nothing more than place a copy of the relevant files on our hard drive. If we actually want to use the functions or the data that comes with a package we need to make them available in our current R session. Unlike package installation, this **load and attach** process, as it’s known, has to be repeated every time we restart R. If we forget to load up the package we can’t use it. (The short example after this list shows the two steps side by side.)
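To make the distinction concrete, here are the two steps together, using the **fortunes** package that we install and load later in the chapter:
```
# Step 1 -- installing (do once): copy the package's files onto our computer
install.packages("fortunes")
# Step 2 -- load and attach (do every session): make the package usable
library("fortunes")
```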
### 8\.3\.1 Viewing installed packages
We sometimes need to check whether a package is currently installed. RStudio provides a simple, intuitive way to see which packages are installed on our computer. The **Packages** tab in the top right pane of RStudio shows the name of every installed package, a brief description (the same one seen on CRAN) and a version number. We can also manage our packages from this tab, as we are about to find out.
There are also a few R functions that can be used to check whether a package is currently installed. For example, the `find.package` function can do this:
```
find.package("MASS")
```
```
## [1] "/Library/Frameworks/R.framework/Versions/3.5/Resources/library/MASS"
```
This either prints a “file path” showing us where the package is located, or returns an error if the package can’t be found. Alternatively, the function called `installed.packages` returns something called a data frame (these are discussed later in the book) containing a lot more information about the installed packages.
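If we just want a quick TRUE/FALSE answer at the Console, one simple option is to look for the package name among the row names of whatever `installed.packages` returns:
```
# The row names of installed.packages() are the names of the installed
# packages, so this returns TRUE if dplyr is installed and FALSE otherwise
"dplyr" %in% rownames(installed.packages())
```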
### 8\.3\.2 Installing packages
R packages can be installed from a number of different sources. For example, they can be installed from a local file on a computer, from the CRAN repository, or from a different kind of online repository called Github. Although various alternatives to CRAN are becoming more popular, we’re only going to worry about installing packages that live on CRAN in this book. This is no bad thing—the packages that live outside CRAN tend to be a little more experimental.
In order to install a package from an online repository like CRAN we have to first download the package files, possibly uncompress them (like we would a ZIP file), and move them to the correct location. All of this can be done at the Console using a single function: `install.packages`. For example, if we want to install a package called **fortunes**, we use:
```
install.packages("fortunes")
```
The quotes are necessary, by the way. If everything is working—we have an active internet connection, the package name is valid, and so on—R will briefly pause while it communicates with the CRAN servers; we should see some red text reporting back what’s happening, and then we’re returned to the prompt. The red text is just letting us know what R is up to. As long as this text does not include the word “error”, there is usually no need to worry about it.
There is nothing to stop us using `install.packages` to install more than one package at a time. We are going to use **dplyr** and **ggplot2** later in the book. Since neither of these is part of the base R distribution, we need to download and install them from CRAN. Here’s one way to do this:
```
pckg.names <- c("dplyr", "ggplot2")
install.packages(pckg.names)
```
There are a couple of things to keep in mind. First, package names are case sensitive. For example, **fortunes** is not the same as **Fortunes**. Quite often package installations fail because we used the wrong case somewhere in the package name. The other aspect of packages we need to know about is related to **dependencies**: some packages rely on other packages in order to work properly. By default `install.packages` will install these dependencies, so we don’t usually have to worry too much about them. Just don’t be surprised if the `install.packages` function installs more than one package when only one was requested.
#### Install dplyr and ggplot2
We’re going to be using **dplyr** and **ggplot2** packages later in the book. If they aren’t already installed on your computer (check with `find.package`), now is a good time to install them so they’re ready to use later.
RStudio provides a way of interacting with `install.packages` via point\-and\-click. The **Packages** tab has an “Install” button at the top right. Clicking on this brings up a small window with three main fields: “Install from”, “Packages”, and “Install to Library”. We only need to work with the “Packages” field – the other two can be left at their defaults. When we start typing in the first few letters of a package name (e.g. **dplyr**) RStudio provides a list of available packages that match this. After we select the one we want and click the “Install” button, RStudio invokes `install.packages` with the appropriate arguments at the Console for us.
#### Never use `install.packages` in scripts
Because installing a package is a “do once” operation, it is almost never a good idea to place `install.packages` in a typical R script. A script may be run 100s of times as we develop an analysis. Installing a package is quite time consuming, so we don’t really want to do it every time we run our analysis. As long as the package has been installed at some point in the past it is ready to be used and the script will work fine without re\-installing it.
### 8\.3\.3 Loading and attaching packages
Once we’ve installed a package or two we’ll probably want to actually use them. Two things have to happen to access a package’s facilities: the package has to be loaded into memory, and then it has to be attached to something called a search path so that R can find it. It is beyond the scope of this book to get into the “how” and “why” of these events. Fortunately, there’s no need to worry about these details, as both loading and attaching can be done in a single step with a function called `library`. The `library` function works exactly as we might expect it to. If we want to start using the `fortunes` package—which was just installed above—all we need is:
```
library("fortunes")
```
Nothing much happens if everything is working as it should. R just returns us to the prompt without printing anything to the Console. The difference is that now we can use the functions that **fortunes** provides. As it turns out, there is only one, called `fortune`:
```
fortune()
```
```
##
## Friends don't let friends use Excel for statistics!
## -- Jonathan D. Cryer (about problems with using Microsoft Excel for
## statistics)
## JSM 2001, Atlanta (August 2001)
```
The **fortunes** package is either very useful or utterly pointless, depending on one’s perspective. It dispenses quotes from various R experts, delivered to the venerable R mailing list (some of these are even funny).
Once again, if we really don’t like working in the Console RStudio can help us out. There is a small button next to each package listed in the **Packages** tab. Packages that have been loaded and attached have a blue check box next to them, whereas this is absent from those that have not. Clicking on an empty check box will load up the package. Try this. Notice that all it does is invoke `library` with the appropriate arguments for us (RStudio explicitly sets the `lib.loc` argument, whereas above we just relied on the default value).
### Don’t use RStudio for loading packages!
We only looked at how the point\-and\-click route works because, at some point, most people realise they can use RStudio to load and attach packages. We don’t recommend using this route though. It’s much better to put `library` statements into a script. Why? Because if we rely on RStudio to load packages, we have to do this every time we want to run a script, and if we forget one we need, the script won’t work. This is another example of where relying on RStudio ultimately makes things more, not less, challenging.
One last tip: we can use `library` anywhere, but typically the `library` expressions live at the very beginning of a script so that everything is ready to use later on.
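For example, the opening lines of a hypothetical analysis script that uses the two packages installed earlier might look something like this (the file name and comments are purely illustrative):
```
## my-analysis.R (hypothetical script name)
# Load and attach every package the script needs, right at the top
library("dplyr")
library("ggplot2")

# ... the rest of the analysis goes below ...
```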
### 8\.3\.4 An analogy
The package system often confuses new users. The reason for this stems from the fact that they aren’t clear about what the `install.packages` and `library` functions are doing. One way to think about these is by analogy with smartphone “Apps”. Think of an R package as being analogous to a smartphone App— a package effectively extends what R can do, just as an App extends what a phone can do.
When we want to try out a new App we have to first download it from an App store and install it on our phone. Once it has been downloaded, an App lives permanently on the phone (unless we delete it!) and can be used whenever it’s needed. Downloading and installing the App is something we only have to do once. Packages are no different. When we want to use an R package we first have to make sure it is installed on the computer. This is effectively what `install.packages` does: it grabs the package from CRAN (the “App store”) and installs it on our computer. Installing a package is a “do once” operation. Once we’ve installed it, we don’t need to install a package again each time we restart R. The package sits on the hard drive, ready to be used.
In order to actually use an App which has been installed on a phone we open it up by tapping on its icon. This obviously has to happen every time we want to use the App. The package equivalent of opening a smartphone App is the “load and attach” operation. This is what `library` does. It makes a package available for use in a particular session. We have to use `library` to load the package every time we start a new R session if we plan to access the functions in that package: loading and attaching a package via `library` is a “do every time” operation.
8\.1 The R package system
-------------------------
The R package system is probably the most important single factor driving increased adoption of R among quantitatively\-minded scientists. Packages make it very easy to extend the basic capabilities of R. In [his book](http://r-pkgs.had.co.nz) about R packages Hadley Wickam says,
> Packages are the fundamental units of reproducible R code. They include reusable R functions, the documentation that describes how to use them, and sample data.
An R package is just a collection of folders and files in a standard, well\-defined format. They bundle together computer code, data, and documentation in a way that is easy to use and share with other users. The computer code might all be R code, but it can also include code written in other languages. Packages provide an R\-friendly interface to use this “foreign” code without the need to understand how it works.
The base R distribution it comes with quiet a few pre\-installed packages. These are “mature” packages that implement widely used statistical and plotting functionality. These base R packages represent a very small subset of the available R packages. The majority of these are hosted on a network of web servers around the world collectively know as [CRAN](http://cran.r-project.org). This network—known as a repository—is the same one we used to download the base R distribution in the [Get up and running with R and RStudio](get-up-and-running-with-r-and-rstudio.html#get-up-and-running-with-r-and-rstudio) chapter. CRAN stands for the Comprehensive R Archive Network, pronounced either “see\-ran” or “kran”. CRAN is a fairly spartan web site, so it’s easy to navigate.
When we [navigate to CRAN](http://cran.r-project.org) we see about a dozen links of the right hand side of the home page. Under the *Software* section there is a link called [Packages](http://cran.r-project.org/web/packages/). Near the top of this page there is a link called [Table of available packages, sorted by name](http://cran.r-project.org/web/packages/available_packages_by_name.html) that points to a very long list of all the packages on CRAN. The column on the left shows each package name, followed by a brief description of what the package does on the right. There are a huge number of packages here (over 12000 at the time of writing).
8\.2 Task views
---------------
A big list of packages presented like is overwhelming. Unless we already know the name of the package we want to investigate, it’s very hard to find anything useful by scanning the “all packages” table. A more user\-friendly view of many R packages can be found on the [Task Views](http://cran.r-project.org/web/views/) page (the link is on the left hand side, under the section labelled *CRAN*). A Task View is basically a curated guide to the packages and functions that are useful for certain disciplines. The Task Views page shows a list of these discipline\-specific topics, along with a brief description.
The [Environmentrics](http://cran.r-project.org/web/views/Environmetrics.html) Task View maintained by Gavin Simpson contains information about using R to analyse ecological and environmental data. It is not surprising this Task View exists. Ecologists and environmental scientists are among the most enthusiastic R users. This view is a good place to start looking for a new package to support a particular analysis in a future project. The [Experimental Design](http://cran.r-project.org/web/views/ExperimentalDesign.html), [Graphics](http://cran.r-project.org/web/views/Graphics.html), [Multivariate](http://cran.r-project.org/web/views/Multivariate.html), [Phylogenetics](http://cran.r-project.org/web/views/Phylogenetics.html), [Spatial](http://cran.r-project.org/web/views/Spatial.html), [Survival](http://cran.r-project.org/web/views/Survival.html) and [Time Series](http://cran.r-project.org/web/views/TimeSeries.html) Task Views all contain many useful packages for biologists and environmental scientists.
8\.3 Using packages
-------------------
Two things need to happen in order for us to use a package. First, we need to ensure that a copy of the folders and files that make up the package are copied to an appropriate folder on our computer. This process of putting the package files into the correct location is called **installing** the package. Second, we need to **load and attach** the package for use in a particular R session. As always, the word “session” refers to the time between when we start up R and close it down again. It’s worth unpacking these two ideas a bit, because packages are a frequent source of confusion for new users:
* If we don’t have a copy of a package’s folders and files in the right format and the right place on our computer we can’t use it. This is probably fairly obvious. The process of making this copy is called **installing** the package. It is possible to manually install packages by going to the CRAN website, downloading the package, and then using various tools to install it. We won’t be using this approach though because it’s both inefficient and error prone. Instead, we’ll use built\-in R functions to grab the package from CRAN and install it for us, all in one step.
* We don’t need to re\-install a packages we plan to use every time we start a new R session. It is worth saying that again, **there is no need to install a package every time we start up R / RStudio**. Once we have a copy of the package on our hard drive it will remain there for us to use. The only exception to this rule is that a major update to R (not RStudio!) will sometimes require a complete re\-install of the packages. This is because the R installer will not copy installed packages to the major new version of R. These major updates are fairly infrequent though, occurring perhaps every 1\-2 years.
* Installing a package does nothing more than place a copy of the relevant files on our hard drive. If we actually want to use the functions or the data that comes with a package we need to make them available in our current R session. Unlike package installation this **load and attach** process as it’s known has to be repeated every time we restart R. If we forget to load up the package we can’t use it.
### 8\.3\.1 Viewing installed packages
We sometimes need to check whether a package is currently installed. RStudio provides a simple, intuitive way to see which packages are installed on our computer. The **Packages** tab in the top right pane of RStudio shows the name of every installed package, a brief description (the same one seen on CRAN) and a version number. We can also manage our packages from this tab, as we are about to find out.
There are also a few R functions that can be used to check whether a package is currently installed. For example, the `find.package` function can do this:
```
find.package("MASS")
```
```
## [1] "/Library/Frameworks/R.framework/Versions/3.5/Resources/library/MASS"
```
This either prints a “file path” showing us where the package is located, or returns an error if the package can’t be found. Alternatively, the function called `installed.packages` returns something called a data frame (these are discussed later in the book) containing a lot more information about the installed packages.
### 8\.3\.2 Installing packages
R packages can be installed from a number of different sources. For example, they can be installed from a local file on a computer, from the CRAN repository, or from a different kind of online repository called Github. Although various alternatives to CRAN are becoming more popular, we’re only going to worry about installing packages that live on CRAN in this book. This is no bad thing—the packages that live outside CRAN tend to be a little more experimental.
In order to install a package from an online repository like CRAN we have to first download the package files, possibly uncompress them (like we would a ZIP file), and move them to the correct location. All of this can be done at the Console using a single function: `install.packages`. For example, if we want to install a package called **fortunes**, we use:
```
install.packages("fortunes")
```
The quotes are necessary by the way. If everything is working—we have an active internet connection, the package name is valid, and so on—R will briefly pause while it communicates with the CRAN servers, we should see some red text reporting back what’s happening, and then we’re returned to the prompt. The red text is just letting us know what R is up to. As long as this text does not include the word “error”, there is usually no need to worry about it.
There is nothing to stop us using `install.packages` to install more than one package at a time. We are going to use **dplyr** and **ggplot2** later in the book. Since neither of these is part of the base R distribution, we need to download and install them from CRAN. Here’s one way to do this:
```
pckg.names <- c("dplyr", "ggplot2")
install.packages(pckg.names)
```
There are a couple of things to keep in mind. First, package names are case sensitive. For example, **fortunes** is not the same as **Fortunes**. Quite often package installations fail because we used the wrong case somewhere in the package name. The other aspect of packages we need to know about is related to **dependencies**: some packages rely on other packages in order to work properly. By default `install.packages` will install these dependencies, so we don’t usually have to worry too much about them. Just don’t be surprised if the `install.packages` function installs more than one package when only one was requested.
#### Install dplyr and ggplot2
We’re going to be using **dplyr** and **ggplot2** packages later in the book. If they aren’t already installed on your computer (check with `find.package`), now is a good time to install them so they’re ready to use later.
RStudio provides a way of interacting with `install.packages` via point\-and\-click. The **Packages** tab has an “Install”" button at the top right. Clicking on this brings up a small window with three main fields: “Install from”, “Packages”, and “Install to Library”. We only need to work with the “Packages” field – the other two can be left at their defaults. When we start typing in the first few letters of a package name (e.g. **dplyr**) RStudio provides a list of available packages that match this. After we select the one we want and click the “Install” button, RStudio invokes `install.packages` with the appropriate arguments at the Console for us.
#### Never use `install.packages` in scripts
Because installing a package is a “do once” operation, it is almost never a good idea to place `install.packages` in a typical R script. A script may be run 100s of times as we develop an analysis. Installing a package is quite time consuming, so we don’t really want to do it every time we run our analysis. As long as the package has been installed at some point in the past it is ready to be used and the script will work fine without re\-installing it.
### 8\.3\.3 Loading and attaching packages
Once we’ve installed a package or two we’ll probably want to actually use them. Two things have to happen to access a package’s facilities: the package has to be loaded into memory, and then it has to attached to something called a search path so that R can find it. It is beyond the scope of this book to get in to “how” and “why” of these events. Fortunately, there’s no need to worry about these details, as both loading and attaching can be done in a single step with a function called `library`. The `library` function works exactly as we might expect it to. If we want to start using the `fortunes` package—which was just installed above—all we need is:
```
library("fortunes")
```
Nothing much happens if everything is working as it should. R just returns us to the prompt without printing anything to the Console. The difference is that now we can use the functions that **fortunes** provides. As it turns out, there is only one, called fortune:
```
fortune()
```
```
##
## Friends don't let friends use Excel for statistics!
## -- Jonathan D. Cryer (about problems with using Microsoft Excel for
## statistics)
## JSM 2001, Atlanta (August 2001)
```
The **fortunes** package is either very useful or utterly pointless, depending on ones perspective. It dispenses quotes from various R experts delivered to the venerable R mailing list (some of these are even funny).
Once again, if we really don’t like working in the Console RStudio can help us out. There is a small button next to each package listed in the **Packages** tab. Packages that have been loaded and attached have a blue check box next to them, whereas this is absent from those that have not. Clicking on an empty check box will load up the package. Try this. Notice that all it does is invoke `library` with the appropriate arguments for us (RStudio explicitly sets the `lib.loc` argument, whereas above we just relied on the default value).
### Don’t use RStudio for loading packages!
We looked at how it works, because at some point most people realise they can use RStudio to load and attach packages. We don’t recommend using this route though. It’s much better to put `library` statements into a script. Why? Because if we rely on RStudio to load packages, we have to do this every time we want to run a script, and if we forget one we need, the script won’t work. This is another example of where relying on RStudio ultimately makes things more, not less, challenging.
One last tip: we can use library anywhere, but typically the `library` expressions live at the very beginning of a script so that everything is ready to use later on.
### 8\.3\.4 An analogy
The package system often confuses new users. The reason for this stems from the fact that they aren’t clear about what the `install.packages` and `library` functions are doing. One way to think about these is by analogy with smartphone “Apps”. Think of an R package as being analogous to a smartphone App— a package effectively extends what R can do, just as an App extends what a phone can do.
When we want to try out a new App we have to first download it from an App store and install it on our phone. Once it has been downloaded, an App lives permanently on the phone (unless we delete it!) and can be used whenever it’s needed. Downloading and installing the App is something we only have to do once. Packages are no different. When we want to use an R package we first have to make sure it is installed on the computer. This is effectively what `install.packages` does: it grabs the package from CRAN (the “App store”) and installs it on our computer. Installing a package is a “do once” operation. Once we’ve installed it, we don’t need to install a package again each time we restart R. The package is sat on the hard drive ready to be used.
In order to actually use an App which has been installed on a phone we open it up by tapping on its icon. This obviously has to happen every time we want to use the App. The package equivalent of opening a smartphone App is the “load and attach” operation. This is what `library` does. It makes a package available for use in a particular session. We have to use `library` to load the package every time we start a new R session if we plan to access the functions in that package: loading and attaching a package via `library` is a “do every time” operation.
### 8\.3\.1 Viewing installed packages
We sometimes need to check whether a package is currently installed. RStudio provides a simple, intuitive way to see which packages are installed on our computer. The **Packages** tab in the top right pane of RStudio shows the name of every installed package, a brief description (the same one seen on CRAN) and a version number. We can also manage our packages from this tab, as we are about to find out.
There are also a few R functions that can be used to check whether a package is currently installed. For example, the `find.package` function can do this:
```
find.package("MASS")
```
```
## [1] "/Library/Frameworks/R.framework/Versions/3.5/Resources/library/MASS"
```
This either prints a “file path” showing us where the package is located, or returns an error if the package can’t be found. Alternatively, the function called `installed.packages` returns something called a data frame (these are discussed later in the book) containing a lot more information about the installed packages.
### 8\.3\.2 Installing packages
R packages can be installed from a number of different sources. For example, they can be installed from a local file on a computer, from the CRAN repository, or from a different kind of online repository called Github. Although various alternatives to CRAN are becoming more popular, we’re only going to worry about installing packages that live on CRAN in this book. This is no bad thing—the packages that live outside CRAN tend to be a little more experimental.
In order to install a package from an online repository like CRAN we have to first download the package files, possibly uncompress them (like we would a ZIP file), and move them to the correct location. All of this can be done at the Console using a single function: `install.packages`. For example, if we want to install a package called **fortunes**, we use:
```
install.packages("fortunes")
```
The quotes are necessary by the way. If everything is working—we have an active internet connection, the package name is valid, and so on—R will briefly pause while it communicates with the CRAN servers, we should see some red text reporting back what’s happening, and then we’re returned to the prompt. The red text is just letting us know what R is up to. As long as this text does not include the word “error”, there is usually no need to worry about it.
There is nothing to stop us using `install.packages` to install more than one package at a time. We are going to use **dplyr** and **ggplot2** later in the book. Since neither of these is part of the base R distribution, we need to download and install them from CRAN. Here’s one way to do this:
```
pckg.names <- c("dplyr", "ggplot2")
install.packages(pckg.names)
```
There are a couple of things to keep in mind. First, package names are case sensitive. For example, **fortunes** is not the same as **Fortunes**. Quite often package installations fail because we used the wrong case somewhere in the package name. The other aspect of packages we need to know about is related to **dependencies**: some packages rely on other packages in order to work properly. By default `install.packages` will install these dependencies, so we don’t usually have to worry too much about them. Just don’t be surprised if the `install.packages` function installs more than one package when only one was requested.
#### Install dplyr and ggplot2
We’re going to be using **dplyr** and **ggplot2** packages later in the book. If they aren’t already installed on your computer (check with `find.package`), now is a good time to install them so they’re ready to use later.
RStudio provides a way of interacting with `install.packages` via point\-and\-click. The **Packages** tab has an “Install”" button at the top right. Clicking on this brings up a small window with three main fields: “Install from”, “Packages”, and “Install to Library”. We only need to work with the “Packages” field – the other two can be left at their defaults. When we start typing in the first few letters of a package name (e.g. **dplyr**) RStudio provides a list of available packages that match this. After we select the one we want and click the “Install” button, RStudio invokes `install.packages` with the appropriate arguments at the Console for us.
#### Never use `install.packages` in scripts
Because installing a package is a “do once” operation, it is almost never a good idea to place `install.packages` in a typical R script. A script may be run 100s of times as we develop an analysis. Installing a package is quite time consuming, so we don’t really want to do it every time we run our analysis. As long as the package has been installed at some point in the past it is ready to be used and the script will work fine without re\-installing it.
### 8\.3\.3 Loading and attaching packages
Once we’ve installed a package or two we’ll probably want to actually use them. Two things have to happen to access a package’s facilities: the package has to be loaded into memory, and then it has to attached to something called a search path so that R can find it. It is beyond the scope of this book to get in to “how” and “why” of these events. Fortunately, there’s no need to worry about these details, as both loading and attaching can be done in a single step with a function called `library`. The `library` function works exactly as we might expect it to. If we want to start using the `fortunes` package—which was just installed above—all we need is:
```
library("fortunes")
```
Nothing much happens if everything is working as it should. R just returns us to the prompt without printing anything to the Console. The difference is that now we can use the functions that **fortunes** provides. As it turns out, there is only one, called fortune:
```
fortune()
```
```
##
## Friends don't let friends use Excel for statistics!
## -- Jonathan D. Cryer (about problems with using Microsoft Excel for
## statistics)
## JSM 2001, Atlanta (August 2001)
```
The **fortunes** package is either very useful or utterly pointless, depending on ones perspective. It dispenses quotes from various R experts delivered to the venerable R mailing list (some of these are even funny).
Once again, if we really don’t like working in the Console RStudio can help us out. There is a small button next to each package listed in the **Packages** tab. Packages that have been loaded and attached have a blue check box next to them, whereas this is absent from those that have not. Clicking on an empty check box will load up the package. Try this. Notice that all it does is invoke `library` with the appropriate arguments for us (RStudio explicitly sets the `lib.loc` argument, whereas above we just relied on the default value).
### Don’t use RStudio for loading packages!
We only looked at how this works because, at some point, most people realise they can use RStudio to load and attach packages. We don’t recommend using this route though. It’s much better to put `library` statements into a script. Why? Because if we rely on RStudio to load packages, we have to do this by hand every time we want to run a script, and if we forget one we need, the script won’t work. This is another example of where relying on RStudio ultimately makes things more, not less, challenging.
One last tip: we can use `library` anywhere, but typically the `library` expressions live at the very beginning of a script so that everything is ready to use later on.
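For example, the opening lines of a typical analysis script might look something like this (assuming the packages have already been installed):
```
# load and attach the packages used by this script
library("dplyr")
library("ggplot2")
```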
### 8\.3\.4 An analogy
The package system often confuses new users, usually because they aren’t clear about what the `install.packages` and `library` functions are doing. One way to think about these is by analogy with smartphone “Apps”. Think of an R package as being analogous to a smartphone App—a package effectively extends what R can do, just as an App extends what a phone can do.
When we want to try out a new App we have to first download it from an App store and install it on our phone. Once it has been downloaded, an App lives permanently on the phone (unless we delete it!) and can be used whenever it’s needed. Downloading and installing the App is something we only have to do once. Packages are no different. When we want to use an R package we first have to make sure it is installed on the computer. This is effectively what `install.packages` does: it grabs the package from CRAN (the “App store”) and installs it on our computer. Installing a package is a “do once” operation. Once we’ve installed it, we don’t need to install a package again each time we restart R. The package is sat on the hard drive ready to be used.
In order to actually use an App which has been installed on a phone we open it up by tapping on its icon. This obviously has to happen every time we want to use the App. The package equivalent of opening a smartphone App is the “load and attach” operation. This is what `library` does. It makes a package available for use in a particular session. We have to use `library` to load the package every time we start a new R session if we plan to access the functions in that package: loading and attaching a package via `library` is a “do every time” operation.
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/data-frames.html |
Chapter 9 Data frames
=====================
9\.1 Introduction
-----------------
We learned in the [A quick introduction to R](a-quick-introduction-to-r.html#a-quick-introduction-to-r) chapter that the word “variable” is used as short hand for any kind of named object. For example, we can make a variable called `num_vec` that refers to a simple numeric vector using:
```
num_vec <- 1:10
```
When a computer scientist talks about variables they’re usually referring to these sorts of name\-value associations. However, the word “variable” has a second, more abstract meaning in the world of data analysis and statistics: it refers to anything we can control or measure. For example, if our data comes from an experiment, the data will typically involve variables whose values describe the experimental conditions (e.g. “control plots” vs. “fertiliser plots”) and the quantities we chose to measure (e.g. species biomass and diversity).
These kinds of abstract variables are often called “statistical variables”. Statistical variables can be further broken down into a range of different types, such as numeric and categorical variables. We’ll discuss these later on in the [Exploratory data analysis](exploratory-data-analysis.html#exploratory-data-analysis) chapter. The reason we’re pointing out the dual meaning of the word “variable” here is because we need to work with both interpretations. The dual meaning can be confusing, but both meanings are in widespread use so we just have to get used to them. We’ll try to minimise confusion by using the phrase “statistical variable” when we are referring to data, rather than R objects.
We’re introducing these ideas now because we’re going to consider a new type of data object in R: the **data frame**. Real world data analysis involves collections of data (“data sets”) made up of several related statistical variables. We’ve seen that an atomic vector can only be used to store one type of data, such as a collection of numbers. This means a vector can be used to store a single statistical variable. How should we keep a large collection of variables organised? We could work with them separately, but this is very error prone. Ideally, we need a way to keep related variables together. This is the problem that **data frames** are designed to manage.
9\.2 Data frames
----------------
Data frames are one of those features that mark R out as a particularly good environment for data analysis. We can think of a data frame as a table\-like object with rows and columns. It collects together different statistical variables, storing each of them as a different column. Related observations are all found in the same row. This will make more sense in a moment. Let’s consider the columns first.
Each column is a vector of some kind. These are usually simple vectors containing numbers or character strings, though it is also possible to include more complicated vectors inside data frames. We’ll only work with data frames made up of relatively simple vectors in this book. The key constraint that a data frame applies is that each vector must have the same length. This is what gives a data frame its table\-like structure.
The simplest way to get a feel for data frames is to make one. Data frames are usually constructed by reading some external data into R, but for the purposes of learning about them it is better to build one from its component parts. We’ll make some artificial data describing a hypothetical experiment to do this. Imagine that we’ve conducted a small experiment to examine biomass and community diversity in six field plots. Three plots were subjected to fertiliser enrichment. The other three plots act as experimental controls. We could store the data describing this experiment in three vectors:
* `trt` (short for “treatment”) shows which experimental manipulation was used.
* `bms` (short for “biomass”) shows the total biomass measured at the end of the experiment.
* `div` (short for “diversity”) shows the number of species present at the end of the experiment.
Here’s some R code to generate these three vectors (it doesn’t matter what the actual values are, they’re made up):
```
trt <- rep(c("Control","Fertilser"), each = 3)
bms <- c(284, 328, 291, 956, 954, 685)
div <- c(8, 12, 11, 8, 4, 5)
```
```
trt
```
```
## [1] "Control" "Control" "Control" "Fertilser" "Fertilser" "Fertilser"
```
```
bms
```
```
## [1] 284 328 291 956 954 685
```
```
div
```
```
## [1] 8 12 11 8 4 5
```
Notice that the information about different observations is linked by position in these vectors. For example, the third control plot had a biomass of ‘291’ and a species diversity of ‘11’.
We can use the `data.frame` function to construct a data frame from one or more vectors. To build a data frame from the three vectors we created and print these to the Console, we use:
```
experim.data <- data.frame(trt, bms, div)
experim.data
```
```
## trt bms div
## 1 Control 284 8
## 2 Control 328 12
## 3 Control 291 11
## 4 Fertilser 956 8
## 5 Fertilser 954 4
## 6 Fertilser 685 5
```
Notice what happens when we print the data frame: it is displayed as though it has rows and columns. That’s what we meant when we said a data frame is a table\-like structure. The `data.frame` function takes a variable number of arguments. We used the `trt`, `bms` and `div` vectors as arguments, resulting in a data frame with three columns. Each of these vectors has 6 elements, so the resulting data frame has 6 rows. The names of the vectors were used to name its columns. The rows do not have names, but they are numbered to reflect their position.
The words `trt`, `bms` and `div` are not very informative. If we prefer to work with more informative column names—this is always a good idea—then we have to name the `data.frame` arguments:
```
experim.data <- data.frame(Treatment = trt, Biomass = bms, Diversity = div)
experim.data
```
```
## Treatment Biomass Diversity
## 1 Control 284 8
## 2 Control 328 12
## 3 Control 291 11
## 4 Fertilser 956 8
## 5 Fertilser 954 4
## 6 Fertilser 685 5
```
The new data frame contains the same data as the previous one but now the column names correspond to the names we chose. These names are better because they describe each variable using a human\-readable word.
#### Don’t bother with row names
We can also name the rows of a data frame using the `row.names` argument of the `data.frame` function. We won’t bother to show an example of this though. Why? We can’t easily work with the information in row names so there’s not much point adding it. If we need to include row\-specific data in a data frame it’s best to include an additional variable, i.e. an extra column.
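For example, rather than naming the rows, we could store a plot identifier as an ordinary column. This is only a sketch: the name `experim.data.2` and the `PlotID` values are made up purely for illustration:
```
# a sketch: store made-up plot identifiers as an ordinary column
experim.data.2 <- data.frame(
  PlotID = c("P1", "P2", "P3", "P4", "P5", "P6"),
  Treatment = trt, Biomass = bms, Diversity = div
)
```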
9\.3 Exploring data frames
--------------------------
The first thing we should do when presented with a new data set is explore its structure to understand what we’re dealing with. This is easy when the data is stored in a data frame. If the data set is reasonably small we can just print it to the Console. This is not very practical for even moderate\-sized data sets though. The `head` and `tail` functions extract the first and last few rows of a data set, so these can be used to print part of a data set. The `n` argument controls the number of rows printed:
```
head(experim.data, n = 3)
```
```
## Treatment Biomass Diversity
## 1 Control 284 8
## 2 Control 328 12
## 3 Control 291 11
```
```
tail(experim.data, n = 3)
```
```
## Treatment Biomass Diversity
## 4 Fertilser 956 8
## 5 Fertilser 954 4
## 6 Fertilser 685 5
```
The `View` function can be used to visualise the whole data set in a spreadsheet\-like view:
```
View(experim.data)
```
This shows the rows and columns of the data frame argument in a table\- or spreadsheet\-like format. When we run this in RStudio a new tab opens up with the `experim.data` data inside it.
#### `View` only displays the data
The `View` function is designed to allow us to display the data in a data frame as a table of rows and columns. We can’t change the data in any way with the `View` function. We can reorder the way the data are presented, but keep in mind that this won’t alter the underlying data.
There are quite a few different R functions that will extract information about a data frame. The `nrow` and `ncol` functions return the number of rows and columns, respectively:
```
nrow(experim.data)
```
```
## [1] 6
```
```
ncol(experim.data)
```
```
## [1] 3
```
The `names` function is used to extract the column names from a data frame:
```
names(experim.data)
```
```
## [1] "Treatment" "Biomass" "Diversity"
```
The `experim.data` data frame has three columns, so `names` returns a character vector of length three, where each element corresponds to a column name. There is also a `rownames` function if we need that too. The `nrow`, `ncol`, `names` and `rownames` functions each return a vector, so we can assign the result if we need to use it later. For example, if we want to extract and store the column names for any reason we could use `varnames <- names(experim.data)`.
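To illustrate the idea of reusing these results, here is a minimal sketch that stores a couple of them under arbitrary names:
```
# keep the number of rows and the column names for later use
n_obs <- nrow(experim.data)
varnames <- names(experim.data)
```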
9\.4 Extracting data from data frames
-------------------------------------
Data frames would not be much use if we could not extract and modify the data in them. In this section we will briefly review how to carry out these kinds of operations using basic R functions.
### 9\.4\.1 Extracting and adding a single variable
A data frame is just a collection of variables stored in columns, where each column is a vector of some kind. There are several ways to extract these variables from a data frame. If we just want to extract a single variable we have two options.
The first way of extracting a variable from a data frame uses a double square brackets construct, `[[`. For example, we extract the `Biomass` variable from our example data frame with the double square brackets like this:
```
experim.data[["Biomass"]]
```
```
## [1] 284 328 291 956 954 685
```
This prints whatever is in the `Biomass` column to the Console. What kind of object is this? It’s a numeric vector:
```
is.numeric(experim.data[["Biomass"]])
```
```
## [1] TRUE
```
A data frame really is nothing more than a collection of vectors. Notice that all we did was print the resulting vector to the Console. If we want to actually do something with this numeric vector we need to assign the result:
```
bmass <- experim.data[["Biomass"]]
bmass^2
```
```
## [1] 80656 107584 84681 913936 910116 469225
```
Here, we extracted the `Biomass` variable, assigned it to `bmass`, and then squared this. The value of the `Biomass` variable inside the `experim.data` data frame is unchanged.
Notice that we used `"Biomass"` instead of `Biomass` inside the double square brackets, i.e. we quoted the name of the variable. This is because we want R to treat the word “Biomass” as a literal value. This little detail is important! If we don’t quote the name then R will assume that `Biomass` is the name of an object and go in search of it in the global environment. Since we haven’t created something called `Biomass`, leaving out the quotes generates an error:
```
experim.data[[Biomass]]
```
```
## Error in (function(x, i, exact) if (is.matrix(i)) as.matrix(x)[[i]] else .subset2(x, : object 'Biomass' not found
```
The error message is telling us that R can’t find a variable called `Biomass` in the global environment. On the other hand, this example does work:
```
vname <- "Biomass"
experim.data[[vname]]
```
```
## [1] 284 328 291 956 954 685
```
This works because we first defined `vname` to be a character vector of length one, whose value is the name of a variable in `experim.data`. When R encounters `vname` inside the `[[` construct it goes and finds the value associated with it and uses this value to determine the variable to extract.
The second method for extracting a variable from a data frame uses the `$` operator. For example, to extract the `Biomass` column from the `experim.data` data frame, we use:
```
experim.data$Biomass
```
```
## [1] 284 328 291 956 954 685
```
We use the `$` operator by placing the name of the data frame we want to work with on the left hand side and the name of the column (i.e. the variable) we want to extract on the right hand side. Notice that this time we didn’t have to put quotes around the variable name when using the `$` operator. We can do this if we want to—i.e. `experim.data$"Biomass"` also works—but `$` doesn’t require it.
Why is there more than one way to extract variables from a data frame? There’s no simple way to answer this question without getting into the details of how R represents data frames. The simple answer is that `$` and `[[` are not actually equivalent, even though they appear to do much the same thing. We’ve looked at the two extraction methods because they are both widely used. However, the `$` method is a bit easier to read and people tend to prefer it for interactive data analysis tasks (the `[[` construct tends to be used when we need a bit more flexibility).
### 9\.4\.2 Adding a variable to a data frame
How do we add a new variable to an existing data frame? It turns out that the `$` operator can also be used for this job by combining it with the assignment operator. Using it this way is fairly intuitive. For example, if we want to add a new (made up) variable called `Elevation` to `experim.data`, we do it like this:
```
experim.data$Elevation <- c(364, 294, 321, 358, 298, 312)
```
This assigns some fake elevation data to a new variable in `experim.data` using the `$` operator. The new variable is called `Elevation` because that was the name we used on the right hand side of `$`. This changes `experim.data`, such that it now contains four columns (variables):
```
head(experim.data, n = 3)
```
```
## Treatment Biomass Diversity Elevation
## 1 Control 284 8 364
## 2 Control 328 12 294
## 3 Control 291 11 321
```
The `[[` operator can also be used with `<-` to add variables to a data frame. We won’t bother to show an example, as it works in exactly the same way as `$` and we won’t be using the `[[` method in this book.
### 9\.4\.3 Subsetting data frames
What do we do if, instead of just extracting a single variable from a data frame, we need to select a subset of rows and/or columns? We use the single square brackets construct, `[`, to do this. There are two different ways we can use single square brackets, both of which involve the use of indexing vector(s) inside the `[` construct.
The first use of `[` allows us to subset one or more columns while keeping all the rows. This works exactly as the `[` does for vectors. Just think of columns as the elements of the data frame. For example, if we want to subset `experim.data` such that we are only left with the first and second columns (`Treatment` and `Biomass`), we can use a numeric indexing vector:
```
experim.data[c(1:2)]
```
```
## Treatment Biomass
## 1 Control 284
## 2 Control 328
## 3 Control 291
## 4 Fertilser 956
## 5 Fertilser 954
## 6 Fertilser 685
```
However, this is not a very good way to subset columns because we have to know the position of each variable. If for some reason we change the order of the columns, we have to update our R code accordingly. A better approach uses a character vector of column names inside the `[`:
```
experim.data[c("Treatment", "Biomass")]
```
```
## Treatment Biomass
## 1 Control 284
## 2 Control 328
## 3 Control 291
## 4 Fertilser 956
## 5 Fertilser 954
## 6 Fertilser 685
```
The second use of `[ ]` is designed to allow us to subset rows and columns at the same time. We have to specify both the rows and the columns we require, using a comma (“`,`”) to separate a row and column index vector. This is easiest to understand with an example:
```
# row index
rindex <- 1:3
# column index
cindex <- c("Treatment", "Biomass")
# subset the data frame
experim.data[rindex, cindex]
```
```
## Treatment Biomass
## 1 Control 284
## 2 Control 328
## 3 Control 291
```
This example extracts a subset of `experim.data` corresponding to rows 1 through 3, and columns “Treatment” and “Biomass”. The `rindex` is a numeric vector of row positions, and `cindex` is a character vector of column names. This shows that rows and columns can be selected by referencing their position or their names. The rows are not named in `experim.data`, so we specified the positions.
Storing the index vectors first is quite a long\-winded way of subsetting a data frame. However, there is nothing to stop us doing everything in one step:
```
experim.data[1:3, c("Treatment", "Biomass")]
```
```
## Treatment Biomass
## 1 Control 284
## 2 Control 328
## 3 Control 291
```
If we need to subset just rows or columns we just leave out the appropriate index vector:
```
experim.data[1:3, ]
```
```
## Treatment Biomass Diversity Elevation
## 1 Control 284 8 364
## 2 Control 328 12 294
## 3 Control 291 11 321
```
The absence of an index vector before/after the comma indicates that we want to keep every row/column. Here we kept all the columns but only the first three rows.
#### Be careful with `[`
Subsetting with the `[rindex, cindex]` construct produces another data frame. This should be apparent from the way the last example was printed. This is **usually** how this construct works. We say usually, because subsetting just one column produces a vector. This is very unfortunate, as it produces unpredictable behaviour if we’re not paying attention.
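A short sketch of this behaviour, along with one way to avoid it: the `drop` argument of `[` tells R to keep the result as a data frame even when only one column is selected:
```
experim.data[1:3, "Biomass"] # one column selected -> the result is a vector
experim.data[1:3, "Biomass", drop = FALSE] # drop = FALSE -> still a data frame
```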
The `[` construct works with three types of index vectors. We’ve just seen that the index vector can be a numeric or character type. The third approach uses a logical index vector. For example, we can subset the `experim.data` data frame, keeping just the rows where the `Treatment` variable is equal to “Control”, using:
```
# make a logical index vector
rindex <- experim.data$Treatment == "Control"
rindex
```
```
## [1] TRUE TRUE TRUE FALSE FALSE FALSE
```
```
# use the logical index to keep only the 'Control' rows
experim.data[rindex, ]
```
```
## Treatment Biomass Diversity Elevation
## 1 Control 284 8 364
## 2 Control 328 12 294
## 3 Control 291 11 321
```
Notice that we construct the logical `rindex` vector by extracting the `Treatment` variable with the `$` operator and using the `==` operator to test for equality with “Control”. Don’t worry too much if that seems confusing. We combined many different ideas in that example. We’re going to learn a much more transparent way to achieve the same result in later chapters.
9\.5 Final words
----------------
We’ve seen how to extract/add variables and subset data frames using the `$`, `[[` and `[` constructs. The last example also showed that we can use a combination of relational operators (e.g. `==`, `!=` or `>=`) and the square brackets construct to subset a data frame according to one or more criteria. There are also a number of base R functions that allow us to manipulate data frames in a slightly more intuitive way. For example, there is a function called `transform` that adds new variables and changes existing ones, and a function called `subset` to select variables and subset rows in a data frame according to the values of its variables.
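As a rough sketch of how these two functions are used (the new column name below is chosen purely for illustration):
```
# add a derived variable with 'transform'
transform(experim.data, BiomassKg = Biomass / 1000)
# keep only the control rows and two of the columns with 'subset'
subset(experim.data, Treatment == "Control", select = c(Treatment, Biomass))
```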
We’ve shown these approaches because they’re still used by many people. However, we will rely on the **dplyr** package to handle operations like subsetting and transforming data frame variables in this book. The **dplyr** package provides a much cleaner, less error\-prone framework for manipulating data frames, and can be used to work with similar kinds of objects that store data in a consistent way. Before we can do that though, we need to learn a little bit about how to organise and import data into R.
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/working-directories-and-data-files.html |
Chapter 10 Working directories and data files
=============================================
10\.1 Introduction
------------------
R is able to access data from a huge range of different data storage formats and repositories. With the right tools, we can use R to pull in data from various data bases, proprietary storage formats (e.g. Excel), online web sites, or plain old text files. We aren’t going to evaluate the many packages and functions used to pull data into R—a whole book could be written about this topic alone. Instead, we’re going to examine the simplest method for data import: reading in data from a text file. We’ll also briefly look at how to access data stored in packages.
10\.2 Data files: the CSV format
--------------------------------
Just about every piece of software that stores data in some kind of table\-like structure can export those data to a CSV file. The CSV acronym stands for “Comma Separated Values”. CSV files are just ordinary text files. The only thing that makes them a CSV file is the fact that they store data in a particular format. This format is very simple: each row of a CSV file corresponds to a row in the data, and each value in a row (corresponding to a different column) is separated by a comma. Here is what the artificial data from the last chapter looks like in CSV format:
```
## "trt","bms","div"
## "Control",284,8
## "Control",328,12
## "Control",291,11
## "Fertilser",956,8
## "Fertilser",954,4
## "Fertilser",685,5
```
The first line contains the variable names, with each name separated by a comma. It’s usually a good idea to include the variable names in a CSV file, though this is optional. After the variable names, each new line is a row of data. Values which are not numbers have double quotation marks around them; numeric values lack these quotes. Notice that this is the same convention that applies to the elements of atomic vectors. Quoting non\-numeric values is actually optional, but reading CSV files into R works best when non\-numeric values are in quotes because this reduces ambiguity.
### 10\.2\.1 Exporting CSV files from Excel
Those who work with small or moderate data sets (i.e. 100s\-1000s of lines) often use Excel to manage and store their data. There are good reasons why this isn’t necessarily a sensible thing to do—for example, Excel has a nasty habit of “helpfully” formatting data values. Nonetheless, Excel is a ubiquitous and convenient tool for data management, so it’s important to know how to pull data into R from Excel. It is possible to read data directly from Excel into R, but this way of doing things can be error prone for an inexperienced user and requires us to use an external package (the **readxl** package is currently the best option). Instead, the simplest way to transfer data from Excel to R is to first export the relevant worksheet to a CSV file, and then import this new file using R’s standard file import tools.
We’ll discuss the import tools in a moment. The initial export step is just a matter of selecting the Excel worksheet that contains the relevant data, navigating to `Save As...`, choosing the `Comma Separated Values (.csv)`, and following the familiar file save routine. That’s it. After following this step our data are free of Excel and ready to be read into R.
#### Always check your Excel worksheet
Importing data from Excel can turn into a frustrating process if we’re not careful. Most problems have their origin in the Excel worksheet used to store the data, rather than R. Problems usually arise because we haven’t been paying close attention to a worksheet. For example, imagine we’re working with a very simple data set, which contains three columns of data and a few hundred rows. If at some point we accidentally (or even intentionally) add a value to a cell in the fourth column, Excel will assume the fourth column is “real” data. When we then export the worksheet to CSV, instead of the expected three columns of data, we end up with four columns, where most of the fourth column is just missing information. This kind of mistake is surprisingly common and is a frequent source of confusion. The take\-home message is that when Excel is used to hold raw data, it should be used for just that—the worksheet containing our raw data should hold only that data, and nothing else.
10\.3 The working directory
---------------------------
Before we start worrying about data import, we first need to learn a bit about how R searches for the files that reside on our computer’s hard drive. The key concept is that of the “working directory”. A “directory” is just another word for “folder”. The working directory is simply a default location (i.e. a folder) R uses when searching for files. The working directory must always be set, and there are standard rules that govern how this is chosen when a new R session starts. For example, if we start R by double clicking on an R script file (i.e. a file with a “.R” extension), R will typically set the working directory to be the location of that script. We say typically, because this behaviour can be overridden.
There’s no need to learn the rules for how the default working directory is chosen, because we can always use R/RStudio to find out which folder is currently set as the working directory. Here are a couple of options:
1. When RStudio first starts up, the **Files** tab in the bottom right window shows us the contents (i.e. the files and folders) of the working directory. Be careful though: if we use the file viewer to navigate to a new location, this does not change the working directory.
2. The `getwd` function will print the location of the working directory to the Console. It does this by displaying a **file path** (a short sketch of what this looks like is given below). If you’re comfortable with file paths then the output of `getwd` will make perfect sense. If not, it doesn’t matter too much. Use the RStudio **Files** tab instead.
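As a quick illustration, calling `getwd` at the Console prints the current working directory as a file path. The path shown here is purely hypothetical; yours will reflect wherever your R session is currently working:
```
getwd()
```
```
## [1] "/Users/yourname/r_data"
```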
Why does any of this matter? We need to know where R will look for files if we plan to read in data. Fortunately, it’s easy to change the working directory to a new location if we need to do this:
1. Using RStudio, we can set the working directory via the `Session > Set Working Directory... > Choose Directory...` menu. Once this menu item is selected we’re presented with the standard file/folder dialogue box to choose the working directory.
2. Alternatively, we can use a function called `setwd` at the Console, though once again, we have to be comfortable with file paths to use this. Using RStudio is easier, so we won’t demonstrate how to use `setwd`.
10\.4 Importing data with `read.csv`
------------------------------------
Now that we know roughly how a CSV file is formatted, and where R will look for such files, we need to understand how to read them into R. The standard R function for reading in a CSV file is called `read.csv`. There are a few other options (e.g. `read_csv` from the **readr** package), but we’ll use `read.csv` because it’s part of the base R distribution, which means we can use it without relying on an external package.
The `read.csv` function does one thing: given the location of a CSV file, it will read the data into R and return it to us as a data frame. There are a couple of different strategies for using `read.csv`. One is considered good practice and is fairly robust. The second is widely used, but creates more problems than it solves. We’ll discuss both, and then explain why the first strategy is generally better than the second.
#### 10\.4\.0\.1 Strategy 1—set the working directory first
Remember, the working directory is the default location used by R to search for files. This means that if we set the working directory to be wherever our data file lives, we can use the `read.csv` function without having to tell R where to look for it. Let’s assume our data is in a CSV file called “my\-great\-data.csv”. We should be able to see “my\-great\-data.csv” in the **Files** tab in RStudio if the working directory is set to its location. If we can’t see it there, the working directory still needs to be set (e.g. via `Session > Set Working Directory... > Choose Directory...`).
Once we’ve set the working directory to this location, reading the “my\-great\-data.csv” file into R is simple:
```
my_data <- read.csv(file = "my-great-data.csv", stringsAsFactors = FALSE)
```
R knows where to find the file because we first set the working directory to be the location of the file. If we forget to do this R will complain and throw an error. We have to assign the output a name so that we can actually use the new data frame (`my_data` in this example), otherwise all that will happen is the resulting data frame is read in and printed to the Console.
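Once the file has been read in, it’s worth checking that the import worked as expected. A minimal sanity check on the resulting data frame might look like this (nothing here is specific to the hypothetical “my\-great\-data.csv” file):

```
# quick checks on the data frame created by read.csv
head(my_data) # print the first few rows
str(my_data)  # confirm the number of rows and the type of each column
```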
#### 10\.4\.0\.2 Strategy 2—use the full path to the CSV file
If we are comfortable with “file paths” then `read.csv` can be used without bothering to set the working directory. For example, if we have the CSV file called “my\-great\-data.csv” in a folder called “r\_data” inside our home directory, then on a Unix machine we might read it into a data frame using something like:
```
my_data <- read.csv(file = "~/r_data/my-great-data.csv", stringsAsFactors = FALSE)
```
When used like this, we have to give `read.csv` the full path to the file. This assumes of course that we understand how to construct a file path—the details vary depending on the operating system.
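If we do need to build a path in R, the base function `file.path` joins folder and file names using the correct separator for the operating system. A small sketch, reusing the hypothetical folder and file names from above:

```
# construct the path in a way that works across operating systems
csv_path <- file.path("~", "r_data", "my-great-data.csv")
csv_path # "~/r_data/my-great-data.csv" on a Unix-like machine
my_data <- read.csv(file = csv_path, stringsAsFactors = FALSE)
```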
#### 10\.4\.0\.3 Why use the first strategy?
Both methods get to the same end point: after running the code at the Console we should end up with an object called `my_data` in the global environment, which is a data frame containing the data in the “my\-great\-data.csv” file. So why should we prefer the first strategy? There are two main reasons:
1. Many novice R users with no experience of programming struggle with file paths, leading to a lot of frustration and wasted time trying to specify them. The first method only requires us to set the working directory with RStudio and know the name of the file we want to read in. There’s no need to deal with file paths.
2. The second strategy creates problems when we move our data around or work on different machines, as the file paths will need to be changed in each new situation. The first strategy is robust to such changes. For example, if we move all our data to a new location, we just have to set the working directory to the new location and our R code will still work.
10\.5 Importing data with RStudio (Avoid this!)
-----------------------------------------------
It is also possible to import data from a CSV file into R using RStudio. The steps are as follows:
1. Click on the **Environment** tab in the top right pane of RStudio
2. Select `Import Dataset > From Text File...`
3. Select the CSV file to read into R and click Open
4. Enter a name (no spaces allowed) or stick with the default and click Import
We’re only pointing out this method because new users are often tempted to use it—we **do not** recommend it. Why? It creates the same kinds of problems as the second strategy discussed above. All RStudio does is generate the correct usage of a function called `read_csv` (from the **readr** package) and evaluate this at the Console. The code isn’t part of a script so we have to do this every time we want to work with the data file. It’s easy to make a mistake using this approach, e.g. by accidentally misnaming the data frame or reading in the wrong data. It may be tempting to copy the generated R code into a script. However, we still have the portability problem outlined above to deal with. Take our word for it. The RStudio\-focussed way of reading data into R just creates more problems than it solves. Don’t use it.
10\.6 Package data
------------------
Remember what Hadley Wickham said about packages? “… include reusable R functions, the documentation that describes how to use them, **and sample data**.” Many packages come with one or more sample data sets. These are very handy, as they’re used in examples and package vignettes. We can use the `data` function to get R to list the data sets hiding away in packages:
```
data(package = .packages(all.available = TRUE))
```
The mysterious `.packages(all.available = TRUE)` part of this generates a character vector with the names of all the installed packages in it. If we only use `data()` then R only lists the data sets found in a package called `datasets`, and in packages we have loaded and attached in the current R session using the `library` function.
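The `package` argument can also be pointed at a single package if the full listing is more than we need, and `data` can also load a specific data set by name. A small sketch using only packages that ship with R:

```
# list only the data sets stored in the built-in 'datasets' package
data(package = "datasets")
# load a specific data set from an attached package by name
data("iris")
```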
The `datasets` package is part of the base R distribution. It exists only to store example data sets. The package is automatically loaded when we start R, i.e. there’s no need to use `library` to access it, meaning any data stored in this package can be accessed every time we start R. We’ll use a couple of data sets in the `datasets` package later to demonstrate how to work with the **dplyr** and **ggplot2** packages.
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/dplyr-and-the-tidy-data-concept.html |
Chapter 11 **dplyr** and the tidy data concept
==============================================
11\.1 Introduction
------------------
Data wrangling refers to the process of manipulating raw data into the format that we want it in, for example for data visualisation or statistical analyses. There are a wide range of ways we may want to manipulate our data, for example by creating new variables, subsetting the data, or calculating summaries. Data wrangling is often a time consuming process. It is also not the most interesting part of any analysis \- we are interested in answering biological questions, not in formatting data. However, it is a necessary step to go through to be able to conduct the analyses that we’re really interested in. Learning how to manipulate data efficiently can save us a lot of time and trouble and is therefore a really important skill to master.
11\.2 The value of **dplyr**
----------------------------
The **dplyr** package has been carefully designed to make it easier to manipulate data frames and other kinds of similar objects. A key reason for its ease\-of\-use is that **dplyr** is very consistent in the way its functions work. For example, the first argument of the main **dplyr** functions is always an object containing our data. This consistency makes it very easy to get to grips with each of the main **dplyr** functions—it’s often possible to understand how one works by seeing one or two examples of its use.
A second reason for favouring **dplyr** is that it is orientated around a few core functions, each of which is designed to do one thing well. The key **dplyr** functions are often referred to as “verbs”, reflecting the fact that they “do something” to data. For example: (1\) `select` is used to obtain a subset of variables; (2\) `mutate` is used to construct new variables; (3\) `filter` is used to obtain a subset of rows; (4\) `arrange` is used to reorder rows; and (5\) `summarise` is used to calculate information about groups. We’ll cover each of these verbs in detail in later chapters, as well as a few additional functions such as `rename` and `group_by`.
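To give a flavour of that consistency before we cover the verbs properly, here is a small sketch, assuming **dplyr** is installed and using the built\-in `iris` data frame. It shows that each verb takes the data as its first argument and returns a new data frame:

```
library("dplyr")
# every verb takes the data as its first argument and returns a modified copy
filter(iris, Species == "setosa")             # keep only the rows for one species
arrange(iris, Sepal.Length)                   # reorder the rows by sepal length
summarise(iris, mean_sl = mean(Sepal.Length)) # collapse the data to a one-row summary
```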
Apart from being easy to use, **dplyr** is also fast compared to base R functions. This won’t matter much for the small data sets we use in this book, but **dplyr** is a good option for large data sets. The **dplyr** package also allows us to work with data stored in different ways, for example, by interacting directly with a number of database systems. We won’t work with anything other than data frames (and the closely\-related “tibble”) but it is worth knowing about this facility. Learning to use **dplyr** with data frames makes it easy to work with these other kinds of data objects.
#### A **dplyr** cheat sheet
The developers of RStudio have produced a very usable [cheat sheet](http://www.rstudio.com/resources/cheatsheets/) that summarises the main data wrangling tools provided by **dplyr**. Our advice is to download this, print out a copy and refer to this often as you start working with **dplyr**.
11\.3 Tidy data
---------------
**dplyr** will work with any data frame, but it’s at its most powerful when our data are organised as [tidy data](http://vita.had.co.nz/papers/tidy-data.pdf). The word “tidy” has a very specific meaning in this context. Tidy data has a specific structure that makes it easy to manipulate, model and visualise. A tidy data set is one where each variable is in only one column and each row contains only one observation. This might seem like the “obvious” way to organise data, but many people fail to adopt this convention.
We aren’t going to explore the tidy data concept in great detail, but the basic principles are not difficult to understand. We’ll use an example to illustrate what the “one variable \= one column” and “one observation \= one row” idea means. Let’s return to the made\-up experiment investigating the response of communities to fertilizer addition. This time, imagine we had only measured biomass, but that we had measured it twice over the course of the experiment.
We’ll examine some artificial data for the experiment and look at two ways to organise it to help us understand the tidy data idea. The first way uses a separate column for each biomass measurement:
```
## Treatment BiomassT1 BiomassT2
## 1 Control 284 324
## 2 Control 328 400
## 3 Control 291 355
## 4 Fertilser 956 1197
## 5 Fertilser 954 1012
## 6 Fertilser 685 859
```
This often seems like the natural way to store such data, especially for experienced Excel users. However, this format is not **tidy**. Why? The biomass variable has been split across two columns (“BiomassT1” and “BiomassT2”), which means each row corresponds to two observations.
We won’t go into the “whys” here, but take our word for it: adopting this format makes it difficult to efficiently work with data. This is not really an R\-specific problem. This non\-tidy format is sub\-optimal in many different data analysis environments.
A tidy version of the example data set would still have three columns, but now these would be: “Treatment”, denoting the experimental treatment applied; “Time”, denoting the sampling occasion; and “Biomass”, denoting the biomass measured:
```
## Treatment Time Biomass
## 1 Control T1 284
## 2 Control T1 328
## 3 Control T1 291
## 4 Fertilser T1 956
## 5 Fertilser T1 954
## 6 Fertilser T1 685
## 7 Control T2 324
## 8 Control T2 400
## 9 Control T2 355
## 10 Fertilser T2 1197
## 11 Fertilser T2 1012
## 12 Fertilser T2 859
```
These data are tidy: each variable is in only one column, and each observation has its own unique row. These data are well\-suited to use with **dplyr**.
#### Always try to start with tidy data
The best way to make sure your data set is tidy is to store it in that format **when it’s first collected and recorded**. There are packages that can help convert non\-tidy data into the tidy data format (e.g. the **tidyr** package), but life is much simpler if we just make sure our data are tidy from the very beginning.
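For reference, converting the wide biomass table above into the tidy format is a one\-function job with **tidyr**. This is only a hedged sketch: it assumes a recent version of **tidyr** is installed (older versions used `gather` instead of `pivot_longer`) and it rebuilds the example data by hand:

```
library("tidyr")
# recreate the wide version of the artificial experiment data by hand
experiment_wide <- data.frame(
  Treatment = c("Control", "Control", "Control", "Fertilser", "Fertilser", "Fertilser"),
  BiomassT1 = c(284, 328, 291, 956, 954, 685),
  BiomassT2 = c(324, 400, 355, 1197, 1012, 859)
)
# stack the two biomass columns into a pair of Time / Biomass variables
experiment_tidy <- pivot_longer(experiment_wide,
  cols = c(BiomassT1, BiomassT2),  # the columns to stack
  names_to = "Time",               # old column names supply the Time variable
  names_prefix = "Biomass",        # strip the "Biomass" prefix, leaving "T1" and "T2"
  values_to = "Biomass")           # the measurements become the Biomass variable
experiment_tidy
```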
11\.4 A quick look at **dplyr**
-------------------------------
We’ll finish up this chapter by taking a quick look at a few features of the **dplyr** package, before really drilling down into how it works. The package is not part of the base R installation, so we have to install it first via `install.packages("dplyr")`. Remember, we only have to install **dplyr** once, so there’s no need to leave the `install.packages` line in a script that uses the package. We do have to add `library` to the top of any scripts using the package to load and attach it:
```
library("dplyr")
```
We need some data to work with. We’ll use two data sets to illustrate the key ideas in the next few chapters: the `iris` data set in the **datasets** package and the `storms` data set in the **nasaweather** package.
The **datasets** package ships with R and is loaded and attached at start up, so there’s no need to do anything to make `iris` available. The **nasaweather** package doesn’t ship with R so it needs to be installed via `install.packages("nasaweather")`. Finally, we have to add `library` to the top of our script to load and attach the package:
```
library("nasaweather")
```
The **nasaweather** package is a bare bones data package. It doesn’t contain any new R functions, just data. We’ll be using the `storms` data set from **nasaweather**: this contains information about tropical storms in North America (from 1995\-2000\). We’re just using it as a convenient example to illustrate the workings of the **dplyr**, and later, the **ggplot2** packages.
### 11\.4\.1 Tibble (`tbl`) objects
The primary purpose of the **dplyr** package is to make it easier to manipulate data interactively. In order to facilitate this kind of work **dplyr** implements a special kind of data object known as a `tbl` (pronounced “tibble”). We can think of a tibble as a special data frame with a few extra whistles and bells.
We can convert an ordinary data frame to a tibble using the `tbl_df` function. It’s a good idea (though not necessary) to convert ordinary data frames to tibbles. Why? When a data frame is printed to the Console R will try to print every column and row until it reaches a (very large) maximum permitted amount of output. The result is a mess of text that’s virtually impossible to make sense of. In contrast, when a tibble is printed to the Console, it does so in a compact way. To see this, we can convert the `iris` data set to a tibble using `tbl_df` and then print the resulting object to the Console:
```
# make a "tibble" copy of iris
iris_tbl <- tbl_df(iris)
# print it
iris_tbl
```
```
## # A tibble: 150 x 5
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## <dbl> <dbl> <dbl> <dbl> <fct>
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
## 7 4.6 3.4 1.4 0.3 setosa
## 8 5 3.4 1.5 0.2 setosa
## 9 4.4 2.9 1.4 0.2 setosa
## 10 4.9 3.1 1.5 0.1 setosa
## # … with 140 more rows
```
Notice that only the first 10 rows are printed. This is much nicer than trying to wade through every row of a data frame.
### 11\.4\.2 The `glimpse` function
Sometimes we just need a quick, compact summary of a data frame or tibble. This is the job of the `glimpse` function from **dplyr**. The glimpse function is very similar to `str`:
```
glimpse(iris_tbl)
```
```
## Observations: 150
## Variables: 5
## $ Sepal.Length <dbl> 5.1, 4.9, 4.7, 4.6, 5.0, 5.4, 4.6, 5.0, 4.4, 4.9, 5…
## $ Sepal.Width <dbl> 3.5, 3.0, 3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3…
## $ Petal.Length <dbl> 1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4, 1.5, 1.4, 1.5, 1…
## $ Petal.Width <dbl> 0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1, 0…
## $ Species <fct> setosa, setosa, setosa, setosa, setosa, setosa, set…
```
The function takes one argument: the name of a data frame or tibble. It then tells us how many rows it has, how many variables there are, what these variables are called, and what kind of data are associated with each variable. This function is useful when we’re working with a data set containing many variables.
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/working-with-variables.html |
Chapter 12 Working with variables
=================================
12\.1 Introduction
------------------
This chapter will explore the **dplyr** `select` and `mutate` verbs, as well as the related `rename` and `transmute` verbs. These four verbs are considered together because they all operate on the variables (i.e. the columns) of a data frame or tibble:
* The `select` function selects a subset of variables to retain and (optionally) renames them in the process.
* The `mutate` function creates new variables from preexisting ones and retains the original variables.
* The `rename` function renames one or more variables while keeping the remaining variable names unchanged.
* The `transmute` function creates new variables from preexisting ones and drops the original variables (a short sketch of this verb follows this list).
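As a quick taste of how `transmute` differs from `mutate`, here is a minimal hedged sketch using the built\-in `iris` data (it assumes **dplyr** is installed and loaded):

```
library("dplyr")
# transmute works like mutate, but keeps only the variables named in the call
transmute(iris,
  Species,                                  # an existing variable we choose to keep
  Sepal.Area = Sepal.Width * Sepal.Length)  # a new, derived variable
```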
### 12\.1\.1 Getting ready
A script that uses **dplyr** will typically start by loading and attaching the package:
```
library("dplyr")
```
Obviously we need to have first installed the `dplyr` package for this to work.
We’ll use the `iris` data set in the **datasets** package to illustrate the ideas in this chapter. The **datasets** package ships with R and is loaded and attached at start up, so there’s no need to do anything to make `iris` available. The `iris` data set is an ordinary data frame. Before we start working, it’s handy to convert this to a tibble so that it prints to the Console in a compact way:
```
iris_tbl <- tbl_df(iris)
```
We gave the new tibble version a new name. We didn’t have to do this, but it will remind us that we’re working with tibbles.
12\.2 Subset variables with `select`
------------------------------------
We use `select` to **select variables** from a data frame or tibble. This is typically used when we have a data set with many variables but only need to work with a subset of these. Basic usage of `select` looks like this:
```
select(data_set, vname1, vname2, ...)
```
Take note: this is not an example we can run. This is a “pseudo code” example, designed to show, in abstract terms, how we use `select`:
* The first argument, `data_set` (“data object”), must be the name of the object containing our data.
* We then include a series of one or more additional arguments, where each one is the name of a variable in `data_set`. We’ve expressed this as `vname1, vname2, ...`, where `vname1` and `vname2` are names of the first two variables, and the `...` is acting as placeholder for the remaining variables (there could be any number of these).
It’s easiest to understand how a function like `select` works by seeing it in action. We select the `Species`, `Petal.Length` and `Petal.Width` variables from `iris_tbl` like this:
```
select(iris_tbl, Species, Petal.Length, Petal.Width)
```
```
## # A tibble: 150 x 3
## Species Petal.Length Petal.Width
## <fct> <dbl> <dbl>
## 1 setosa 1.4 0.2
## 2 setosa 1.4 0.2
## 3 setosa 1.3 0.2
## 4 setosa 1.5 0.2
## 5 setosa 1.4 0.2
## 6 setosa 1.7 0.4
## 7 setosa 1.4 0.3
## 8 setosa 1.5 0.2
## 9 setosa 1.4 0.2
## 10 setosa 1.5 0.1
## # … with 140 more rows
```
Hopefully nothing about this example is surprising or confusing. There are a few things to notice about how `select` works though:
* The `select` function is one of those non\-standard functions we briefly mentioned in the [Using functions](using-functions.html#using-functions) chapter. This means the variable names should not be surrounded by quotes unless they have spaces in them (which is best avoided).
* The `select` function is just like other R functions: it does not have “side effects”. What this means is that it does not change the original `iris_tbl`. Here we only printed the result produced by `select` to the Console, so we have no way to access the new data set afterwards. If we need to use the result we have to assign it a name using `<-` (see the short sketch after this list).
* The order of variables (i.e. the column order) in the resulting object is the same as the order in which they were supplied as arguments. This means we can reorder variables at the same time as selecting them if we need to.
* The `select` function will return the same kind of data object it is working on. It returns a data frame if our data was originally in a data frame and a tibble if it was a tibble. In this example, R prints a tibble because we had converted `iris_tbl` from a data frame to a tibble.
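For instance, here is a small sketch reusing `iris_tbl` from above. We keep hold of the result and reorder the columns at the same time by assigning the output a name:

```
# assign the result a name so it can be used later; the column order in
# 'petal_data' follows the order of the arguments, not the original order
petal_data <- select(iris_tbl, Species, Petal.Width, Petal.Length)
petal_data
```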
It’s sometimes more convenient to use `select` to subset variables by specifying those we do **not** need, rather than specifying the ones to keep. We use the `-` operator to indicate that variables should be dropped. For example, to drop the `Petal.Width` and `Petal.Length` columns, we use:
```
select(iris_tbl, -Petal.Width, -Petal.Length)
```
```
## # A tibble: 150 x 3
## Sepal.Length Sepal.Width Species
## <dbl> <dbl> <fct>
## 1 5.1 3.5 setosa
## 2 4.9 3 setosa
## 3 4.7 3.2 setosa
## 4 4.6 3.1 setosa
## 5 5 3.6 setosa
## 6 5.4 3.9 setosa
## 7 4.6 3.4 setosa
## 8 5 3.4 setosa
## 9 4.4 2.9 setosa
## 10 4.9 3.1 setosa
## # … with 140 more rows
```
This returns a tibble with just the remaining variables: `Sepal.Length`, `Sepal.Width` and `Species`.
The `select` function can also be used to grab (or drop) a set of variables that occur in a sequence next to one another. We specify a series of adjacent variables using the `:` operator. This must be used with two variable names, one on the left hand side and one on the right. When we use `:` like this, `select` will subset both those variables along with any others that fall in between them. For example, if we need the two `Petal` variables and `Species`, we use:
```
select(iris_tbl, Petal.Length:Species)
```
```
## # A tibble: 150 x 3
## Petal.Length Petal.Width Species
## <dbl> <dbl> <fct>
## 1 1.4 0.2 setosa
## 2 1.4 0.2 setosa
## 3 1.3 0.2 setosa
## 4 1.5 0.2 setosa
## 5 1.4 0.2 setosa
## 6 1.7 0.4 setosa
## 7 1.4 0.3 setosa
## 8 1.5 0.2 setosa
## 9 1.4 0.2 setosa
## 10 1.5 0.1 setosa
## # … with 140 more rows
```
The `:` operator can be combined with `-` if we need to drop a series of variables according to their position in a data frame or tibble:
```
select(iris_tbl, -(Petal.Length:Species))
```
```
## # A tibble: 150 x 2
## Sepal.Length Sepal.Width
## <dbl> <dbl>
## 1 5.1 3.5
## 2 4.9 3
## 3 4.7 3.2
## 4 4.6 3.1
## 5 5 3.6
## 6 5.4 3.9
## 7 4.6 3.4
## 8 5 3.4
## 9 4.4 2.9
## 10 4.9 3.1
## # … with 140 more rows
```
The extra `( )` around `Petal.Length:Species` are important here; `select` will throw an error if we don’t include them.
### 12\.2\.1 Renaming variables with `select` and `rename`
In addition to selecting a subset of variables, the `select` function can also rename variables at the same time. To do this, we have to name the arguments using `=`, placing the new name on the left hand side. For example, to select the`Species`, `Petal.Length` and `Petal.Width` variables from `iris_tbl`, but also rename `Petal.Length` and `Petal.Width` to `Petal_Length` and `Petal_Width`, we use:
```
select(iris_tbl, Species, Petal_Length = Petal.Length, Petal_Width = Petal.Width)
```
```
## # A tibble: 150 x 3
## Species Petal_Length Petal_Width
## <fct> <dbl> <dbl>
## 1 setosa 1.4 0.2
## 2 setosa 1.4 0.2
## 3 setosa 1.3 0.2
## 4 setosa 1.5 0.2
## 5 setosa 1.4 0.2
## 6 setosa 1.7 0.4
## 7 setosa 1.4 0.3
## 8 setosa 1.5 0.2
## 9 setosa 1.4 0.2
## 10 setosa 1.5 0.1
## # … with 140 more rows
```
Renaming variables is a common task when working with data frames and tibbles. What should we do if the *only* thing we would like to achieve is to rename variables, rather than rename and select variables? The `dplyr` package provides an additional function called `rename` for this purpose. This function renames certain variables while retaining all others. It works in a similar way to `select`. For example, to rename `Petal.Length` and `Petal.Width` to `Petal_Length` and `Petal_Width`, we use:
```
rename(iris_tbl, Petal_Length = Petal.Length, Petal_Width = Petal.Width)
```
```
## # A tibble: 150 x 5
## Sepal.Length Sepal.Width Petal_Length Petal_Width Species
## <dbl> <dbl> <dbl> <dbl> <fct>
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
## 7 4.6 3.4 1.4 0.3 setosa
## 8 5 3.4 1.5 0.2 setosa
## 9 4.4 2.9 1.4 0.2 setosa
## 10 4.9 3.1 1.5 0.1 setosa
## # … with 140 more rows
```
Notice that the rename function also preserves the order of the variables found in the original data.
12\.3 Creating variables with `mutate`
--------------------------------------
We use `mutate` to **add new variables** to a data frame or tibble. This is useful if we need to construct one or more derived variables to support an analysis. Basic usage of `mutate` looks like this:
```
mutate(data_set, <expression1>, <expression2>, ...)
```
Again, this is not an example we can run; it’s pseudo code that highlights in abstract terms how to use `mutate`. As always with `dplyr`, the first argument, `data_set`, should be the name of the object containing our data. We then include a series of one or more additional arguments, where each of these is a valid R expression involving one or more variables in `data_set`. We’ve expressed these as `<expression1>, <expression2>`, where `<expression1>` and `<expression2>` represent the first two expressions, and the `...` is acting as placeholder for the remaining expressions. Remember, this is not valid R code. It is just intended to demonstrate the general usage of `mutate`.
To see `mutate` in action, let’s construct a new version of `iris_tbl` that contains a variable summarising the approximate area of sepals:
```
mutate(iris_tbl, Sepal.Width * Sepal.Length)
```
```
## # A tibble: 150 x 6
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## <dbl> <dbl> <dbl> <dbl> <fct>
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
## 7 4.6 3.4 1.4 0.3 setosa
## 8 5 3.4 1.5 0.2 setosa
## 9 4.4 2.9 1.4 0.2 setosa
## 10 4.9 3.1 1.5 0.1 setosa
## # … with 140 more rows, and 1 more variable: `Sepal.Width *
## # Sepal.Length` <dbl>
```
This creates a copy of `iris_tbl` with a new column called `Sepal.Width * Sepal.Length` (mentioned at the bottom of the printed output). Most of the rules that apply to `select` also apply to `mutate`:
* The expression that performs the required calculation is not surrounded by quotes. This makes sense, because an expression is meant to be evaluated so that it “does something”. It is not a value.
* Once again, we just printed the result produced by `mutate` to the Console, rather than assigning the result a name using `<-`. The `mutate` function does not have side effects, meaning it does not change the original `iris_tbl` in any way.
* The `mutate` function returns the same kind of data object as the one it is working on: a data frame if our data was originally in a data frame, a tibble if it was a tibble.
Creating a variable called something like `Sepal.Width * Sepal.Length` is not exactly ideal because it’s a difficult name to work with. The `mutate` function can name variables at the same time as they are created. We have to name the arguments using `=`, placing the name on the left hand side, to do this. Here’s how to use this construct to name the new area variable `Sepal.Area`:
```
mutate(iris_tbl, Sepal.Area = Sepal.Width * Sepal.Length)
```
```
## # A tibble: 150 x 6
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species Sepal.Area
## <dbl> <dbl> <dbl> <dbl> <fct> <dbl>
## 1 5.1 3.5 1.4 0.2 setosa 17.8
## 2 4.9 3 1.4 0.2 setosa 14.7
## 3 4.7 3.2 1.3 0.2 setosa 15.0
## 4 4.6 3.1 1.5 0.2 setosa 14.3
## 5 5 3.6 1.4 0.2 setosa 18
## 6 5.4 3.9 1.7 0.4 setosa 21.1
## 7 4.6 3.4 1.4 0.3 setosa 15.6
## 8 5 3.4 1.5 0.2 setosa 17
## 9 4.4 2.9 1.4 0.2 setosa 12.8
## 10 4.9 3.1 1.5 0.1 setosa 15.2
## # … with 140 more rows
```
We can create more than one variable by supplying `mutate` multiple (named) arguments:
```
mutate(iris_tbl,
Sepal.Area = Sepal.Width * Sepal.Length,
Petal.Area = Petal.Width * Petal.Length,
Area.Ratio = Petal.Area / Petal.Area)
```
```
## # A tibble: 150 x 8
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species Sepal.Area
## <dbl> <dbl> <dbl> <dbl> <fct> <dbl>
## 1 5.1 3.5 1.4 0.2 setosa 17.8
## 2 4.9 3 1.4 0.2 setosa 14.7
## 3 4.7 3.2 1.3 0.2 setosa 15.0
## 4 4.6 3.1 1.5 0.2 setosa 14.3
## 5 5 3.6 1.4 0.2 setosa 18
## 6 5.4 3.9 1.7 0.4 setosa 21.1
## 7 4.6 3.4 1.4 0.3 setosa 15.6
## 8 5 3.4 1.5 0.2 setosa 17
## 9 4.4 2.9 1.4 0.2 setosa 12.8
## 10 4.9 3.1 1.5 0.1 setosa 15.2
## # … with 140 more rows, and 2 more variables: Petal.Area <dbl>,
## # Area.Ratio <dbl>
```
Notice that here we placed each argument on a new line, remembering the comma to separate arguments. There is nothing to stop us doing this because R ignores white space. It’s useful, though, because breaking a long function call across several lines makes it easier to read.
This last example also reveals a nice feature of `mutate`: we can use newly created variables in further calculations. Here we constructed approximate sepal and petal area variables, and then reused `Petal.Area` in the expression that defines a third variable, `Area.Ratio`.
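Because `mutate` has no side effects, we need to assign the result with `<-` if we want to reuse the new variables later on. Here is a minimal sketch of that workflow; the name `iris_area` is just an illustrative choice, and this version divides `Petal.Area` by `Sepal.Area` to give a petal to sepal area ratio:
```
# keep the augmented tibble so the derived variables can be reused later
iris_area <- mutate(iris_tbl,
                    Sepal.Area = Sepal.Width * Sepal.Length,
                    Petal.Area = Petal.Width * Petal.Length,
                    Area.Ratio = Petal.Area / Sepal.Area)
# the new columns are now available for further work
select(iris_area, Species, Area.Ratio)
```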
### 12\.3\.1 Transforming and dropping variables
Occasionally we may want to construct one or more new variables and then drop all the other variables in the original dataset. The `transmute` function is designed to do this. It works exactly like `mutate`, except that it only retains the variables it creates:
```
transmute(iris_tbl,
Sepal.Area = Sepal.Width * Sepal.Length,
Petal.Area = Petal.Width * Petal.Length,
Area.Ratio = Petal.Area / Petal.Area)
```
```
## # A tibble: 150 x 3
## Sepal.Area Petal.Area Area.Ratio
## <dbl> <dbl> <dbl>
## 1 17.8 0.280 1
## 2 14.7 0.280 1
## 3 15.0 0.26 1
## 4 14.3 0.3 1
## 5 18 0.280 1
## 6 21.1 0.68 1
## 7 15.6 0.42 1
## 8 17 0.3 1
## 9 12.8 0.280 1
## 10 15.2 0.15 1
## # … with 140 more rows
```
Here we repeated the previous example, but now only the new variables were retained in the resulting tibble. If we also want to retain one or more variables without altering them we just have to pass them as unnamed arguments. For example, if we need to retain species identity in the output, we use:
```
transmute(iris_tbl,
Species,
Sepal.Area = Sepal.Width * Sepal.Length,
Petal.Area = Petal.Width * Petal.Length,
Area.Ratio = Petal.Area / Petal.Area)
```
```
## # A tibble: 150 x 4
## Species Sepal.Area Petal.Area Area.Ratio
## <fct> <dbl> <dbl> <dbl>
## 1 setosa 17.8 0.280 1
## 2 setosa 14.7 0.280 1
## 3 setosa 15.0 0.26 1
## 4 setosa 14.3 0.3 1
## 5 setosa 18 0.280 1
## 6 setosa 21.1 0.68 1
## 7 setosa 15.6 0.42 1
## 8 setosa 17 0.3 1
## 9 setosa 12.8 0.280 1
## 10 setosa 15.2 0.15 1
## # … with 140 more rows
```
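There is nothing special about `transmute`, by the way. A sketch of the same kind of result can be built with `mutate` followed by `select` (the name `iris_new` is again just an illustrative choice); using `transmute` simply saves us the extra step:
```
# mutate + select is an alternative to transmute
iris_new <- mutate(iris_tbl,
                   Sepal.Area = Sepal.Width * Sepal.Length,
                   Petal.Area = Petal.Width * Petal.Length)
select(iris_new, Species, Sepal.Area, Petal.Area)
```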
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/working-with-observations.html |
Chapter 13 Working with observations
====================================
13\.1 Introduction
------------------
This chapter will explore the `filter` and `arrange` verbs. These are discussed together because they are used to manipulate observations (i.e. rows) of a data frame or tibble:
* The `filter` function extracts a subset of observations based on supplied conditions involving the variables in our data.
* The `arrange` function reorders the rows according to the values in one or more variables.
### 13\.1\.1 Getting ready
We should start a new script by loading and attaching the **dplyr** package:
```
library("dplyr")
```
We’re going to use the `storms` data set in the **nasaweather** package this time. This means we need to load and attach the **nasaweather** package to make `storms` available:
```
library("nasaweather")
```
The `storms` data set is an ordinary data frame, so let’s convert it to a tibble so that it prints nicely:
```
storms_tbl <- tbl_df(storms)
```
13\.2 Subset observations with `filter`
---------------------------------------
We use `filter` to **subset observations** in a data frame or tibble containing our data. This is often done when we want to limit an analysis to a subset of observations. Basic usage of `filter` looks something like this:
```
filter(data_set, <expression1>, <expression2>, ...)
```
Remember, this is pseudo code (it’s not an example we can run). The first argument, `data_set`, must be the name of the object containing our data. We then include one or more additional arguments, where each of these is a valid R expression involving one or more variables in `data_set`. Each expression must return a logical vector. We’ve expressed these as `<expression1>, <expression2>, ...`, where `<expression1>` and `<expression2>` represent the first two expressions, and the `...` is acting as a placeholder for the remaining expressions.
To see how `filter` works in action, we’ll use it to subset observations in the `storms_tbl` dataset, based on two relational criteria:
```
filter(storms_tbl, pressure <= 960, wind >= 100)
```
```
## # A tibble: 199 x 11
## name year month day hour lat long pressure wind type seasday
## <chr> <int> <int> <int> <int> <dbl> <dbl> <int> <int> <chr> <int>
## 1 Felix 1995 8 12 0 22.1 -57.8 955 100 Hurric… 73
## 2 Felix 1995 8 12 6 22.9 -59 943 110 Hurric… 73
## 3 Felix 1995 8 12 12 23.6 -60.2 932 115 Hurric… 73
## 4 Felix 1995 8 12 18 24.3 -61 929 120 Hurric… 73
## 5 Felix 1995 8 13 0 25.1 -61.6 930 115 Hurric… 74
## 6 Felix 1995 8 13 6 25.9 -61.9 937 105 Hurric… 74
## 7 Felix 1995 8 13 12 26.6 -62.3 942 100 Hurric… 74
## 8 Luis 1995 9 1 6 15.8 -42.6 958 105 Hurric… 93
## 9 Luis 1995 9 1 12 16.2 -43.6 950 115 Hurric… 93
## 10 Luis 1995 9 1 18 16.5 -44.7 948 115 Hurric… 93
## # … with 189 more rows
```
In this example we’ve created a subset of `storms_tbl` that only includes observations where the `pressure` variable is less than or equal to 960 and the `wind` variable is greater than or equal to 100\. Both conditions must be met for an observation to be included in the resulting tibble. The conditions are not combined as an either/or operation.
This is probably starting to become repetitious, but there are a few features of `filter` that we should note:
* Each expression that performs a comparison is not surrounded by quotes. This makes sense, because the expression is meant to be evaluated to return a logical vector – it is not “a value”.
* As usual, the result produced by `filter` in our example was printed to the Console. The `filter` function did not change the original `storms_tbl` in any way (no side effects!).
* The `filter` function will return the same kind of data object it is working on: it returns a data frame if our data was originally in a data frame, and a tibble if it was a tibble.
We can achieve the same result as the above example in a different way. This involves the `&` operator:
```
filter(storms_tbl, pressure <= 960 & wind >= 100)
```
```
## # A tibble: 199 x 11
## name year month day hour lat long pressure wind type seasday
## <chr> <int> <int> <int> <int> <dbl> <dbl> <int> <int> <chr> <int>
## 1 Felix 1995 8 12 0 22.1 -57.8 955 100 Hurric… 73
## 2 Felix 1995 8 12 6 22.9 -59 943 110 Hurric… 73
## 3 Felix 1995 8 12 12 23.6 -60.2 932 115 Hurric… 73
## 4 Felix 1995 8 12 18 24.3 -61 929 120 Hurric… 73
## 5 Felix 1995 8 13 0 25.1 -61.6 930 115 Hurric… 74
## 6 Felix 1995 8 13 6 25.9 -61.9 937 105 Hurric… 74
## 7 Felix 1995 8 13 12 26.6 -62.3 942 100 Hurric… 74
## 8 Luis 1995 9 1 6 15.8 -42.6 958 105 Hurric… 93
## 9 Luis 1995 9 1 12 16.2 -43.6 950 115 Hurric… 93
## 10 Luis 1995 9 1 18 16.5 -44.7 948 115 Hurric… 93
## # … with 189 more rows
```
Once again, we created a subset of `storms_tbl` that only includes observations where the `pressure` variable is less than or equal to 960 *and* the `wind` variable is greater than or equal to 100\. However, rather than supplying `pressure <= 960` and `wind >= 100` as two arguments, we used a single R expression that combines them with the `&` operator. We’re pointing this out because we sometimes need to subset on an either/or basis, and in those cases we have to use this second approach. For example:
```
filter(storms_tbl, pressure <= 960 | wind >= 100)
```
```
## # A tibble: 266 x 11
## name year month day hour lat long pressure wind type seasday
## <chr> <int> <int> <int> <int> <dbl> <dbl> <int> <int> <chr> <int>
## 1 Felix 1995 8 12 0 22.1 -57.8 955 100 Hurric… 73
## 2 Felix 1995 8 12 6 22.9 -59 943 110 Hurric… 73
## 3 Felix 1995 8 12 12 23.6 -60.2 932 115 Hurric… 73
## 4 Felix 1995 8 12 18 24.3 -61 929 120 Hurric… 73
## 5 Felix 1995 8 13 0 25.1 -61.6 930 115 Hurric… 74
## 6 Felix 1995 8 13 6 25.9 -61.9 937 105 Hurric… 74
## 7 Felix 1995 8 13 12 26.6 -62.3 942 100 Hurric… 74
## 8 Felix 1995 8 13 18 27.4 -62.3 947 95 Hurric… 74
## 9 Felix 1995 8 14 0 28.2 -62.5 948 90 Hurric… 75
## 10 Felix 1995 8 14 6 29 -62.9 954 80 Hurric… 75
## # … with 256 more rows
```
This creates a subset of `storms_tbl` that only includes observations where the `pressure` variable is less than or equal to 960 *or* the `wind` variable is greater than or equal to 100\.
We’re also not restricted to using some combination of relational operators such as `==`, `>=` or `!=` when working with `filter`. The conditions specified in the `filter` function can be any expression that returns a logical vector. The only constraint is that the length of this logical vector has to equal the length of its input vectors.
Here’s an example. The group membership operator `%in%` (part of base R, not `dplyr`) is used to determine whether the values in one vector occur among the values in a second vector. It’s used like this: `vec1 %in% vec2`. This returns a logical vector whose values are `TRUE` if the corresponding element of `vec1` is in `vec2`, and `FALSE` otherwise. We can use the `%in%` operator with `filter` to subset rows by the values of one or more variables:
```
sub_storms_tbl <- filter(storms_tbl, name %in% c("Roxanne", "Marilyn", "Dolly"))
# print the output
sub_storms_tbl $ name
```
```
## [1] "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn"
## [8] "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn"
## [15] "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn"
## [22] "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn"
## [29] "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn"
## [36] "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn"
## [43] "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn"
## [50] "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn"
## [57] "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn"
## [64] "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn"
## [71] "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn" "Marilyn"
## [78] "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne"
## [85] "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne"
## [92] "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne"
## [99] "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne"
## [106] "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne"
## [113] "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne"
## [120] "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne"
## [127] "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Roxanne" "Dolly" "Dolly"
## [134] "Dolly" "Dolly" "Dolly" "Dolly" "Dolly" "Dolly" "Dolly"
## [141] "Dolly" "Dolly" "Dolly" "Dolly" "Dolly" "Dolly" "Dolly"
## [148] "Dolly" "Dolly" "Dolly" "Dolly" "Dolly" "Dolly" "Dolly"
## [155] "Dolly"
```
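Because any expression that returns a logical vector will do, we can also negate a condition with the `!` operator. For example, a sketch of the complementary subset, i.e. every observation whose storm name is *not* one of those three, looks like this:
```
# keep observations for every storm except Roxanne, Marilyn and Dolly
filter(storms_tbl, !(name %in% c("Roxanne", "Marilyn", "Dolly")))
```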
13\.3 Reordering observations with `arrange`
------------------------------------------
We use `arrange` to **reorder the rows** of an object containing our data. This is sometimes used when we want to inspect a dataset to look for associations among the different variables. This is hard to do if they are not ordered. Basic usage of `arrange` looks like this:
```
arrange(data_set, vname1, vname2, ...)
```
Yes, this is pseudo\-code. As always, the first argument, `data_set`, is the name of the object containing our data. We then include a series of one or more additional arguments, where each of these should be the name of a variable in `data_set`: `vname1` and `vname2` are names of the first two ordering variables, and the `...` is acting as placeholder for the remaining variables.
To see `arrange` in action, let’s construct a new version of `storms_tbl` where the rows have been reordered first by `wind`, and then by `pressure`:
```
arrange(storms_tbl, wind, pressure)
```
```
## # A tibble: 2,747 x 11
## name year month day hour lat long pressure wind type seasday
## <chr> <int> <int> <int> <int> <dbl> <dbl> <int> <int> <chr> <int>
## 1 Fran 1996 9 9 12 45.7 -72.3 1006 15 Extra… 101
## 2 Fran 1996 9 9 18 46 -71.1 1008 15 Extra… 101
## 3 Fran 1996 9 10 0 46.7 -70 1010 15 Extra… 102
## 4 Franc… 1998 9 13 6 31.7 -96.9 1002 20 Tropi… 105
## 5 Dean 1995 7 31 18 30.5 -96.5 1003 20 Tropi… 61
## 6 Erin 1995 8 4 12 33.2 -89.7 1003 20 Tropi… 65
## 7 Erin 1995 8 4 18 34.1 -90.2 1003 20 Tropi… 65
## 8 Erin 1995 8 5 0 34.8 -90.2 1003 20 Tropi… 66
## 9 Erin 1995 8 5 6 35.4 -90.1 1003 20 Tropi… 66
## 10 Erin 1995 8 5 12 36.3 -89.8 1003 20 Tropi… 66
## # … with 2,737 more rows
```
This creates a new version of `storms_tbl` where the rows are sorted according to the values of `wind` and `pressure` in ascending order – i.e. from smallest to largest. Since `wind` appears before `pressure` among the arguments, the values of `pressure` are only used to break ties within any particular value of `wind`.
For the sake of avoiding any doubt about how `arrange` works, let’s quickly review its behaviour:
* The variable names used as arguments of `arrange` are not surrounded by quotes.
* The `arrange` function did not change the original `storms_tbl` in any way.
* The `arrange` function will return the same kind of data object it is working on.
There isn’t much else we need to learn about `arrange`. By default, it sorts variables in ascending order. If we need it to sort a variable in descending order, we wrap the variable name in the `desc` function:
```
arrange(storms_tbl, wind, desc(pressure))
```
```
## # A tibble: 2,747 x 11
## name year month day hour lat long pressure wind type seasday
## <chr> <int> <int> <int> <int> <dbl> <dbl> <int> <int> <chr> <int>
## 1 Fran 1996 9 10 0 46.7 -70 1010 15 Extra… 102
## 2 Fran 1996 9 9 18 46 -71.1 1008 15 Extra… 101
## 3 Fran 1996 9 9 12 45.7 -72.3 1006 15 Extra… 101
## 4 Barry 1995 7 5 6 32 -72 1019 20 Extra… 35
## 5 Barry 1995 7 5 12 32 -72 1019 20 Extra… 35
## 6 Barry 1995 7 5 18 31.9 -72 1018 20 Extra… 35
## 7 Maril… 1995 9 30 12 34.6 -49.3 1016 20 Extra… 122
## 8 Maril… 1995 9 30 18 34.7 -50 1016 20 Extra… 122
## 9 Maril… 1995 10 1 0 34.8 -50.5 1016 20 Extra… 123
## 10 Maril… 1995 10 1 6 35 -51 1016 20 Extra… 123
## # … with 2,737 more rows
```
This creates a new version of `storms_tbl` where the rows are sorted according to the values of `wind` and `pressure`, in ascending and descending order, respectively.
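The arguments of `arrange` don’t have to be bare variable names; an expression involving one or more variables also works. As a small sketch, this orders the observations by their absolute latitude, i.e. from furthest from the equator to closest:
```
# sort by distance from the equator, furthest first
arrange(storms_tbl, desc(abs(lat)))
```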
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/helper-functions.html |
Chapter 14 Helper functions
===========================
14\.1 Introduction
------------------
There are a number of **helper functions** supplied by **dplyr**. Many of these are shown in the handy **dplyr** [cheat sheet](http://www.rstudio.com/resources/cheatsheets/). This is a short chapter. We aren’t going to try to cover every single helper function here. Instead, we’ll highlight some of the more useful ones and point out where the others tend to be used. We also assume that the `storms_tbl` and `iris_tbl` tibbles have already been constructed (look over the previous two chapters to see how this is done).
14\.2 Working with `select`
---------------------------
There are relatively few helper functions that can be used with `select`. The job of these functions is to make it easier to match variable names according to various criteria. We’ll look at the three simplest of these, but look at the examples in the help file for `select` and the [cheat sheet](http://www.rstudio.com/resources/cheatsheets/) to see what else is available.
We can select variables according to the sequence of characters used at the start of their name with the `starts_with` function. For example, to select all the variables in `iris_tbl` that begin with the word “Petal”, we use:
```
select(iris_tbl, starts_with("petal"))
```
```
## # A tibble: 150 x 2
## Petal.Length Petal.Width
## <dbl> <dbl>
## 1 1.4 0.2
## 2 1.4 0.2
## 3 1.3 0.2
## 4 1.5 0.2
## 5 1.4 0.2
## 6 1.7 0.4
## 7 1.4 0.3
## 8 1.5 0.2
## 9 1.4 0.2
## 10 1.5 0.1
## # … with 140 more rows
```
This returns a table containing just `Petal.Length` and `Petal.Width`. As one might expect, there is also a helper function to select variables according to characters used at the end of their name. This is the `ends_with` function (no surprises here). To select all the variables in `iris_tbl` that end with the word “Length”, we use:
```
select(iris_tbl, ends_with("length"))
```
```
## # A tibble: 150 x 2
## Sepal.Length Petal.Length
## <dbl> <dbl>
## 1 5.1 1.4
## 2 4.9 1.4
## 3 4.7 1.3
## 4 4.6 1.5
## 5 5 1.4
## 6 5.4 1.7
## 7 4.6 1.4
## 8 5 1.5
## 9 4.4 1.4
## 10 4.9 1.5
## # … with 140 more rows
```
Notice that we have to quote the character string that we want to match against. This is not optional. However, the `starts_with` and `ends_with` functions are not case sensitive by default. For example, we passed `starts_with` the argument `"petal"` instead of `"Petal"`, yet it still selected variables beginning with the character string `"Petal"`. If we want to match variables on a case\-sensitive basis, we need to set the `ignore.case` argument to `FALSE` in `starts_with` or `ends_with`.
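To see the difference, here is a sketch of a case sensitive version of the first example. With `ignore.case = FALSE`, the lower case `"petal"` no longer matches any variable, whereas the capitalised `"Petal"` still does:
```
# matches nothing: no variable starts with a lower case "petal"
select(iris_tbl, starts_with("petal", ignore.case = FALSE))
# matches Petal.Length and Petal.Width as before
select(iris_tbl, starts_with("Petal", ignore.case = FALSE))
```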
The last `select` helper function we will look at is called `contains`. This allows us to select variables based on a partial match anywhere in their name. Look at what happens if we pass `contains` the argument `"."`:
```
select(iris_tbl, contains("."))
```
```
## # A tibble: 150 x 4
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## <dbl> <dbl> <dbl> <dbl>
## 1 5.1 3.5 1.4 0.2
## 2 4.9 3 1.4 0.2
## 3 4.7 3.2 1.3 0.2
## 4 4.6 3.1 1.5 0.2
## 5 5 3.6 1.4 0.2
## 6 5.4 3.9 1.7 0.4
## 7 4.6 3.4 1.4 0.3
## 8 5 3.4 1.5 0.2
## 9 4.4 2.9 1.4 0.2
## 10 4.9 3.1 1.5 0.1
## # … with 140 more rows
```
This selects all the variables with a dot in their name.
There is nothing to stop us combining the different variable selection methods. For example, we can use this approach to select all the variables whose names start with the word “Petal” or end with the word “Length”:
```
select(iris_tbl, ends_with("length"), starts_with("petal"))
```
```
## # A tibble: 150 x 3
## Sepal.Length Petal.Length Petal.Width
## <dbl> <dbl> <dbl>
## 1 5.1 1.4 0.2
## 2 4.9 1.4 0.2
## 3 4.7 1.3 0.2
## 4 4.6 1.5 0.2
## 5 5 1.4 0.2
## 6 5.4 1.7 0.4
## 7 4.6 1.4 0.3
## 8 5 1.5 0.2
## 9 4.4 1.4 0.2
## 10 4.9 1.5 0.1
## # … with 140 more rows
```
When we apply more than one selection criterion like this, the `select` function returns all the variables that match *any* of the criteria, rather than just the set that meets all of them.
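If we ever need the variables that satisfy *every* criterion instead, one simple workaround is to nest one `select` inside another, so that the second call only sees the columns kept by the first. A sketch of this idea:
```
# variables that start with "Petal" and also end with "Length"
select(select(iris_tbl, starts_with("petal")), ends_with("length"))
```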
14\.3 Working with `mutate` and `transmute`
-------------------------------------------
There are quite a few **helper functions** that can be used with `mutate`. These make it easier to carry out certain transformations that aren’t easy to do with base R functions. We won’t explore these here as they tend to be needed only in quite specific circumstances. However, in situations where we need to construct an unusual variable (for example, one that ranks the values of another variable), it’s always worth looking at that [handy cheat sheet](http://www.rstudio.com/resources/cheatsheets/) to see what options might be available.
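To give a flavour of what these helpers look like, `min_rank` is one of them: it ranks the values of a variable. A minimal sketch that ranks every observation in `storms_tbl` by wind speed, with the strongest winds getting rank 1, might be:
```
# rank observations by wind speed; desc() makes the strongest wind rank 1
mutate(storms_tbl, wind_rank = min_rank(desc(wind)))
```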
14\.4 Working with `filter`
---------------------------
There’s one `dplyr` **helper function** that works with `filter` that’s definitely worth knowing about: the `between` function. This is used to identify the values of a variable that lie inside a defined range:
```
filter(storms_tbl, between(pressure, 960, 970))
```
```
## # A tibble: 213 x 11
## name year month day hour lat long pressure wind type seasday
## <chr> <int> <int> <int> <int> <dbl> <dbl> <int> <int> <chr> <int>
## 1 Felix 1995 8 11 18 21.3 -56.5 965 90 Hurric… 72
## 2 Felix 1995 8 14 12 29.9 -63.4 962 80 Hurric… 75
## 3 Felix 1995 8 14 18 30.7 -64.1 962 75 Hurric… 75
## 4 Felix 1995 8 15 0 31.3 -65.1 962 75 Hurric… 76
## 5 Felix 1995 8 15 6 31.9 -66.2 964 75 Hurric… 76
## 6 Felix 1995 8 15 12 32.5 -67.4 968 70 Hurric… 76
## 7 Felix 1995 8 15 18 33.1 -68.8 965 70 Hurric… 76
## 8 Felix 1995 8 16 0 33.5 -70.1 963 70 Hurric… 77
## 9 Felix 1995 8 16 6 34 -71.3 966 70 Hurric… 77
## 10 Felix 1995 8 16 12 34.6 -72.4 968 70 Hurric… 77
## # … with 203 more rows
```
This example filters the `storms_tbl` dataset so that only values of `pressure` between 960 and 970 (inclusive) are retained. We could do the same thing using a combination of `>=` and `<=`, but the `between` function makes things a bit easier to read.
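For comparison, a sketch of the equivalent call written out with relational operators would be:
```
# the same subset without the between() helper
filter(storms_tbl, pressure >= 960, pressure <= 970)
```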
Chapter 15 Grouping and summarising data
========================================
This chapter will explore the `summarise` and `group_by` verbs. These two verbs are considered together because they are often used together, and their usage is quite distinct from the other **dplyr** verbs we’ve encountered:
* The `group_by` function adds information into a data object (e.g. a data frame or tibble), which makes subsequent calculations happen on a group\-specific basis.
* The `summarise` function is a data reduction function that calculates single\-number summaries of one or more variables, respecting the group structure if present.
### 15\.0\.1 Getting ready
We can start a new script by loading and attaching the **dplyr** package:
```
library("dplyr")
```
We’re going to use both the `storms` and `iris` data sets in the **nasaweather** and **datasets** packages, respectively. The **datasets** package ships with R and is automatically loaded and attached at start up, but we need to make the **nasaweather** package available ourselves:
```
library("nasaweather")
```
Finally, let’s convert both data sets to a tibble so they print to the Console cleanly:
```
storms_tbl <- tbl_df(storms)
iris_tbl <- tbl_df(iris)
```
15\.1 Summarising variables with `summarise`
--------------------------------------------
We use `summarise` to **calculate summaries of variables** in an object containing our data. We do this kind of calculation all the time when analysing data. In terms of pseudo\-code, usage of `summarise` looks like this:
```
summarise(data_set, <expression1>, <expression2>, ...)
```
The first argument, `data_set`, must be the name of the data frame or tibble containing our data. We then include a series of one or more additional arguments, each of which is a valid R expression involving at least one variable in `data_set`. These are given by the pseudo\-code placeholder `<expression1>, <expression2>, ...`, where `<expression1>` and `<expression2>` represent the first two expressions, and the `...` is acting as placeholder for the remaining expressions. These expressions can be any calculation involving R functions. The only constraint is that they must generate **a single value** when evaluated.
That last sentence was important. It’s easy to use `summarise` if we can remember one thing: `summarise` is designed to work with functions that take a vector as their input and return a single value (i.e. a vector of length one). Any calculation that does this can be used with `summarise`.
The `summarise` verb is best understood by example. The R function called `mean` takes a vector of numbers (several numbers) and calculates their arithmetic mean (one number). We can use `mean` with `summarise` to calculate the mean of the `Petal.Length` and `Petal.Width` variables in `iris_tbl` like this:
```
summarise(iris_tbl, mean(Petal.Length), mean(Petal.Width))
```
```
## # A tibble: 1 x 2
## `mean(Petal.Length)` `mean(Petal.Width)`
## <dbl> <dbl>
## 1 3.76 1.20
```
Notice what kind of object `summarise` returns: it’s a tibble with only one row and two columns. There are two columns because we calculated two means, and there is one row containing these means. Simple. There are a few other things to note about how `summarise` works:
* As with all **dplyr** functions, the expression that performs the required summary calculation is not surrounded by quotes, because it is an expression that “does a calculation” rather than a character string.
* The order of the columns in the resulting tibble is the same as the order in which the expressions were supplied as arguments.
* Even though the dimensions of the output object have changed, `summarise` returns the same kind of data object as its input. It returns a data frame if our data was originally in a data frame, or a tibble if it was in a tibble.
Notice that `summarise` used the expressions to name the variables. Variable names like `mean(Petal.Length)` and `mean(Petal.Width)` are not very helpful. They’re quite long for one. More problematically, they contain special reserved characters like `(`, which makes referring to columns in the resulting tibble more difficult than it needs to be:
```
# make a summary tibble and assign it a name
iris_means <- summarise(iris_tbl, mean(Petal.Length), mean(Petal.Width))
# extract the mean petal length
iris_means$`mean(Petal.Length)`
```
```
## [1] 3.758
```
We have to place ‘back ticks’ (as above) or ordinary quotes around the name to extract the new column when it includes special characters.
It’s better to avoid using the default names. The `summarise` function can name the new variables at the same time as they are created. Predictably, we do this by naming the arguments using `=`, placing the name we require on the left hand side. For example:
```
summarise(iris_tbl, Mean_PL = mean(Petal.Length), Mean_PW = mean(Petal.Width))
```
```
## # A tibble: 1 x 2
## Mean_PL Mean_PW
## <dbl> <dbl>
## 1 3.76 1.20
```
There are very many base R functions that can be used with `summarise`. A few useful ones for calculating summaries of numeric variables are:
* `min` and `max` calculate the minimum and maximum values of a vector.
* `mean` and `median` calculate the mean and median of a numeric vector.
* `sd` and `var` calculate the standard deviation and variance of a numeric vector.
We can combine more than one function in a `summarise` expression as long as it returns a single number. This means we can do arbitrarily complicated calculations in a single step. For example, if we need to know the ratio of the mean and median values of petal length and petal width in `iris_tbl`, we use:
```
summarise(iris_tbl,
Mn_Md_PL = mean(Petal.Length) / median(Petal.Length),
Mn_Md_PW = mean(Petal.Width) / median(Petal.Width))
```
```
## # A tibble: 1 x 2
## Mn_Md_PL Mn_Md_PW
## <dbl> <dbl>
## 1 0.864 0.923
```
Notice that we placed each argument on a separate line in this example. This is just a style issue—we don’t have to do this, but since R doesn’t care about white space, we can use new lines and spaces to keep everything a bit more human\-readable. It pays to organise `summarise` calculations like this as they become longer. It allows us to see the logic of the calculations more easily, and helps us spot potential errors when they occur.
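As noted above, `summarise` will accept any function that takes a vector and returns a single value, including functions we write ourselves. Here is a minimal sketch; the function name `my_range` is just an illustrative choice, not something supplied by **dplyr**:
```
# a home-made summary function: the range (max minus min) of a vector
my_range <- function(x) max(x) - min(x)
# it can be used inside summarise just like a built-in summary function
summarise(iris_tbl, PL_range = my_range(Petal.Length))
```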
### 15\.1\.1 Helper functions
There are a small number **dplyr** helper functions that can be used with `summarise`. These generally provide summaries that aren’t available directly using base R functions. For example, the `n_distinct` function is used to calculate the number of distinct values in a variable:
```
summarise(iris_tbl, Num.PL.Vals = n_distinct(Petal.Length))
```
```
## # A tibble: 1 x 1
## Num.PL.Vals
## <int>
## 1 43
```
This tells us that there are 43 unique values of `Petal.Length`. We won’t explore any others here. The [handy cheat sheet](http://www.rstudio.com/resources/cheatsheets/) is worth looking over to see what additional options are available.
15\.2 Grouped operations using `group_by`
-----------------------------------------
Performing a calculation with one or more variables over the whole data set is useful, but very often we also need to carry out an operation on different subsets of our data. For example, it’s probably more useful to know how the mean sepal and petal traits vary among the different species in the `iris_tbl` data set, rather than knowing the overall mean of these traits. We could calculate separate means by using `filter` to create different subsets of `iris_tbl`, and then using `summarise` on each of these to calculate the relevant means. This would get the job done, but it’s not very efficient, and it soon becomes tiresome when we have to work with many groups.
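To make that concrete, here is a sketch of the manual approach for just one species; we would have to repeat these two steps for every species in the data set:
```
# subset the data down to a single species...
setosa_only <- filter(iris_tbl, Species == "setosa")
# ...then summarise that subset on its own
summarise(setosa_only, Mean_PL = mean(Petal.Length), Mean_PW = mean(Petal.Width))
```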
The `group_by` function provides a more elegant solution to this kind of problem. It doesn’t do all that much on its own though. All the `group_by` function does is add a bit of grouping information to a tibble or data frame. In effect, it defines subsets of data on the basis of one or more **grouping variables**. The magic happens when the grouped object is used with a **dplyr** verb like `summarise` or `mutate`. Once a data frame or tibble has been tagged with grouping information, operations that involve these (and other) verbs are carried out on separate subsets of the data, where the subsets correspond to the different values of the grouping variable(s).
Basic usage of `group_by` looks like this:
```
group_by(data_set, vname1, vname2, ...)
```
The first argument, `data_set` (“data object”), must be the name of the object containing our data. We then have to include one or more additional arguments, where each of these is the name of a variable in `data_set`. I have expressed this as `vname1, vname2, ...`, where `vname1` and `vname2` are names of the first two variables, and the `...` is acting as placeholder for the remaining variables.
As usual, it’s much easier to understand how `group_by` works once we’ve seen it in action. We’ll illustrate `group_by` by using it alongside `summarise` with the `storms_tbl` data set. We’re aiming to calculate the mean wind speed for every type of storm. The first step is to use `group_by` to add grouping information to `storms_tbl`:
```
group_by(storms_tbl, type)
```
```
## # A tibble: 2,747 x 11
## # Groups: type [4]
## name year month day hour lat long pressure wind type seasday
## <chr> <int> <int> <int> <int> <dbl> <dbl> <int> <int> <chr> <int>
## 1 Allis… 1995 6 3 0 17.4 -84.3 1005 30 Tropi… 3
## 2 Allis… 1995 6 3 6 18.3 -84.9 1004 30 Tropi… 3
## 3 Allis… 1995 6 3 12 19.3 -85.7 1003 35 Tropi… 3
## 4 Allis… 1995 6 3 18 20.6 -85.8 1001 40 Tropi… 3
## 5 Allis… 1995 6 4 0 22 -86 997 50 Tropi… 4
## 6 Allis… 1995 6 4 6 23.3 -86.3 995 60 Tropi… 4
## 7 Allis… 1995 6 4 12 24.7 -86.2 987 65 Hurri… 4
## 8 Allis… 1995 6 4 18 26.2 -86.2 988 65 Hurri… 4
## 9 Allis… 1995 6 5 0 27.6 -86.1 988 65 Hurri… 5
## 10 Allis… 1995 6 5 6 28.5 -85.6 990 60 Tropi… 5
## # … with 2,737 more rows
```
Compare this to the output produced when we print the original `storms_tbl` data set:
```
storms_tbl
```
```
## # A tibble: 2,747 x 11
## name year month day hour lat long pressure wind type seasday
## <chr> <int> <int> <int> <int> <dbl> <dbl> <int> <int> <chr> <int>
## 1 Allis… 1995 6 3 0 17.4 -84.3 1005 30 Tropi… 3
## 2 Allis… 1995 6 3 6 18.3 -84.9 1004 30 Tropi… 3
## 3 Allis… 1995 6 3 12 19.3 -85.7 1003 35 Tropi… 3
## 4 Allis… 1995 6 3 18 20.6 -85.8 1001 40 Tropi… 3
## 5 Allis… 1995 6 4 0 22 -86 997 50 Tropi… 4
## 6 Allis… 1995 6 4 6 23.3 -86.3 995 60 Tropi… 4
## 7 Allis… 1995 6 4 12 24.7 -86.2 987 65 Hurri… 4
## 8 Allis… 1995 6 4 18 26.2 -86.2 988 65 Hurri… 4
## 9 Allis… 1995 6 5 0 27.6 -86.1 988 65 Hurri… 5
## 10 Allis… 1995 6 5 6 28.5 -85.6 990 60 Tropi… 5
## # … with 2,737 more rows
```
There is almost no change in the printed information—`group_by` really doesn’t do much on its own. The main change is that the tibble resulting from the `group_by` operation has a little bit of additional information printed at the top: `Groups: type [4]`. The `Groups: type` part of this tells us that the tibble is grouped by the `type` variable and nothing else. The `[4]` part tells us that there are 4 different groups.
The only thing `group_by` did was add this grouping information to a copy of `storms_tbl`. The original `storms_tbl` object was not altered in any way. If we actually want to do anything useful with the result we need to assign it a name so that we can work with it:
```
storms_grouped <- group_by(storms_tbl, type)
```
Now we have a grouped tibble called `storms_grouped`, where the groups are defined by the values of `type`. Any operations on this tibble will now be performed on a “by group” basis. To see this in action, we use `summarise` to calculate the mean wind speed:
```
summarise(storms_grouped, mean.wind = mean(wind))
```
```
## # A tibble: 4 x 2
## type mean.wind
## <chr> <dbl>
## 1 Extratropical 40.1
## 2 Hurricane 84.7
## 3 Tropical Depression 27.4
## 4 Tropical Storm 47.3
```
When we used `summarise` on an ungrouped tibble the result was a tibble with one row: the overall global mean. Now the resulting tibble has four rows, one for each value of `type`. The `type` variable in the new tibble tells us what these values are; the `mean.wind` variable shows the mean wind speed for each value.
### 15\.2\.1 More than one grouping variable
What if we need to calculate summaries using more than one grouping variable? The workflow is unchanged. Let’s assume we want to know the mean wind speed and atmospheric pressure associated with each storm type in each year. We first make a grouped copy of the data set with the appropriate grouping variables:
```
# group the storms_tbl data by storm type and year + assign the result a name
storms_grouped <- group_by(storms_tbl, type, year)
# print the grouped tibble to check the grouping information
storms_grouped
```
```
## # A tibble: 2,747 x 11
## # Groups: type, year [24]
## name year month day hour lat long pressure wind type seasday
## <chr> <int> <int> <int> <int> <dbl> <dbl> <int> <int> <chr> <int>
## 1 Allis… 1995 6 3 0 17.4 -84.3 1005 30 Tropi… 3
## 2 Allis… 1995 6 3 6 18.3 -84.9 1004 30 Tropi… 3
## 3 Allis… 1995 6 3 12 19.3 -85.7 1003 35 Tropi… 3
## 4 Allis… 1995 6 3 18 20.6 -85.8 1001 40 Tropi… 3
## 5 Allis… 1995 6 4 0 22 -86 997 50 Tropi… 4
## 6 Allis… 1995 6 4 6 23.3 -86.3 995 60 Tropi… 4
## 7 Allis… 1995 6 4 12 24.7 -86.2 987 65 Hurri… 4
## 8 Allis… 1995 6 4 18 26.2 -86.2 988 65 Hurri… 4
## 9 Allis… 1995 6 5 0 27.6 -86.1 988 65 Hurri… 5
## 10 Allis… 1995 6 5 6 28.5 -85.6 990 60 Tropi… 5
## # … with 2,737 more rows
```
We grouped the `storms_tbl` data by `type` and `year` and assigned the grouped tibble the name `storms_grouped`. When we print this to the Console we see `Groups: type, year [24]` near the top, which tells us that the tibble is grouped by two variables with 24 unique combinations of values. We then calculate the mean wind speed and pressure of each storm type in each year:
```
summarise(storms_grouped,
mean_wind = mean(wind),
mean_pressure = mean(pressure))
```
```
## # A tibble: 24 x 4
## # Groups: type [?]
## type year mean_wind mean_pressure
## <chr> <int> <dbl> <dbl>
## 1 Extratropical 1995 38.7 995.
## 2 Extratropical 1996 40.4 991.
## 3 Extratropical 1997 38.9 1000.
## 4 Extratropical 1998 42.9 991.
## 5 Extratropical 1999 38.9 992.
## 6 Extratropical 2000 39.7 997.
## 7 Hurricane 1995 82.0 970.
## 8 Hurricane 1996 85.5 969.
## 9 Hurricane 1997 80.4 976.
## 10 Hurricane 1998 87.0 972.
## # … with 14 more rows
```
This calculates the mean wind speed and atmospheric pressure for each combination of `type` and `year`. The first line shows us that the mean wind speed and pressure associated with extra\-tropical storms in 1995 were 38\.7 mph and 995 millibars, the second line shows us that the mean wind speed and pressure associated with extra\-tropical storms in 1996 were 40\.4 mph and 991 millibars, and so on. There are 24 rows in total because there were 24 unique combinations of `type` and `year` in the original `storms_tbl`.
### 15\.2\.2 Using `group_by` with other verbs
The `summarise` function is the only **dplyr** verb we’ll use with grouped tibbles in this book. However, all the main verbs alter their behaviour to respect group identity when used with tibbles with grouping information. When `mutate` or `transmute` are used with a grouped object they still add new variables, but now the calculations occur “by group”. Here’s an example using `transmute`:
```
# group the storms data by storm name + assign the result a name
storms_grouped <- group_by(storms_tbl, name)
# create a 'mean centred' wind speed variable
transmute(storms_grouped, wind_centred = wind - mean(wind))
```
```
## # A tibble: 2,747 x 2
## # Groups: name [79]
## name wind_centred
## <chr> <dbl>
## 1 Allison -14.4
## 2 Allison -14.4
## 3 Allison -9.39
## 4 Allison -4.39
## 5 Allison 5.61
## 6 Allison 15.6
## 7 Allison 20.6
## 8 Allison 20.6
## 9 Allison 20.6
## 10 Allison 15.6
## # … with 2,737 more rows
```
In this example we calculated the “group mean\-centred” version of the wind variable. The new `wind_centred` variable contains the difference between each wind speed observation and the mean wind speed of the storm (i.e. the `name`) it belongs to.
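The same calculation also works with `mutate`; the only difference is that `mutate` keeps every original column alongside the new one. A sketch (output not shown):
```
# as above, but retaining all the original columns plus wind_centred
mutate(storms_grouped, wind_centred = wind - mean(wind))
```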
15\.3 Removing grouping information
-----------------------------------
On occasion it’s necessary to remove grouping information from a data object. This is most often done when working with “pipes” (the topic of the next chapter) when we need to revert back to operating on the whole data set. The `ungroup` function removes grouping information:
```
ungroup(storms_grouped)
```
```
## # A tibble: 2,747 x 11
## name year month day hour lat long pressure wind type seasday
## <chr> <int> <int> <int> <int> <dbl> <dbl> <int> <int> <chr> <int>
## 1 Allis… 1995 6 3 0 17.4 -84.3 1005 30 Tropi… 3
## 2 Allis… 1995 6 3 6 18.3 -84.9 1004 30 Tropi… 3
## 3 Allis… 1995 6 3 12 19.3 -85.7 1003 35 Tropi… 3
## 4 Allis… 1995 6 3 18 20.6 -85.8 1001 40 Tropi… 3
## 5 Allis… 1995 6 4 0 22 -86 997 50 Tropi… 4
## 6 Allis… 1995 6 4 6 23.3 -86.3 995 60 Tropi… 4
## 7 Allis… 1995 6 4 12 24.7 -86.2 987 65 Hurri… 4
## 8 Allis… 1995 6 4 18 26.2 -86.2 988 65 Hurri… 4
## 9 Allis… 1995 6 5 0 27.6 -86.1 988 65 Hurri… 5
## 10 Allis… 1995 6 5 6 28.5 -85.6 990 60 Tropi… 5
## # … with 2,737 more rows
```
Looking at the top of the printed summary, we can see that the `Groups:` part is now gone—the `ungroup` function effectively created a copy of `storms_grouped` that is identical to the original `storms_tbl` tibble.
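To see why this matters, here is a small sketch that reuses `storms_grouped` (currently grouped by `name`). Summarising the grouped version gives one row per storm, while summarising the ungrouped version collapses everything down to a single row:
```
# grouped: one mean wind speed per storm name
summarise(storms_grouped, mean.wind = mean(wind))
# ungrouped: a single overall mean wind speed
summarise(ungroup(storms_grouped), mean.wind = mean(wind))
```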
Chapter 16 Building pipelines
=============================
This chapter will introduce something called the pipe operator: `%>%`. We don’t often use the various **dplyr** verbs in isolation. Instead, starting with our raw data, they are combined in a sequence to prepare the data for further analysis (e.g. making a plot, calculating summaries, fitting a statistical model, and so on). The function of the pipe operator is to make the data wrangling part of such a workflow as transparent as possible.
16\.1 Why do we need ‘pipes’?
-----------------------------
We’ve seen that carrying out calculations on a per\-group basis can be achieved by grouping a tibble, assigning this a name, and then applying the `summarise` function to the new tibble. For example, if we need the mean wind speed for every storm recorded in `storms_tbl`, we could use:
```
# 1. make a grouped copy of the storms data
storms_grouped <- group_by(storms_tbl, name)
# 2. calculate the mean wind speed for each storm
summarise(storms_grouped, mean.wind = mean(wind))
```
```
## # A tibble: 79 x 2
## name mean.wind
## <chr> <dbl>
## 1 Alberto 63.0
## 2 Alex 35.4
## 3 Allison 44.4
## 4 Ana 32.1
## 5 Arlene 39.0
## 6 Arthur 35.2
## 7 Barry 39.8
## 8 Bertha 60
## 9 Beryl 36.1
## 10 Bill 50.6
## # … with 69 more rows
```
There’s nothing wrong with this way of doing things. However, this approach to building up an analysis is quite verbose—especially if an analysis involves more than a couple of steps—because we have to keep storing intermediate results. It also tends to clutter the global environment with lots of data objects we don’t need.
One way to make things more concise is to use a nested function call (we examined these in the [Using functions](using-functions.html#using-functions) chapter), like this:
```
summarise(group_by(storms_tbl, type), mean.wind = mean(wind))
```
```
## # A tibble: 4 x 2
## type mean.wind
## <chr> <dbl>
## 1 Extratropical 40.1
## 2 Hurricane 84.7
## 3 Tropical Depression 27.4
## 4 Tropical Storm 47.3
```
Here we placed the `group_by` function call inside the list of arguments to `summarise`. Remember, you have to read nested function calls from the inside out to understand what they are doing. Apart from grouping by `type` rather than `name`, this carries out the same kind of two\-step calculation as the previous example, and we get the result without having to store intermediate data. However, there are a couple of good reasons why this approach is not advised:
* Experienced R users probably don’t mind this approach because they’re used to nested function calls. Nonetheless, no reasonable person would argue that nesting functions inside one another is intuitive. Reading outward from the inside of a large number of nested functions is hard work.
* Even for experienced R users, using function nesting is a fairly error prone approach. For example, it’s very easy to accidentally put an argument or two on the wrong side of a closing `)`. If we’re lucky this will produce an error and we’ll catch the problem. If we’re not, we may just end up with nonsense in the output.
There’s a third option for combining several functions that has the dual benefit of keeping our code concise and readable, while avoiding the need to clutter the global environment with intermediate objects. This third approach involves something called the “pipe” operator: `%>%` (no spaces allowed). This isn’t part of base R though. Instead, it’s part of a package called **magrittr**, but there’s no need to install it separately if we’re using **dplyr**, because **dplyr** imports it for us.
The `%>%` operator has become very popular in recent years. The main reason for this is because it allows us to specify a chain of function calls in a (reasonably) human readable format. Here’s how we write the previous example using the pipe operator `%>%`:
```
storms_tbl %>% group_by(., type) %>% summarise(., mean.wind = mean(wind))
```
```
## # A tibble: 4 x 2
## type mean.wind
## <chr> <dbl>
## 1 Extratropical 40.1
## 2 Hurricane 84.7
## 3 Tropical Depression 27.4
## 4 Tropical Storm 47.3
```
How do we make sense of this? Every time we see the `%>%` operator it means the following: take whatever is produced by the left hand expression and use it as an argument in the function on the right hand side. The `.` serves as a placeholder for the location of the corresponding argument. This means we can understand what a sequence of calculations is doing by reading from left to right, just as we would read the words in a book. This example says, take the `storms_tbl` data, group it by `type`, then take the resulting grouped tibble and apply the summarise function to it to calculate the mean of `wind`. It is exactly the same calculation we did above.
When using the pipe operator we can often leave out the `.` placeholder. Remember, this signifies the argument of the function on the right of `%>%` that is associated with the result from the left of `%>%`. If we choose to leave out the `.`, the pipe operator assumes we meant to slot it into the first argument. This means we can simplify our example even more:
```
storms_tbl %>% group_by(type) %>% summarise(mean.wind = mean(wind))
```
```
## # A tibble: 4 x 2
## type mean.wind
## <chr> <dbl>
## 1 Extratropical 40.1
## 2 Hurricane 84.7
## 3 Tropical Depression 27.4
## 4 Tropical Storm 47.3
```
This is why the first argument of a **dplyr** verb is always the data object. This convention ensures that we can use `%>%` without explicitly specifying the argument to match against.
Remember, R does not care about white space, which means we can break a chained set of function calls over several lines if it becomes too long:
```
storms_tbl %>%
group_by(type) %>%
summarise(mean.wind = mean(wind))
```
```
## # A tibble: 4 x 2
## type mean.wind
## <chr> <dbl>
## 1 Extratropical 40.1
## 2 Hurricane 84.7
## 3 Tropical Depression 27.4
## 4 Tropical Storm 47.3
```
In fact, many **dplyr** users always place each part of a pipeline onto a new line to help with overall readability.
Finally, when we need to assign the result of a chain of functions, we have to break the left\-to\-right rule a bit, placing the assignment at the beginning:
```
new_data <-
storms_tbl %>%
group_by(type) %>%
summarise(mean.wind = mean(wind))
```
(Actually, there is a rightward assignment operator, `->`, but let’s not worry about that)
#### Why is `%>%` called the ‘pipe’ operator?
The `%>%` operator takes the output from one function and “pipes it” to another as the input. It’s called ‘the pipe’ for the simple reason that it allows us to create an analysis ‘pipeline’ from a series of function calls. Incidentally, if you Google the phrase “Magritte pipe” you’ll see that **magrittr** is a very clever name for an R package.
One final piece of advice: learn how to use the `%>%` method of chaining together functions. Why? Because it’s the simplest and cleanest method for doing this, many of the examples in the **dplyr** help files and on the web use it, and the majority of people carrying out real world data wrangling with **dplyr** rely on piping.
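As pipelines grow they keep the same left\-to\-right shape. Here is a sketch of a slightly longer pipeline that filters, groups and then summarises in one readable sequence; the choice of 1997 and the `n()` helper, which counts the rows in each group, are purely illustrative:
```
storms_tbl %>%
  filter(year == 1997) %>%            # keep only the 1997 observations
  group_by(type) %>%                  # group what is left by storm type
  summarise(mean.wind = mean(wind),   # ...then summarise each group
            n.obs = n())
```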
Chapter 18 Introduction to **ggplot2**
======================================
One of the main reasons data analysts turn to R is for its strong data visualisation capabilities. The R ecosystem includes many different packages that support data visualisation. The three that are most widely used are: 1\) the base graphics system, which uses the **graphics** package; 2\) the **lattice** package; and 3\) the **ggplot2** package. Each of these has its own strengths and weaknesses:
* Base graphics is part of base R—it’s available immediately after we start R. It’s very flexible and allows us to construct more or less any plot we like. This flexibility comes at a cost though. It’s quite easy to get up and running with base R graphics—there are functions like `plot` and `hist` for making commonly\-used figures—but building complex figures is time\-consuming. We have to write a lot of R code to prepare even moderately complex plots, there are a huge number of graphical parameters to learn, and many of the standard plotting functions are inconsistent in the way they work.
* The **lattice** package was developed by Deepayan Sarkar to implement ideas of Bill Cleveland in his 1993 book, Visualizing Data. The package implements something called Trellis graphics, a very useful approach for graphical exploratory data analysis. Trellis graphics are designed to help us visualise complicated, multiple variable relationships. The **lattice** package has many “high level” functions (e.g. `xyplot`) to make this process easy, but still retains much of the fine\-grained control that characterises the standard graphics system. The **lattice** package is very powerful but can be hard to learn.
* The **ggplot2** package was developed by Hadley Wickham to implement some of the ideas in a book called The Grammar of Graphics by Wilkinson (2005\). It produces Trellis\-like graphics, but is quite different from **lattice** in the way it goes about this. It uses its own mini\-language to define graphical objects, adopting the language of Wilkinson’s book to define these. It takes a little while to learn the basics, but once these have been mastered it’s very easy to produce sophisticated plots with very little R code. The downside of working with **ggplot2** is that it isn’t as flexible as base graphics.
We aren’t going to survey all three of these plotting systems. There isn’t space, and in any case, it’s possible to meet most data visualisation needs by becoming proficient with just one of them. This book focuses on **ggplot2**. In many ways the **ggplot2** package hits the ‘sweet spot’ between standard graphics and **lattice**. It allows us to produce complex visualisations without the need to write lines and lines of R code, but is still flexible enough to allow us to tweak the appearance of a figure so that it meets our specific needs.
18\.1 The anatomy of ggplot2
----------------------------
The easiest way to learn **ggplot2** is by seeing it in action. Before we dive in it’s worth surveying the essential features of the **ggplot2** ‘grammar’. Most of this is fairly abstract and won’t make sense on first reading. This is fine. Abstract ideas like ‘aesthetics’ and ‘geoms’ will start to make sense as we work through a range of different examples in the next few chapters.
**ggplot2** is designed to reflect Wilkinson’s grammar of graphics. The **ggplot2** version of this grammar revolves around the idea of **layers**. The underlying idea is that we construct a visualisation in a structured manner by defining one or more of these layers, which together with a few other components define a complete **ggplot2** object. Each layer may have its own data or it may share its data with another layer, and each layer displays its data in a specific way. The resulting **ggplot2** object is defined by a combination of:
1. a default data set along with a set of mappings from variables to aesthetics,
2. one or more layers, each comprising a number of components,
3. one scale for each aesthetic mapping,
4. a coordinate system for the plot,
5. a faceting specification that tells **ggplot2** how to define a multi\-panel plot.
We’ll skim over each of these in turn before moving onto the business of actually using **ggplot2**.
### 18\.1\.1 Layers
Each layer in a **ggplot2** plot may have five different components, though we don’t necessarily have to specify all of these:
* The **data**. At a minimum, every plot needs some data. Unlike base R graphics, **ggplot2** always accepts data in one format, an R data frame (or tibble). Each layer can be associated with its own data set. However, we don’t have to explicitly add data to each layer. When we choose not to specify the data set for a layer **ggplot2** will try to use the default data if it has been defined.
* A set of **aesthetic mappings**. These describe how variables in the data are associated with the aesthetic properties of the layer. Commonly encountered aesthetic properties include position (x and y locations), colour, and size of the objects on a plot. Each layer can be associated with its own unique aesthetic mappings. When we choose not to specify these for a layer **ggplot2** will use the defaults if they were defined.
* A **geometric object**, called a ‘geom’. The geom tells **ggplot2** how to actually represent the layer—they refer to the objects we can actually see on a plot, such as points, lines or bars. Each geom only works with a particular set of aesthetic mappings. We always have to define at least one geom when using **ggplot2**.
* A **stat**. These take the raw data in the data frame and transform it in some useful way. A stat allows us to produce summaries of our raw data. We won’t use them in this book because we can produce the same kinds of figures by first processing our data with **dplyr**. Nonetheless, the stat facility is one of the things that makes **ggplot2** particularly useful for exploratory data analysis.
* A **position adjustment**. These apply small tweaks to the position of layer elements. These are most often used in plots like bar plots where we need to define how the bars are plotted, but they can occasionally be useful in other kinds of plots.
### 18\.1\.2 Scales
The scale part of a **ggplot2** object controls how the data is mapped to the aesthetic attributes. A scale takes the data and converts it into something we can perceive, such as an x/y location, or the colour and size of points in a plot. A scale must be defined for every aesthetic in a plot. It doesn’t make sense to define an aesthetic mapping without a scale because there is no way for **ggplot2** to know how to go from the data to the aesthetics without one.
Scales operate on all the data in a plot in the same way. If we include several layers they all have to use the same scale for any shared aesthetic mappings. This behaviour is sensible because it ensures that the information that is displayed is consistent.
If we choose not to explicitly define a scale for an aesthetic **ggplot2** will use a default. Very often this will be a ‘sensible’ choice, which means we can get quite a long way with **ggplot2** without ever really understanding scales. We won’t worry too much about them, though we will take a brief look at a few of the more common options.
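To give a flavour of what an explicit scale looks like, here is a small preview (it borrows the `storms` data and the colour aesthetic that we only set up later in this chapter, so treat it as a sketch rather than something to run right now):
```
# assumes ggplot2 and the nasaweather 'storms' data are loaded;
# scale_colour_gradient replaces the default continuous colour scale
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
  geom_point() +
  scale_colour_gradient(low = "darkblue", high = "skyblue")
```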
### 18\.1\.3 Coordinate system
A **ggplot2** coordinate system takes the position of objects (e.g. points and lines) and maps them onto the 2d plane that a plot lives on. Most people are already very familiar with the most common coordinate system (even if they didn’t realise it). The Cartesian coordinate system is the one we’ve all been using ever since we first constructed a graph with paper and pencil at school. All the most common statistical plots use this coordinate system so we won’t consider any others in this book.
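Because it is the default, spelling the Cartesian system out explicitly changes nothing. The sketch below (again previewing the `storms` example developed later) is equivalent to leaving the coordinate specification out altogether:
```
# coord_cartesian() is the default coordinate system, so adding it
# explicitly produces exactly the same plot as omitting it
ggplot(storms, aes(x = pressure, y = wind)) +
  geom_point() +
  coord_cartesian()
```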
### 18\.1\.4 Faceting
The idea behind faceting is very simple. Faceting allows us to break a data set up into subsets according to the unique values of one or two variables, and then produce a separate plot for each subset. The result is a multipanel plot, where each panel shares the same layers, scales, etc. The data is the only thing that varies from panel to panel. The result is a kind of ‘Trellis plot’, similar to those produced by the **lattice** package. Faceting is a very powerful tool that allows us to slice up our data in different ways and really understand the relationship between different variables. Together with aesthetic mappings, faceting allows us to summarise relationships among 4\-6 variables in a single plot.
18\.2 A quick introduction to ggplot2
-------------------------------------
Now that we’ve briefly reviewed the **ggplot2** grammar we can start learning how to use it. The package uses this grammar to define a sort of mini\-language within R, using functions to specify components like aesthetics and geoms, which are combined with data to define a **ggplot2** graphics object. Once we’ve constructed a suitable object we can use it to display our graphic on the computer screen or save in a particular graphics format (e.g. PDF, PNG, JPEG, and so on).
Rather than orientating this introduction around each of the key functions, we’re going to develop a simple example to help us see how **ggplot2** works. Many of the key ideas about how **ggplot2** works can be taken away from this one example, so it’s definitely worth investing the time to understand it, i.e. use the example to understand how the different **ggplot2** functions are related to the grammar outlined above.
Our goal is to produce a scatter plot. The scatter plot is one of the most commonly used visualisation tools in the EDA toolbox. It’s designed to show how one numeric variable is related to another. A scatter plot uses horizontal and vertical axes (the ‘x’ and ‘y’ axes) to visualise pairs of related observations as a series of points in two dimensions.
We’ll use the `storms` data from the **nasaweather** package to construct the scatter plot. The questions we want to explore are: 1\) what is the relationship between wind speed (`wind`) and atmospheric pressure (`pressure`); 2\) and how does this vary among (`year`) and within (`seasday`) years? That is, we want to investigate how wind speed depends on atmospheric pressure, and how this relationship varies over time.
### 18\.2\.1 Making a start
To begin working with a graphical object we have to first set up a basic skeleton to build on. This is the job of the `ggplot` function. We can build an empty object by using `ggplot` without any arguments:
```
plt <- ggplot()
summary(plt)
```
```
## data: [x]
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
```
Here’s what just happened: we constructed the skeleton object, assigned it to a variable called `plt`, and then used the `summary` function to inspect the result. When first learning about **ggplot2**, it’s a good idea to use `summary` on various objects to get an idea of their structure. It’s quite ‘verbose’, but the important parts of the output are near the top, before the `faceting:` part. In this case, the ‘important part’ is basically empty. This tells us that there are no data, aesthetic mapping, layers, etc associated with `plt`. All we did was set up an empty object.
#### **ggplot2** vs. `ggplot`
Notice that while the package is called **ggplot2**, the actual function that does the work of setting up the skeleton graphical object is called `ggplot`. Try not to mix them up—this is a common source of errors.
How can we improve on this? We should add a default data set, which we do by passing the name of a data frame or **dplyr** tibble to `ggplot`. Let’s try this with `storms`:
```
plt <- ggplot(storms)
summary(plt)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
```
Notice that when we summarise the resulting object this time we see that the variables inside `storms` (`name`, `year`, `month`, etc) now comprise the data inside our `plt` object.
The next step is to add a default aesthetic mapping to our graphical object. Remember, these describe how variables in the data are mapped to the aesthetic properties of the layer(s). One way to think about aesthetic mappings is that they define what kind of relationships our plot will describe. Since we’re making a scatter plot we need to define mappings for positions on the ‘x’ and ‘y’ axes. We want to investigate how wind speed depends on atmospheric pressure, so we need to associate `wind` with the y axis and `pressure` with the x axis.
We define an aesthetic mapping with the `aes` function (short for ‘aesthetic’). One way to do this is as follows:
```
plt <- plt + aes(x = pressure, y = wind)
```
This little snippet of R code may look quite strange at first glance. There are a couple things to take away from this:
1. We ‘add’ the aesthetic mapping to the `plt` object using the `+` operator. This has nothing to do with arithmetic. The **ggplot2** package uses some clever programming tricks to redefine the way `+` works with its objects so that it can be used to combine them. This is nice because it makes building up a plot from the components of the grammar very natural.
2. The second thing to notice is that an aesthetic mapping is defined by one or more name\-value pairs, specified as arguments of `aes`. The names on the left hand side of each `=` refer to the properties of our graphical object (e.g. the ‘x’ and ‘y’ positions). The values on the right hand side refer to variable names in the data set that we want to associate with these properties.
Notice that we overwrote the original `plt` object with the updated version using the assignment operator. We could have created a distinct object, but there’s usually no advantage in doing this. Once again, we should inspect the result using `summary`:
```
summary(plt)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## mapping: x = ~pressure, y = ~wind
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
```
As hoped, the data (`data:`) from the original `plt` are still there, but now we can also see that two default mappings (`mapping:`) have been defined for the x and y axis positions. We have successfully used the `ggplot` and `aes` functions to set up a graphical object with both default data and aesthetic mappings. Any layers that we now add will use these unless we choose to override them by specifying different options.
In order to produce a plottable version of `plt` we now need to specify a layer. This will tell **ggplot2** how to visualise the data. Remember, each layer has five different components: data, aesthetic mappings, a geom, a stat and a position adjustment. Since we’ve already set up the default data and aesthetic mappings, there’s no need to define these again—**ggplot2** will use the defaults if we leave them out of the definition. This leaves the geom, stat and position adjustment.
What kind of geom do we need? A scatter plot allows us to explore a relationship as a series of *points*. We need to add a layer that uses the *point* geom. What about the stat and position? These are difficult to explain (and understand) without drilling down into the details of how **ggplot2** works. The important insight is that both the stat and the position adjustment components change our data in some way before plotting it. If we want to avoid having **ggplot2** do anything to our data, the key word is ‘identity’. We use this as the value when we want **ggplot2** to plot our data ‘as is’.
We’re going to examine the easy way to add a layer in a moment. However, we’ll start with a long\-winded approach first, because this reveals exactly what happens whenever we build a **ggplot2** object. The general function for adding a layer is simply called `layer`. Here’s how it works in its most basic usage:
```
plt <- plt + layer(geom = "point", stat = "identity", position = "identity")
```
All we did here was take the `plt` object, add a layer to it with the `layer` function, and then overwrite the old version of `plt`. Again, we add the new component using the `+` symbol. We passed three arguments to the `layer` function to…
1. define the **geom**: the name of this argument was `geom` and the value assigned to it was `"point"`.
2. define the **stat**: the name of this argument was `stat` and the value assigned to it was `"identity"`.
3. define the **position adjustment** : the name of this argument was `position` and the value assigned to it was `"identity"`.
Let’s review the structure of the resulting graphical object one last time to see what we’ve achieved:
```
summary(plt)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## mapping: x = ~pressure, y = ~wind
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
## -----------------------------------
## geom_point: na.rm = FALSE
## stat_identity: na.rm = FALSE
## position_identity
```
The text above the `-----` line is the same as before. It summarises the default data and the aesthetic mapping. The text below this summarises the layer we just added. It tells us that this layer will use the points geom (`geom_point`), the identity stat (`stat_identity`), and the identity position adjustment (`position_identity`).
Now `plt` has everything it needs to actually render a figure. How do we do this? We just ‘print’ the object:
```
print(plt)
```
That’s it (finally)! We have produced a scatter plot showing how wind speed depends on atmospheric pressure. This clearly shows that higher wind speeds are associated with lower pressure systems. That wasn’t really why we made this plot though—we wanted to see how the **ggplot2** functions are related to its grammar. Here’s a quick summary of what we did:
```
# step 1. set up the skeleton object with a default data set
plt <- ggplot(storms)
# step 2. add the default aesthetic mappings
plt <- plt + aes(x = pressure, y = wind)
# step 3. specify the layer we want to use
plt <- plt + layer(geom = "point", stat = "identity", position = "identity")
# step 4. render the plot
print(plt)
```
#### Don’t use this workflow!
It’s possible to construct any **ggplot2** visualisation using the workflow outlined in this subsection. It isn’t recommended though. The workflow adopted here was selected to reveal how the grammar works, rather than for its efficiency. A more concise, standard approach to using **ggplot2** is outlined next. Use this for real world analysis.
### 18\.2\.2 The standard way of using **ggplot2**
The **ggplot2** package is quite flexible, which means we can arrive at a particular visualisation in a number of different ways. To keep life simple, we’re going to adopt one consistent work flow for the remainder of this book. This won’t reveal the full array of **ggplot2** tricks, but it is sufficient to enable us to construct a wide range of standard visualisations. To see it in action, we’ll make exactly the same wind speed vs. atmospheric pressure scatter plot again, only this time, we’ll use a few short cuts.
We began building our **ggplot2** object by setting up a skeleton object with a default data set and then added the default aesthetic mappings. There is a more concise way to achieve the same result:
```
plt <- ggplot(storms, aes(x = pressure, y = wind))
summary(plt)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## mapping: x = ~pressure, y = ~wind
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
```
In this form the `aes` function is used inside `ggplot`, i.e. it supplies a second argument to `ggplot`. This approach is the most commonly used approach for setting up a graphical object with default data and aesthetic mappings. We will use it from now on.
The next step is to add a layer. We just saw that the `layer` function can be used to construct a layer from its component parts. However, **ggplot2** provides a number of functions that add layers according to the type of geom they use. They look like this: `geom_NAME`, where `NAME` stands for the name of the different possible geoms. An alternative to the last line is therefore:
```
plt <- plt + geom_point()
```
We didn’t have to specify the stat or the position adjustment components of the layer because the `geom_NAME` functions all choose sensible defaults. These can be overridden if needed, but 90% of the time there’s no need to do so. This way of defining a layer is much simpler and less error prone than the `layer` method. We will use the `geom_NAME` method from now on.
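Just to show what overriding one of those defaults might look like, here is a sketch: the point geom normally uses the ‘identity’ position adjustment, but we can ask for a different one inside `geom_point` if a plot calls for it:
```
# a sketch: jitter the points instead of using the default 'identity'
# position adjustment (handy when many points sit on top of one another)
ggplot(storms, aes(x = pressure, y = wind)) +
  geom_point(position = "jitter")
```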
There’s one last trick we need to learn to use **ggplot2** efficiently. We’ve been building a plot object in several steps, giving the intermediates the name `plt`, and then manually printing the object to display it when it’s ready. This is useful if we want to make different versions of the same plot. However, we very often just want to build the plot and display it in one go. This is done by combining everything with `+` and printing the resulting object directly:
```
ggplot(storms, aes(x = pressure, y = wind)) + geom_point()
```
That’s it! As we have seen, there’s a lot going on underneath this, but this small snippet of R code contains everything **ggplot2** needs to construct and display the simple scatter plot of wind speed against atmospheric pressure.
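One handy follow\-on: once a plot displays the way we want it to, the `ggsave` function from **ggplot2** will write the most recently displayed plot to a file. The file name below is just an example; the extension determines the graphics format:
```
# save the last plot that was displayed; width and height are in inches
ggsave("wind-vs-pressure.png", width = 6, height = 4)
```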
18\.3 Increasing the information density
----------------------------------------
We introduced the example by saying that we were interested in the relationship between wind speed, atmospheric pressure, observation year, and the time of year. So far we’ve only examined the first two. We’ll finish this chapter by exploring the two main approaches for increasing the information in a visualisation, and use them to investigate the relationship with the remaining two variables.
### 18\.3\.1 Using additional aesthetics
How can we learn about the relationship of these two variables to time of year (`seasday`)? We need to include the information in the `seasday` variable in our scatter plot somehow. There are different ways we might do this, but the basic trick is to map the `seasday` variable to a new aesthetic. We need to change the way we are using `aes`. One option is to map `seasday` to the point colours so that the colour of the points corresponds to the time of year:
```
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
geom_point()
```
Notice that **ggplot2** automatically adds a legend to the figure to help us interpret it. A colour scale is not much use without a legend. Points are now coloured according to whether they are associated with early (dark blue) or late (light blue) observations. There’s a hint that lower intensity storms tend to occur at the beginning and end of the storm season, but it’s hard to be sure because there is so much overplotting—i.e. many points are in the same place.
We could no doubt improve on this visualisation, but nonetheless, it illustrates the important concept: we can add information to a plot by mapping additional variables to new aesthetics. There is nothing to stop us using different aesthetics if we wanted to squeeze even more information into this plot. For example, we could map the storm type variable (`type`) to the point shape, using `shape = type` inside `aes`. However, this graph is already a bit too crowded, so this might not be too helpful in this instance.
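For completeness, here is a sketch of what that extra mapping would look like (we are not recommending it here, for the crowding reasons just mentioned):
```
# map storm type to point shape as well as season day to colour
ggplot(storms, aes(x = pressure, y = wind, colour = seasday, shape = type)) +
  geom_point()
```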
### 18\.3\.2 Using facets
What if we want to see how the wind speed and pressure relationship might vary among years? One way to do this is to make a separate scatter plot for each year. We don’t have to do this manually though. We can use the faceting facility of **ggplot2** instead. This allows us to break up our data set up into subsets according to the unique values of one or two variables and produce a separate plot for each subset, but without having to write much R code.
Faceting operates on the whole figure so we can’t apply it by changing the properties of a layer. Instead, we have to use a new function to add the faceting information. Here’s how we split things up by year using the `facet_wrap` function:
```
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
geom_point() +
facet_wrap(~ year, nrow = 2, ncol = 3)
```
The first argument of `facet_wrap` (`~ year`) says to split up the data set according to the values of `year`. The `nrow` and `ncol` arguments just specify how to split the panels across rows and columns of the resulting plot. Notice that the panels share the same scales for the ‘x’ and ‘y’ axes. This makes it easy to compare information.
The plot indicates that the wind speed – pressure relationship is more or less invariant across years, and that perhaps 1997 and 2000 were not such bad storm years compared to the others. This isn’t really surprising. The occurrence of tropical storms is somewhat stochastic, but the laws of atmospheric physics don’t change from one year to the next!
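As an aside, the shared axes are just the default behaviour of `facet_wrap`. It has a `scales` argument that can relax this if a plot ever calls for per\-panel axes, though that would make the panels harder to compare here:
```
# per-panel ('free') axis scales -- usually best avoided when the goal
# is to compare panels, but occasionally useful
ggplot(storms, aes(x = pressure, y = wind)) +
  geom_point() +
  facet_wrap(~ year, nrow = 2, ncol = 3, scales = "free")
```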
#### Don’t forget the `~`
We have to include the `~` symbol at the beginning of the `~ year` part of the `facet_wrap` specification. Trust us, the faceting won’t work without it. The `~` specifies something called a ‘formula’. The main job of a formula is to specify relationships among variables. These are usually used in R’s statistical modelling functions (not covered in this book), but they occasionally pop up in other places.
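If you are curious, a formula is just an ordinary R object that stores an unevaluated expression; typing one at the console makes this easy to see, because `class(~ year)` simply returns `"formula"`:
```
# a one-sided formula is just an R object of class 'formula'
f <- ~ year
class(f)
```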
18\.1 The anatomy of ggplot2
----------------------------
The easiest way to learn **ggplot2** is by seeing it in action. Before we dive in it’s worth surveying the essential features of the **ggplot2** ‘grammar’. Most of this is fairly abstract and won’t make sense on first reading. This is fine. Abstract ideas like ‘aesthetics’ and ‘geoms’ will start to make sense as we work through a range of different examples in the next few chapters.
**ggplot2** is designed to reflect Wilkinson’s grammar of graphics. The **ggplot2** version of this grammar revolves around the idea of **layers**. The underlying idea is that we construct a visualisation in a structured manner by defining one or more of these layers, which together with a few other components define a complete **ggplot2** object. Each layer may have its own data or it may share its data with another layer, and each layer displays its data in a specific way. The resulting **ggplot2** object is defined by a combination of:
1. a default data set along with a set of mappings from variables to aesthetics,
2. one or more layers, each comprising a number of components,
3. one scale for each aesthetic mapping,
4. a coordinate system for the plot,
5. a faceting specification that tells **ggplot2** how to define a multi\-panel plot.
We’ll skim over each of these in turn before moving onto the business of actually using **ggplot2**.
### 18\.1\.1 Layers
Each layer in a **ggplot2** plot may have five different components, though we don’t necessarily have to specify all of these:
* The **data**. At a minimum, every plot needs some data. Unlike base R graphics, **ggplot2** always accepts data in one format, an R data frame (or tibble). Each layer can be associated with it’s own data set. However, we don’t have explicitly add data to each layer. When we choose not to specify the data set for a layer **ggplot2** will try use the default data if it has been defined.
* A set of **aesthetic mappings**. These describe how variables in the data are associated with the aesthetic properties of the layer. Commonly encountered aesthetic properties include position (x and y locations), colour, and size of the objects on a plot. Each layer can be associated with it’s own unique aesthetic mappings. When we choose not to specify these for a layer **ggplot2** will use the defaults if they were defined.
* A **geometric object**, called a ‘geom’. The geom tells **ggplot2** how to actually represent the layer—they refer to the objects we can actually see on a plot, such as points, lines or bars. Each geom only works with a particular of aesthetic mappings. We always have to define at least one geom when using **ggplot2**.
* A **stat**. These take the raw data in the data frame and transform in some useful way. A stat allows us to produce summaries of our raw data. We won’t use them in this book because we can produce the same kinds of figures by first processing our data with **dplyr**. Nonetheless, the stat facility is one of the things that makes **ggplot2** particularly useful for exploratory data analysis.
* A **position adjustment**. These apply small tweaks to the position of layer elements. These are most often used in plots like bar plots where we need to define how the bars are plotted, but they can occasionally be useful in other kinds of plots.
### 18\.1\.2 Scales
The scale part of a **ggplot2** object controls how the data is mapped to the aesthetic attributes. A scale takes the data and converts it into something we can perceive, such as an x/y location, or the colour and size of points in a plot. A scale must be defined for every aesthetic in a plot. It doesn’t make sense to define an aesthetic mapping without a scale because there is no way for **ggplot2** to know how to go from the data to the aesthetics without one.
Scales operate in the same way on the data in a plot. If we include several layers they all have to use the same scale for the shared aesthetic mappings. This behaviour is sensible because it ensures that the information that is displayed is consistent.
If we choose not to explicitly define a scale for an aesthetic **ggplot2** will use a default. Very often this will be a ‘sensible’ choice, which means we can get quite a long way with **ggplot2** without ever really understanding scales. We won’t worry too much about them, though we will take a brief look at a few of the more common options.
### 18\.1\.3 Coordinate system
A **ggplot2** coordinate system takes the position of objects (e.g. points and lines) and maps them onto the 2d plane that a plot lives on. Most people are already very familiar with the most common coordinate system (even if they didn’t realise it). The Cartesian coordinate system is the one we’ve all been using ever since we first constructed a graph with paper and pencil at school. All the most common statistical plots use this coordinate system so we won’t consider any others in this book.
### 18\.1\.4 Faceting
The idea behind faceting is very simple. Faceting allows us to break a data set up into subsets according to the unique values of one or two variables, and then produce a separate plot for each subset. The result is a multipanel plot, where each panel shares the same layers, scales, etc. The data is the only thing that varies from panel to panel. The result is a kind of ‘Trellis plot’, similar to those produced by the **lattice** package. Faceting is a very powerful tool that allows us to slice up our data in different ways and really understand the relationship between different variables. Together with aesthetic mappings, faceting allows us to summarise relationships among 4\-6 variables in a single plot.
### 18\.1\.1 Layers
Each layer in a **ggplot2** plot may have five different components, though we don’t necessarily have to specify all of these:
* The **data**. At a minimum, every plot needs some data. Unlike base R graphics, **ggplot2** always accepts data in one format, an R data frame (or tibble). Each layer can be associated with it’s own data set. However, we don’t have explicitly add data to each layer. When we choose not to specify the data set for a layer **ggplot2** will try use the default data if it has been defined.
* A set of **aesthetic mappings**. These describe how variables in the data are associated with the aesthetic properties of the layer. Commonly encountered aesthetic properties include position (x and y locations), colour, and size of the objects on a plot. Each layer can be associated with it’s own unique aesthetic mappings. When we choose not to specify these for a layer **ggplot2** will use the defaults if they were defined.
* A **geometric object**, called a ‘geom’. The geom tells **ggplot2** how to actually represent the layer—they refer to the objects we can actually see on a plot, such as points, lines or bars. Each geom only works with a particular of aesthetic mappings. We always have to define at least one geom when using **ggplot2**.
* A **stat**. These take the raw data in the data frame and transform in some useful way. A stat allows us to produce summaries of our raw data. We won’t use them in this book because we can produce the same kinds of figures by first processing our data with **dplyr**. Nonetheless, the stat facility is one of the things that makes **ggplot2** particularly useful for exploratory data analysis.
* A **position adjustment**. These apply small tweaks to the position of layer elements. These are most often used in plots like bar plots where we need to define how the bars are plotted, but they can occasionally be useful in other kinds of plots.
### 18\.1\.2 Scales
The scale part of a **ggplot2** object controls how the data is mapped to the aesthetic attributes. A scale takes the data and converts it into something we can perceive, such as an x/y location, or the colour and size of points in a plot. A scale must be defined for every aesthetic in a plot. It doesn’t make sense to define an aesthetic mapping without a scale because there is no way for **ggplot2** to know how to go from the data to the aesthetics without one.
Scales operate in the same way on the data in a plot. If we include several layers they all have to use the same scale for the shared aesthetic mappings. This behaviour is sensible because it ensures that the information that is displayed is consistent.
If we choose not to explicitly define a scale for an aesthetic **ggplot2** will use a default. Very often this will be a ‘sensible’ choice, which means we can get quite a long way with **ggplot2** without ever really understanding scales. We won’t worry too much about them, though we will take a brief look at a few of the more common options.
### 18\.1\.3 Coordinate system
A **ggplot2** coordinate system takes the position of objects (e.g. points and lines) and maps them onto the 2d plane that a plot lives on. Most people are already very familiar with the most common coordinate system (even if they didn’t realise it). The Cartesian coordinate system is the one we’ve all been using ever since we first constructed a graph with paper and pencil at school. All the most common statistical plots use this coordinate system so we won’t consider any others in this book.
### 18\.1\.4 Faceting
The idea behind faceting is very simple. Faceting allows us to break a data set up into subsets according to the unique values of one or two variables, and then produce a separate plot for each subset. The result is a multipanel plot, where each panel shares the same layers, scales, etc. The data is the only thing that varies from panel to panel. The result is a kind of ‘Trellis plot’, similar to those produced by the **lattice** package. Faceting is a very powerful tool that allows us to slice up our data in different ways and really understand the relationship between different variables. Together with aesthetic mappings, faceting allows us to summarise relationships among 4\-6 variables in a single plot.
18\.2 A quick introduction to ggplot2
-------------------------------------
Now that we’ve briefly reviewed the **ggplot2** grammar we can start learning how to use it. The package uses this grammar to define a sort of mini\-language within R, using functions to specify components like aesthetics and geoms, which are combined with data to define a **ggplot2** graphics object. Once we’ve constructed a suitable object we can use it to display our graphic on the computer screen or save in a particular graphics format (e.g. PDF, PNG, JPEG, and so on).
Rather than orientating this introduction around each of the key functions we’re going to develop a simple example to help us see how **ggplot2** works. Many of the key ideas about how **ggplot2** works can be taken away from this one example, so it’s definitely worth investing the time to understand it, i.e. use the example understand how the different **ggplot2** functions are related to the grammar outlined above.
Our goal is to produce a scatter plot. The scatter plot is one of the most commonly used visualisation tools in the EDA toolbox. It’s designed to show how one numeric variable is related to another. A scatter plot uses horizontal and vertical axes (the ‘x’ and ‘y’ axes) to visualise pairs of related observations as a series of points in two dimensions.
We’ll use the `storms` data from the **nasaweather** package to construct the scatter plot. The questions we want to explore are: 1\) what is the relationship between wind speed (`wind`) and atmospheric pressure (`pressure`); 2\) and how does this vary among (`year`) and within (`seasday`) years? That is, we want to investigate how wind speed depends on atmospheric pressure, and how this relationship varies over time.
### 18\.2\.1 Making a start
To begin working with a graphical object we have to first set up a basic skeleton to build on. This is the job of the `ggplot` function. We can build an empty object by using `ggplot` without any arguments:
```
plt <- ggplot()
summary(plt)
```
```
## data: [x]
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
```
Here’s what just happened: we constructed the skeleton object, assigned it to a variable called `plt`, and then used the `summary` function to inspect the result. When first learning about **ggplot2**, it’s a good idea to use `summary` on various objects to get an idea of their structure. It’s quite ‘verbose’, but the important parts of the output are near the top, before the `faceting:` part. In this case, the ‘important part’ is basically empty. This tells us that there are no data, aesthetic mapping, layers, etc associated with `plt`. All we did was set up an empty object.
#### **ggplot2** vs. `ggplot`
Notice that the while package is called **ggplot2**, the actual function that does the work of setting up the skeleton graphical object is called `ggplot`. Try not to mix them up—this is a common source of errors.
How can we improve on this? We should add a default data set, which we do by passing the name of a data frame or **dplyr** tibble to `ggplot`. Let’s try this with `storms`:
```
plt <- ggplot(storms)
summary(plt)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
```
Notice that when we summarise the resulting object this time we see that the variables inside `storms` (`name`, `year`, `month`, etc) now comprise the data inside our `plt` object.
The next step is to add a default aesthetic mapping to our graphical object. Remember, these describe how variables in the data are mapped to the aesthetic properties of the layer(s). One way to think about aesthetic mappings is that they define what kind of relationships our plot will describe. Since we’re making a scatter plot we need to define mappings for positions on the ‘x’ and ‘y’ axes. We want to investigate how wind speed depends on atmospheric pressure, so we need to associate `wind` with the y axis and `pressure` with the x axis.
We define an aesthetic mapping with the `aes` function (‘aesthetic mapping’). One way to do this is like this:
```
plt <- plt + aes(x = pressure, y = wind)
```
This little snippet of R code may look quite strange at first glance. There are a couple things to take away from this:
1. We ‘add’ the aesthetic mapping to the `plt` object using the `+` operator. This has nothing to do with arithmetic. The **ggplot2** package uses some clever programming tricks to redefine the way `+` works with its objects so that it can be used to combine them. This is nice because it makes building up a plot from the components of the grammar very natural.
2. The second thing to notice is that an aesthetic mapping is defined by one or more name\-value pairs, specified as arguments of `aes`. The names on the left hand side of each `=` refer to the properties of our graphical object (e.g. the ‘x’ and ‘y’ positions). The values on right hand side refer to variable names in the data set that we want to associate with these properties.
Notice that we overwrote the original `plt` object with the updated version using the assignment operator. We could have created a distinct object, but there’s usually no advantage to do this. Once again, we should inspect the result using `summary`:
```
summary(plt)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## mapping: x = ~pressure, y = ~wind
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
```
As hoped, the data (`data:`) from the original `plt` are still there, but now we can also see that two default mappings (`mapping:`) have been defined for the x and y axis positions. We have successfully used the `ggplot` and `aes` functions to set up a graphical object with both default data and aesthetic mappings. Any layers that we now add will use these unless we choose to override them by specifying different options.
In order to produce a plotable version of `plt` we now need to specify a layer. This will tell **ggplot2** how to visualise the data. Remember, each layer has five different components: data, aesthetic mappings, a geom, a stat and a position adjustment. Since we’ve already set up the default data and aesthetic mappings, there’s no need to define these again—**ggplot2** will use the defaults if we leave them out of the definition. This leaves the geom, stat and position adjustment.
What kind of geom do we need? A scatter plots allow us to explore a relationship as a series of *points*. We need to add a layer that uses the *point* geom. What about the stat and position? These are difficult to explain (and understand) without drilling down into the details of how **ggplot2** works. The important insight is that both the stat and the position adjustment components change our data in some way before plotting it. If we want to avoid having **ggplot2** do anything to our data, the key word is ‘identity’. We use this as the value when we want **ggplot2** to plot our data ‘as is’.
We’re going examine the easy way to add a layer in a moment. However we’ll start with a long\-winded approach first, because this reveals exactly what happens whenever we build a **ggplot2** object. The general function for adding a layer is simply called `layer`. Here’s how it works in its most basic usage:
```
plt <- plt + layer(geom = "point", stat = "identity", position = "identity")
```
All we did here was take the `plt` object, add a layer to it with the `layer` function, and then overwrite the old version of `plt`. Again, we add the new component using the `+` symbol. We passed three arguments to the `layer` function to…
1. define the **geom**: the name of this argument was `geom` and the value assigned to it was `"point"`.
2. define the **stat**: the name of this argument was `stat` and the value assigned to it was `"identity"`.
3. define the **position adjustment** : the name of this argument was `position` and the value assigned to it was `"identity"`.
Let’s review the structure of the resulting graphical object one last time to see what we’ve achieved:
```
summary(plt)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## mapping: x = ~pressure, y = ~wind
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
## -----------------------------------
## geom_point: na.rm = FALSE
## stat_identity: na.rm = FALSE
## position_identity
```
The text above the `-----` line is the same as before. It summarises the default data and the aesthetic mapping. The text below this summarises the layer we just added. It tells us that this layer will use the points geom (`geom_point`), the identity stat (`stat_identity`), and the identity position adjustment (`position_identity`).
Now `plt` has everything it needs to actually render a figure. How do we do this? We just ‘print’ the object:
```
print(plt)
```
That’s it (finally)! We have used produced a scatter plot showing how wind speed depends of atmospheric pressure. This clearly shows that higher wind speeds are associated with lower pressure systems. That wasn’t really why we made this plot though—we wanted to see how the **ggplot2** functions are related to its grammar. Here’s a quick summary of what we did:
```
# step 1. set up the skeleton object with a default data set
plt <- ggplot(storms)
# step 2. add the default aesthetic mappings
plt <- plt + aes(x = pressure, y = wind)
# step 3. specify the layer we want to use
plt <- plt + layer(geom = "point", stat = "identity", position = "identity")
# step 4. render the plot
print(plt)
```
#### Don’t use this workflow!
It’s possible to construct any **ggplot2** visualisation using the workflow outlined in this subsection. It isn’t recommended though. The workflow adopted here was selected to reveal how the grammar works, rather than for its efficiency. A more concise, standard approach to using **ggplot2** is outlined next. Use this for real world analysis.
### 18\.2\.2 The standard way of using **ggplot2**
The **ggplot2** package is quite flexible, which means we can arrive at a particular visualisation in a number of different ways. To keep life simple, we’re going to adopt one consistent work flow for the remainder of this book. This won’t reveal the full array of **ggplot2** tricks, but it is sufficient to enable us to construct a wide range of standard visualisations. To see it in action, we’ll make exactly the same wind speed vs. atmospheric pressure scatter plot again, only this time, we’ll use a few short cuts.
We began building our **ggplot2** object by setting up a skeleton object with a default data set and then added the default aesthetic mappings. There is a more concise way to achieve the same result:
```
plt <- ggplot(storms, aes(x = pressure, y = wind))
summary(plt)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## mapping: x = ~pressure, y = ~wind
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
```
In this form the `aes` function is used inside `ggplot`, i.e. it supplies a second argument to `ggplot`. This approach is the most commonly used approach for setting up a graphical object with default data and aesthetic mappings. We will use it from now on.
The next step is to add a layer. We just saw that the `layer` function can be used to construct a layer from its component parts. However, `ggplot` provides a number of functions that add layers according to the type of geom they use. They look like this: `geom_NAME`, where `NAME` stands for the name of the different possible geoms. An alternative to the last line is therefore:
```
plt <- plt + geom_point()
```
We didn’t have to specify the stat or the position adjustment components of the layer because the `geom_NAME` functions all choose sensible defaults, though these can be overridden if needed, but 90% of the time there’s no need to do this. This way of defining a layer is much simpler and less error prone than the `layer` method. We will use the `geom_NAME` method from now on.
There’s one last trick we need to learn to use **ggplot2** efficiently. We’ve been building a plot object in several steps, giving the intermediates the name `plt`, and then manually printing the object to display it when it’s ready. This is useful if we want to make different versions of the same plot. However, we very often just want to build the plot and display it in one go. This is done by combining everything with `+` and printing the resulting object directly:
```
ggplot(storms, aes(x = pressure, y = wind)) + geom_point()
```
That’s it! As we have seen, there’s a lot going on underneath this, but this small snippet of R code contains everything **ggplot2** needs to construct and display the simple scatter plot of wind speed against atmospheric pressure.
### 18\.2\.1 Making a start
To begin working with a graphical object we have to first set up a basic skeleton to build on. This is the job of the `ggplot` function. We can build an empty object by using `ggplot` without any arguments:
```
plt <- ggplot()
summary(plt)
```
```
## data: [x]
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
```
Here’s what just happened: we constructed the skeleton object, assigned it to a variable called `plt`, and then used the `summary` function to inspect the result. When first learning about **ggplot2**, it’s a good idea to use `summary` on various objects to get an idea of their structure. It’s quite ‘verbose’, but the important parts of the output are near the top, before the `faceting:` part. In this case, the ‘important part’ is basically empty. This tells us that there are no data, aesthetic mapping, layers, etc associated with `plt`. All we did was set up an empty object.
#### **ggplot2** vs. `ggplot`
Notice that the while package is called **ggplot2**, the actual function that does the work of setting up the skeleton graphical object is called `ggplot`. Try not to mix them up—this is a common source of errors.
How can we improve on this? We should add a default data set, which we do by passing the name of a data frame or **dplyr** tibble to `ggplot`. Let’s try this with `storms`:
```
plt <- ggplot(storms)
summary(plt)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
```
Notice that when we summarise the resulting object this time we see that the variables inside `storms` (`name`, `year`, `month`, etc) now comprise the data inside our `plt` object.
The next step is to add a default aesthetic mapping to our graphical object. Remember, these describe how variables in the data are mapped to the aesthetic properties of the layer(s). One way to think about aesthetic mappings is that they define what kind of relationships our plot will describe. Since we’re making a scatter plot we need to define mappings for positions on the ‘x’ and ‘y’ axes. We want to investigate how wind speed depends on atmospheric pressure, so we need to associate `wind` with the y axis and `pressure` with the x axis.
We define an aesthetic mapping with the `aes` function (‘aesthetic mapping’). One way to do this is like this:
```
plt <- plt + aes(x = pressure, y = wind)
```
This little snippet of R code may look quite strange at first glance. There are a couple things to take away from this:
1. We ‘add’ the aesthetic mapping to the `plt` object using the `+` operator. This has nothing to do with arithmetic. The **ggplot2** package uses some clever programming tricks to redefine the way `+` works with its objects so that it can be used to combine them. This is nice because it makes building up a plot from the components of the grammar very natural.
2. The second thing to notice is that an aesthetic mapping is defined by one or more name\-value pairs, specified as arguments of `aes`. The names on the left hand side of each `=` refer to the properties of our graphical object (e.g. the ‘x’ and ‘y’ positions). The values on right hand side refer to variable names in the data set that we want to associate with these properties.
Notice that we overwrote the original `plt` object with the updated version using the assignment operator. We could have created a distinct object, but there’s usually no advantage to do this. Once again, we should inspect the result using `summary`:
```
summary(plt)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## mapping: x = ~pressure, y = ~wind
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
```
As hoped, the data (`data:`) from the original `plt` are still there, but now we can also see that two default mappings (`mapping:`) have been defined for the x and y axis positions. We have successfully used the `ggplot` and `aes` functions to set up a graphical object with both default data and aesthetic mappings. Any layers that we now add will use these unless we choose to override them by specifying different options.
In order to produce a plottable version of `plt` we now need to specify a layer. This will tell **ggplot2** how to visualise the data. Remember, each layer has five different components: data, aesthetic mappings, a geom, a stat and a position adjustment. Since we’ve already set up the default data and aesthetic mappings, there’s no need to define these again—**ggplot2** will use the defaults if we leave them out of the definition. This leaves the geom, stat and position adjustment.
What kind of geom do we need? A scatter plot allows us to explore a relationship as a series of *points*, so we need to add a layer that uses the *point* geom. What about the stat and position? These are difficult to explain (and understand) without drilling down into the details of how **ggplot2** works. The important insight is that both the stat and the position adjustment components change our data in some way before plotting it. If we want to avoid having **ggplot2** do anything to our data, the key word is ‘identity’. We use this as the value when we want **ggplot2** to plot our data ‘as is’.
We’re going to examine the easy way to add a layer in a moment. However, we’ll start with a long\-winded approach first, because this reveals exactly what happens whenever we build a **ggplot2** object. The general function for adding a layer is simply called `layer`. Here’s how it works in its most basic usage:
```
plt <- plt + layer(geom = "point", stat = "identity", position = "identity")
```
All we did here was take the `plt` object, add a layer to it with the `layer` function, and then overwrite the old version of `plt`. Again, we add the new component using the `+` symbol. We passed three arguments to the `layer` function to…
1. define the **geom**: the name of this argument was `geom` and the value assigned to it was `"point"`.
2. define the **stat**: the name of this argument was `stat` and the value assigned to it was `"identity"`.
3. define the **position adjustment** : the name of this argument was `position` and the value assigned to it was `"identity"`.
Let’s review the structure of the resulting graphical object one last time to see what we’ve achieved:
```
summary(plt)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## mapping: x = ~pressure, y = ~wind
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
## -----------------------------------
## geom_point: na.rm = FALSE
## stat_identity: na.rm = FALSE
## position_identity
```
The text above the `-----` line is the same as before. It summarises the default data and the aesthetic mapping. The text below this summarises the layer we just added. It tells us that this layer will use the points geom (`geom_point`), the identity stat (`stat_identity`), and the identity position adjustment (`position_identity`).
Now `plt` has everything it needs to actually render a figure. How do we do this? We just ‘print’ the object:
```
print(plt)
```
That’s it (finally)! We have produced a scatter plot showing how wind speed depends on atmospheric pressure. This clearly shows that higher wind speeds are associated with lower pressure systems. That wasn’t really why we made this plot though—we wanted to see how the **ggplot2** functions are related to its grammar. Here’s a quick summary of what we did:
```
# step 1. set up the skeleton object with a default data set
plt <- ggplot(storms)
# step 2. add the default aesthetic mappings
plt <- plt + aes(x = pressure, y = wind)
# step 3. specify the layer we want to use
plt <- plt + layer(geom = "point", stat = "identity", position = "identity")
# step 4. render the plot
print(plt)
```
#### Don’t use this workflow!
It’s possible to construct any **ggplot2** visualisation using the workflow outlined in this subsection. It isn’t recommended though. The workflow adopted here was selected to reveal how the grammar works, rather than for its efficiency. A more concise, standard approach to using **ggplot2** is outlined next. Use this for real world analysis.
### 18\.2\.2 The standard way of using **ggplot2**
The **ggplot2** package is quite flexible, which means we can arrive at a particular visualisation in a number of different ways. To keep life simple, we’re going to adopt one consistent work flow for the remainder of this book. This won’t reveal the full array of **ggplot2** tricks, but it is sufficient to enable us to construct a wide range of standard visualisations. To see it in action, we’ll make exactly the same wind speed vs. atmospheric pressure scatter plot again, only this time, we’ll use a few short cuts.
We began building our **ggplot2** object by setting up a skeleton object with a default data set and then added the default aesthetic mappings. There is a more concise way to achieve the same result:
```
plt <- ggplot(storms, aes(x = pressure, y = wind))
summary(plt)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## mapping: x = ~pressure, y = ~wind
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
```
In this form the `aes` function is used inside `ggplot`, i.e. it supplies a second argument to `ggplot`. This is the most common way of setting up a graphical object with default data and aesthetic mappings. We will use it from now on.
The next step is to add a layer. We just saw that the `layer` function can be used to construct a layer from its component parts. However, **ggplot2** provides a number of functions that add layers according to the type of geom they use. They look like this: `geom_NAME`, where `NAME` stands for the name of the different possible geoms. An alternative to the last line is therefore:
```
plt <- plt + geom_point()
```
We didn’t have to specify the stat or the position adjustment components of the layer because the `geom_NAME` functions all choose sensible defaults. These can be overridden if needed, but 90% of the time there’s no need to do so. This way of defining a layer is much simpler and less error prone than the `layer` method. We will use the `geom_NAME` method from now on.
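If we ever do want to override those defaults, we can pass the `stat` and `position` arguments straight to the `geom_NAME` function. As a quick sketch, the following spells out the defaults that `geom_point` already uses, so it builds exactly the same layer as a bare `geom_point()`:

```
# equivalent to geom_point(), because these are its default settings anyway
ggplot(storms, aes(x = pressure, y = wind)) +
  geom_point(stat = "identity", position = "identity")
```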
There’s one last trick we need to learn to use **ggplot2** efficiently. We’ve been building a plot object in several steps, giving the intermediates the name `plt`, and then manually printing the object to display it when it’s ready. This is useful if we want to make different versions of the same plot. However, we very often just want to build the plot and display it in one go. This is done by combining everything with `+` and printing the resulting object directly:
```
ggplot(storms, aes(x = pressure, y = wind)) + geom_point()
```
That’s it! As we have seen, there’s a lot going on underneath this, but this small snippet of R code contains everything **ggplot2** needs to construct and display the simple scatter plot of wind speed against atmospheric pressure.
18\.3 Increasing the information density
----------------------------------------
We introduced the example by saying that we were interested in the relationship between wind speed, atmospheric pressure, observation year, and the time of year. So far we’ve only examined the first two. We’ll finish this chapter by exploring the two main approaches for increasing the information density of a visualisation, using them to investigate the relationships with the remaining two variables.
### 18\.3\.1 Using additional aesthetics
How can we learn about the relationship of these two variables to the time of year (`seasday`)? We need to include the information in the `seasday` variable in our scatter plot somehow. There are different ways we might do this, but the basic trick is to map the `seasday` variable to a new aesthetic. We need to change the way we are using `aes`. One option is to map `seasday` to the point colours so that the colour of the points corresponds to the time of year:
```
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
geom_point()
```
Notice that **ggplot2** automatically adds a legend to the figure to help us interpret it. A colour scale is not much use without a legend. Points are now coloured according to whether they are associated with early (dark blue) or late (light blue) observations. There’s a hint that lower intensity storms tend to occur at the beginning and end of the storm season, but it’s hard to be sure because there is so much overplotting—i.e. many points are in the same place.
We could no doubt improve on this visualisation, but nonetheless it illustrates an important concept: we can add information to a plot by mapping additional variables to new aesthetics. There is nothing to stop us using different aesthetics if we wanted to squeeze even more information into this plot. For example, we could map the storm type variable (`type`) to the point shape, using `shape = type` inside `aes`. However, this graph is already a bit too crowded, so this might not be too helpful in this instance.
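For reference, that extra mapping would look something like the sketch below, although, as just noted, the result may well be too cluttered to be much use here:

```
# map storm type to point shape, in addition to the existing colour mapping
ggplot(storms, aes(x = pressure, y = wind, colour = seasday, shape = type)) +
  geom_point()
```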
### 18\.3\.2 Using facets
What if we want to see how the wind speed and pressure relationship might vary among years? One way to do this is to make a separate scatter plot for each year. We don’t have to do this manually though. We can use the faceting facility of **ggplot2** instead. This allows us to break our data set up into subsets according to the unique values of one or two variables and produce a separate plot for each subset, without having to write much R code.
Faceting operates on the whole figure so we can’t apply it by changing the properties of a layer. Instead, we have to use a new function to add the faceting information. Here’s how we split things up by year using the `facet_wrap` function:
```
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
geom_point() +
facet_wrap(~ year, nrow = 2, ncol = 3)
```
The first argument of `facet_wrap` (`~ year`) says to split up the data set according to the values of `year`. The `nrow` and `ncol` arguments just specify how to split the panels across rows and columns of the resulting plot. Notice that the panels share the same scales for the ‘x’ and ‘y’ axes. This makes it easy to compare information.
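Incidentally, the `nrow` and `ncol` arguments are optional. If we leave them out, **ggplot2** picks a panel layout for us, as in this sketch:

```
# let ggplot2 choose how to arrange the yearly panels
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
  geom_point() +
  facet_wrap(~ year)
```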
The plot indicates that the wind speed – pressure relationship is more or less invariant across years, and that perhaps 1997 and 2000 were not such bad storm years compared to the others. This isn’t really surprising. The occurrence of tropical storms is somewhat stochastic, but the laws of atmospheric physics don’t change from one year to the next!
#### Don’t forget the `~`
We have to include the `~` symbol at the beginning of the `~ year` part of the `facet_wrap` specification. Trust us, the faceting won’t work without it. The `~` specifies something called a ‘formula’. The main job of a formula is to specify relationships among variables. These are usually used in R’s statistical modelling functions (not covered in this book), but they occasionally pop up in other places.
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/customising-plots.html |
Chapter 19 Customising plots
============================
The default formatting used by **ggplot2** is generally fine for exploratory purposes. In fact, although they aren’t universally popular, the defaults are carefully chosen to ensure that the information in a plot is easy to discern. These choices are a little unconventional though. For example, published figures usually use a white background. For this reason, we often need to change the appearance of a plot once we’re ready to include it in a report.
Our aim in this chapter is to learn a little bit about the underlying logic of how to customise **ggplot2**. We aren’t going to attempt to cover the many different permutations. Instead, we’ll focus on the main principles underlying the different routes to customisation. We’ll build on these as we review a range of different visualisations in later chapters. Using the storms data once again, we’ll work on improving the simple scatter plot initially produced in the [Introduction to **ggplot2**](introduction-to-ggplot2.html#introduction-to-ggplot2) chapter:
```
# 1. make the storms data available
library(nasaweather)
# 2. plot wind speed against atmospheric pressure
ggplot(storms, aes(x = pressure, y = wind)) + geom_point()
```
19\.1 Working with layer specific geom properties
-------------------------------------------------
What do we do if we need to change the properties of a geom? We’re using the point geom at the moment. How might we change the colour or size of points in our scatter plot? It’s quite intuitive—we set the appropriate arguments in the `geom_point` function. Let’s rebuild our example, this time setting the colour, size and transparency of the points:
```
ggplot(storms, aes(x = pressure, y = wind)) +
geom_point(colour = "steelblue", size = 1.5, alpha = 0.3)
```
The point colour is set with the `colour` argument. There are many ways to specify colours in R, but if we only need to specify a few the simplest is to use a name R recognises. The point size is specified with the `size` argument. The baseline is 1, and so here we increased the point size by assigning this a value of 1\.5\. Finally, we made the points somewhat transparent by setting the value of the `alpha` argument to be less than 1\. In graphical systems the ‘alpha channel’ essentially specifies transparency of something—a value of 0 is taken to mean ‘completely invisible’ and a value of 1 means ‘completely opaque’.
#### Built\-in colours in R
There is nothing special about “steelblue” other than the fact that it is a colour name ‘known’ to R. There are over 650 colour names built into R. To see them, we can use a function called `colours` to print them to the Console. Try it.
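For example, this prints the first few of those built\-in names (a sketch; drop the `head` call to see them all):

```
# peek at a handful of R's built-in colour names
head(colours(), n = 10)
```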
There are other arguments—such as `fill` and `shape`—that can be used to adjust the way the points are rendered. We won’t look at these here. The best way to learn how these work is to simply experiment with them.
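As a small, hedged illustration of the sort of thing to experiment with, shapes 21 to 25 have both an outline colour and a separate fill:

```
# shape 21 is a filled circle: 'colour' sets the outline, 'fill' the interior
ggplot(storms, aes(x = pressure, y = wind)) +
  geom_point(shape = 21, colour = "black", fill = "steelblue", size = 1.5)
```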
The **key message** to take away from this little customisation example is this: if we want to set the properties of a geom in a particular layer, we do so by specifying the appropriate arguments in the `geom_NAME` function that defines that layer. We **do not** change the arguments passed to the `ggplot` function.
#### How should we format **ggplot2** code?
Take another look at that last example. Notice that we split the **ggplot2** definition over two lines, placing each function on its own line. The whole thing will still be treated as a single expression when we do this because each line, apart from the last one, ends in a `+`. R doesn’t care about white space. As long as we leave the `+` at the end of each line R will consider each new line to be part of the same definition. Splitting the different parts of a graphical object definition across lines like this is a very good idea. It makes everything more readable and helps us spot errors. This way of formatting **ggplot2** code is pretty much essential once we start working with complex plots. We will always use this convention from now on.
### 19\.1\.1 The relationship between aesthetics and geom properties.
We’ve seen that we can introduce new information into a plot by setting up additional aesthetics. In the previous chapter we added the information about the time of year an observation was made by mapping the `seasday` variable to the colour aesthetic. Let’s try to add this to our new figure:
```
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
geom_point(colour = "steelblue", size = 1.5, alpha = 0.3)
```
This doesn’t seem to have worked as hoped: the resulting scatter plot looks exactly like the previous one, i.e. all the points are the same colour. What went wrong? We’re still setting the `colour` argument of `geom_point`. When we add a layer, any layer\-specific properties that we set will override the aesthetic mappings. We need to remove the `colour = "steelblue"` from inside `geom_point` to remedy this:
```
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
geom_point(size = 1.5, alpha = 0.3)
```
That’s what we were aiming for. The points are now coloured again according to how they are associated with early (dark blue) or late (light blue) season observations.
The **key message** to take away from this example is this: if we decide to change the properties of the geom in a particular layer, we will override any aesthetic mappings that conflict with our choice of customisation.
19\.2 Working with layer specific position adjustments
------------------------------------------------------
What else might we do to make the plot a little easier to read? Wind speed is only measured to the nearest 5 mph, which is causing many points to be plotted on top of one another. One option to solve this problem is to randomly shuffle the vertical position of each point a little to avoid this over\-plotting. This is called ‘jittering’. We do this by specifying a position adjustment in our layer. Remember, position adjustments are part of individual layers, not the whole plot. Here’s one way to do this:
```
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
geom_point(alpha = 0.3, size = 1.5, position = position_jitter(w = 0, h = 4))
```
We used the `position_jitter` function to associate the necessary information with the `position` argument of `geom_point`. The `w` (width) and `h` (height) arguments of `position_jitter` specify how much to jitter points in the x and y directions. The resulting plot is a little easier to read.
The **key message** to take away from this second customisation example is this: every layer has its own position adjustment, which we can change by setting the `position` argument inside the `geom_NAME` function that defines the corresponding layer. Just keep in mind that, very often, there’s no need to mess about with position adjustments (the defaults are fine).
19\.3 Working with plot specific scales
---------------------------------------
Let’s look at a different way to tweak our plot. So far we have been focussing on customisations that apply in a layer specific manner (geom properties and position adjustments). A second class of customisation applies to the whole plot. Specifically, this new type of customisation applies to the scales used in the plot. Here’s what we said about scales in the last chapter:
> The scale controls how the data is mapped to the aesthetic attributes. A scale takes the data and converts it into something we can perceive, such as an x/y location, or the colour and size of points in a plot. A scale must be defined for every aesthetic in a plot.
Every aesthetic has a scale associated with it. We adjust ‘how the data is mapped to the aesthetic attributes’ by changing some aspect of the corresponding scale. This will seem very abstract at first. As always it’s best understood by example.
We’re going to adjust the scale associated with the y axis aesthetic (‘y’). Specifically, we want to increase the number of ‘guides’ (the horizontal lines inside the plot) and their accompanying labels. Here is how we place guides at 20, 40, 60, 80, etc, on the y axis:
```
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
geom_point(alpha = 0.3, size = 1.5, position = position_jitter(w = 0, h = 4)) +
scale_y_continuous(breaks = seq(20, 160, by = 20))
```
What’s going on here? The functions that adjust a scale all have the general form `scale_XX_YY`. The `XX` bit in the name must reference the relevant aesthetic, while the `YY` part refers to the kind of scale we want to define. The aesthetic we wanted to alter was the y axis. It turns out (though it probably wasn’t obvious) that this is a continuous scale because wind is a numeric variable. This means we had to use the `scale_y_continuous` function to tweak the y axis (there is a `scale_x_continuous` function for altering the x axis). The `breaks` argument just takes a vector containing a numeric sequence and uses this to specify where the guides should be drawn.
Scales are the hardest aspect of **ggplot2** to get to grips with. For one, there are a lot of them—type `scale_` at the Console and hit the tab key to see how many there are. Each of them can take a variety of different arguments. Luckily, the defaults used by **ggplot2** are often good enough that we can arrive at a good plot without having to manipulate the scales. We’ll take a look at a few more options as we progress through different visualisations.
The **key message** to take away from this third customisation example is this: every aesthetic mapping has a scale associated with it. If we want to change how the information associated with an aesthetic is displayed we should change the corresponding scale. For example, if we want to change the way point colours are associated with the `seasday` variable we have to use one of the `scale_colour_YY` functions.
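As a sketch of one such function in action (the particular colours here are arbitrary choices; the point is simply that `seasday` is numeric, so a continuous colour scale such as `scale_colour_gradient` applies):

```
# change the colours used to represent seasday (low vs high values)
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
  geom_point(alpha = 0.3, size = 1.5) +
  scale_colour_gradient(low = "skyblue", high = "navy")
```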
19\.4 Adding titles and labels
------------------------------
What else might we like to tweak? Look at the x and y axis labels. These are just the names of the data variables used to define the aesthetic mapping. These labels aren’t too bad in this case, but they could be more informative. We know “wind” stands for “wind speed”, but someone reading this figure may not realise this immediately. There are also no units – generally a big no\-no for serious figures. Here is how to set the axis labels:
```
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
geom_point(alpha = 0.3, size = 1.5, position = position_jitter(w = 0, h = 4)) +
scale_y_continuous(breaks = seq(20, 160, by = 20)) +
xlab("Atmospheric Pressure (mbar)") + ylab("Wind Speed (mph)")
```
The axes labels are a feature of the whole plot. They do not belong to a particular layer. This is why we don’t alter axis labels by passing arguments to the function that built a layer (`geom_point` in this case). Instead, we use the `xlab` and `ylab` functions to set the x and y labels, respectively, using `+` to add them to our graphical object. If we need to add a title to a graph we can use the `ggtitle` function in the same way.
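For example, adding a title works in just the same way (the title text below is made up purely for illustration):

```
ggplot(storms, aes(x = pressure, y = wind)) +
  geom_point() +
  xlab("Atmospheric Pressure (mbar)") +
  ylab("Wind Speed (mph)") +
  ggtitle("Wind speed and pressure in tropical storms")
```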
The `labs` function provides a more flexible alternative to `xlab` and `ylab`. It’s more flexible because `labs` can be used to change the label of every aesthetic in the plot. For example, if we want to set the labels of the x and y axes, and the label associated with `seasday`, we use:
```
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
geom_point(alpha = 0.3, size = 1.5, position = position_jitter(w = 0, h = 4)) +
scale_y_continuous(breaks = seq(20, 160, by = 20)) +
labs(x = "Atmospheric Pressure (mbar)",
y = "Wind Speed (mph)",
colour = "Day of \nSeason")
```
We snuck one last trick into that last example. Notice that the “Day of Season” label is split over two lines. We did this by inserting the special sequence `\n` into the label text. The `\n` inside a quoted label tells R to start a new line.
19\.5 Themes
------------
The final route to customisation we’ll consider concerns something called the ‘theme’ of a plot. We haven’t considered the **ggplot2** theme system at all yet. In simple terms, **ggplot2** themes deal with the visual aspects of a plot that aren’t directly handled by adjusting geom properties or scales, i.e. the ‘non\-data’ parts of a plot. This includes features such as the colour of the plotting region and the grid lines, whether or not those grid lines are even displayed, the position of labels, the font used in labels, and so on.
The **ggplot2** theme system is extremely powerful. Once we know how to use it, we can set up a custom theme to meet our requirements and then apply it as needed with very little effort. However, it’s not an entirely trivial thing to learn about because there are so many components of every plot. Fortunately there are a range of themes built into **ggplot2** that are easily good enough for producing publication ready figures. Let’s assume we have made a plot object `final_plt` containing all the information and data formatting we want:
```
final_plt <-
ggplot(storms, aes(x = pressure, y = wind, colour = seasday)) +
geom_point(alpha = 0.3, size = 1.5, position = position_jitter(w = 0, h = 4)) +
scale_y_continuous(breaks = seq(20, 160, by = 20)) +
labs(x = "Atmospheric Pressure (mbar)",
y = "Wind Speed (mph)",
colour = "Day of \nSeason")
```
Here’s how to use the built in themes to alter the themes used to plot the `final_plt` object:
```
final_plt + theme_bw()
```
In this example we use `+` with the `theme_bw` function to use the built in ‘black and white’ theme. This removes the grey background that so many people dislike in **ggplot2**.
There aren’t that many themes built into **ggplot2**—type `theme_` at the Console and hit the tab key to see the others. One popular alternative to `theme_bw` is the ‘classic’ theme, via `theme_classic`:
```
final_plt + theme_classic(base_size = 15)
```
This produces a very stripped down plot that’s much closer to those produced by the base graphics system. Notice that we did one more thing here: we set the `base_size` argument to 15 to increase the size of all the text in the figure (the default is 11\). This can be used with any `theme_XX` function to quickly change the relative size of all the text in a plot.
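Beyond the ready\-made `theme_XX` functions, individual ‘non\-data’ elements can be adjusted with the `theme` function itself. Here is a small sketch of the kind of tweak that is possible (moving the legend and dropping the minor grid lines):

```
# start from the black and white theme, then adjust two theme elements
final_plt +
  theme_bw(base_size = 12) +
  theme(legend.position = "bottom",
        panel.grid.minor = element_blank())
```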
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/exploring-one-numeric-variable.html |
Chapter 20 Exploring one numeric variable
=========================================
This chapter will consider how to go about exploring the sample distribution of a numeric variable. Using the `storms` data from the **nasaweather** package (remember to load and attach the package), we’ll review some basic descriptive statistics and visualisations that are appropriate for numeric variables.
20\.1 Understanding numerical variables
---------------------------------------
We’ll work with the `wind` and `pressure` variables in `storms` to illustrate the key ideas. Wind speed and atmospheric pressure are clearly numeric variables. We can say a bit more. They are both numeric variables that are measured on a ratio scale because zero really is zero: it makes sense to say that 20 mph is twice as fast as 10 mph and 1000 mbar exerts twice as much pressure on objects as 500 mbar. Are these continuous or discrete variables? Think about the possible values that wind speed and atmospheric pressure can take. A wind speed and atmospheric pressure of 40\.52 mph and 1000\.23 mbar are perfectly reasonable values, so fundamentally, these are continuous variables.
The simplest way to understand our data, if not the most effective, is to view it in its raw form. We can always use the `View` function to do this in RStudio. However, since this doesn’t work on a web page, we’ll take a quick look at the first 100 values of the `wind` and `pressure` variables in `storms`. We can print these to the Console by extracting each of them with the `$` operator, using the `[` construct to subset the first 100 elements of each vector:
```
# first 100 values of atmospheric pressure
storms$pressure[1:100]
```
```
## [1] 1005 1004 1003 1001 997 995 987 988 988 990 990 993 993 994
## [15] 995 995 992 990 988 984 982 984 989 993 995 996 997 1000
## [29] 997 990 992 992 993 1019 1019 1018 1017 1016 1013 1011 1009 1007
## [43] 1004 1001 997 997 997 997 996 995 993 991 990 989 1012 1012
## [57] 1012 1011 1011 1011 1010 1010 1006 1008 1009 1010 1009 1006 1006 1005
## [71] 1004 999 999 997 991 995 997 995 994 994 995 996 997 997
## [85] 998 998 999 999 1000 1000 1001 1002 1003 1005 1005 1009 1008 1008
## [99] 1008 1007
```
```
# first 100 values of wind speed
storms$wind[1:100]
```
```
## [1] 30 30 35 40 50 60 65 65 65 60 60 45 30 35 35 40 40 45 45 45 50 50 50
## [24] 45 40 40 40 40 40 40 40 35 35 20 20 20 25 25 30 30 30 35 40 60 60 55
## [47] 50 50 50 50 50 50 45 40 25 25 25 30 30 30 30 30 35 35 35 40 40 45 45
## [70] 45 45 50 50 55 60 60 60 55 55 55 50 50 50 50 50 50 50 50 50 50 50 50
## [93] 50 50 50 25 30 30 30 30
```
Notice that even though `pressure` is a continuous variable it looks like a discrete variable because it has only been measured to the nearest whole millibar. Similarly, `wind` is only measured to the nearest 5 mph. These differences reflect the limitations of the methodology used to measure each variable, e.g. measuring wind speed is hard because it varies so much in space and time.
This illustrates an important idea: we can’t just look at the values a numeric variable takes in a sample to determine whether it is discrete or continuous. In one sense the `pressure` variable is a discrete variable because of the way it was measured, even though we know that atmospheric pressure is really continuous.
Whether we treat it as continuous or discrete is an analysis decision. These sorts of distinctions often don’t matter too much when we’re exploring data, but they can matter when we’re deciding how to analyse it statistically. We have to make a decision about how to classify a variable based on knowledge of its true nature and the measurement process. For example, imagine that we were only able to measure wind speed to the nearest 25 mph. In this situation we would only “see” a few different categories of wind speed, so it might be sensible to treat the `wind` variable as an ordinal, categorical variable.
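To make that thought experiment concrete, here is a quick sketch of what such a coarse measurement might look like. The 25 mph rounding is hypothetical, not a feature of the real data:

```
# pretend wind speed had been recorded only to the nearest 25 mph
wind_coarse <- round(storms$wind / 25) * 25
# only a handful of distinct 'categories' remain
table(wind_coarse)
```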
20\.2 Graphical summaries
-------------------------
We only looked at the first 100 values of the `wind` and `pressure` variables because the `storms` data set is too large to look at everything at once. It’s hard to say much about the sample distribution of these two variables by looking at such a small subset of values. If the data set has been sorted, these might not even be representative of the wider sample.
What else might we do? One useful tool is ‘binning’. The idea behind binning a variable is very simple. It involves two steps. First, we take the set of possible values of our numeric variable and divide it into equal sized, non\-overlapping intervals. We can use any interval size we like, as long as it is wide enough to capture more than one observation at least some of the time, though in practice some choices are more sensible than others. Second, we work out how many values of our variable fall inside each bin. The resulting set of counts tells us quite a lot about the sample distribution.
Let’s see how this works with an example. Binning is very tedious to do by hand, but as we might expect, there are a couple of base R functions that can do this for us: `cut` and `table`. Here’s how to use them to bin the `pressure` variable into 5 mbar intervals:
```
pressure_bins <- cut(storms$pressure,
                     breaks = seq(900, 1020, by = 5), right = FALSE)
table(pressure_bins)
```
```
## pressure_bins
## [900,905) [905,910) [910,915) [915,920) [920,925) [925,930)
## 0 1 2 2 7 4
## [930,935) [935,940) [940,945) [945,950) [950,955) [955,960)
## 14 23 35 44 47 39
## [960,965) [965,970) [970,975) [975,980) [980,985) [985,990)
## 79 81 156 127 170 240
## [990,995) [995,1000) [1000,1005) [1005,1010) [1010,1015) [1015,1020)
## 252 291 466 515 134 18
```
We won’t explain how `cut` and `table` work as we only need to understand the output. The output of `table` is a named numeric vector. The names of each element describe an interval, and the corresponding values are the observation counts in that interval. What does this tell us? It shows that most pressure observations associated with storm systems are around 1000 mbar. Values much above 1010 mbar are rare, but a wide range of values below this is possible, with lower and lower values becoming less frequent.
These binned data tell us quite a lot about the sample distribution of `pressure`. It’s still difficult to perceive the information in this output when it is presented as a series of numbers. What we really need is some kind of visualisation to help us interpret these numbers. This is what a **histogram** provides. Histograms are designed to summarise the sample distribution of a variable by showing the counts of binned data as a series of bars. The position and width of each bar corresponds to an interval and the height shows the count. Here’s a histogram that corresponds to the binned data we just made:
This gives a clear summary of the sample distribution of pressure. It reveals: 1\) the most common values, which are just above 1000 mbar; 2\) the range of the data, which is about 100 mbar; and 3\) the shape of the distribution, which is asymmetric, with a tendency toward low values.
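Purely as a sketch of the long way round, a histogram like this could be assembled by hand from the binned counts, using a bar layer whose heights are the counts:

```
# turn the binned counts into a small data frame, then draw one bar per bin
pressure_bins <- cut(storms$pressure,
                     breaks = seq(900, 1020, by = 5), right = FALSE)
bin_counts <- as.data.frame(table(pressure_bins))
# geom_bar with the identity stat uses the counts 'as is' for bar heights
ggplot(bin_counts, aes(x = pressure_bins, y = Freq)) +
  geom_bar(stat = "identity")
```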
We used `ggplot2` to make that histogram. One way to do this is to build a new data set containing the binned counts and then use it to construct the bars manually, as in the sketch above. There is a much easier way to achieve the same result though. Rather than do it in one step with a single R expression, we will break the process up into two steps, storing the **ggplot2** object as we build it.
The first step uses the `ggplot` function with `aes` to set up the default data and aesthetic mapping:
```
plt_hist <- ggplot(storms, aes(x = pressure))
```
This is no different from the extended scatter plot example we stepped through earlier. The only difference is that a histogram requires only one aesthetic mapping. We supplied the argument `x = pressure` to `aes` because we want to map the intervals associated with `pressure` to the x axis. We don’t need to supply an aesthetic mapping for the y axis because `ggplot2` is going to handle this for us.
The second step adds a layer to the `plt_hist` object. We need to find the right `geom_XX` function to do this. Unsurprisingly, this is called `geom_histogram`:
```
plt_hist <- plt_hist + geom_histogram()
summary(plt_hist)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## mapping: x = ~pressure
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
## -----------------------------------
## geom_bar: na.rm = FALSE
## stat_bin: binwidth = NULL, bins = NULL, na.rm = FALSE, pad = FALSE
## position_stack
```
Look at the text of the summary of the added layer below the `----`. This shows that `geom_histogram` adds a stat to the layer, the `stat_bin`. What this means is that `ggplot2` is going to take the raw `pressure` data and bin it for us. Everything we need to plot a histogram is now set up. Here’s the resulting plot:
```
plt_hist
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
The resulting plot is not quite the same as the example we saw above because it uses different bins. It’s a good idea to play around with the bin size to arrive at an informative histogram. We set the properties of the `geom_histogram` to tweak this kind of thing—the `binwidth` argument adjusts the width of the bins used. Let’s construct the histogram again with 7 mbar wide bins, as well as adjust the colour scheme and axis labels a bit:
```
ggplot(storms, aes(x = pressure)) +
geom_histogram(binwidth = 7, fill = "steelblue", colour="darkgrey", alpha = 0.8) +
xlab("Atmospheric Pressure (mbar)") + ylab("Count")
```
Whether or not that colour scheme is an improvement is a matter of taste. Mostly we wanted to demonstrate how the `fill`, `colour`, and `alpha` arguments change the output. Notice that the effect of increasing the bin width is to ‘smooth’ the histogram, i.e. this version looks less jagged than the last.
We can use pretty much the same R code to produce a histogram summarising the wind speed sample distribution:
```
ggplot(storms, aes(x = wind)) +
geom_histogram(binwidth = 10, fill = "steelblue", colour="darkgrey", alpha = 0.8) +
xlab("Wind Speed (mph)") + ylab("Count")
```
The only things that changed in this example were the aesthetic mapping and the bin width, which we set to 10\. It reveals that the wind speed during a storm tends to be about 40 mph, though the range of wind speeds is about 100 mph and the shape of the distribution is asymmetric.
We have to choose the bin widths carefully. Remember that wind speed is measured to the nearest 5 mph. This means we should choose a bin width that is a multiple of 5 to produce a meaningful histogram. Look what happens if we set the bin width to 3:
```
ggplot(storms, aes(x = wind)) +
geom_histogram(binwidth = 3, fill = "steelblue", colour="darkgrey", alpha = 0.8) +
xlab("Wind Speed (mph)") + ylab("Count")
```
We end up with gaps in the histogram because some intervals do not include multiples of 5\. This is not a good histogram because it fails to reliably summarise the distribution. Similar problems would occur if we chose a bin width that is greater than, but not a multiple of 5, because different bins would cover a different number of values that make up the `wind` variable. The take home message is that we have to know our data in order to produce meaningful summaries of it.
We’ll finish up this subsection by briefly reviewing one alternative to the histogram. Histograms are good for visualising sample distributions when we have a reasonable sample size (at least dozens, and ideally, hundreds of observations). They aren’t very effective when the sample is quite small. In this ‘small data’ situation it’s better to use something called a **dot plot**[8](#fn8).
Let’s use **dplyr** to extract a small(ish) subset of the storms data:
```
storms_small <-
storms %>%
filter(year == 1998, type == "Hurricane")
```
This just extracts the subset of hurricane observations from 1998\. The **ggplot2** code to make a dot plot with these data is very similar to the histogram case:
```
ggplot(storms_small, aes(x = pressure)) +
geom_dotplot(binwidth = 2) +
xlab("Atmospheric Pressure (mbar)") + ylab("Count")
```
Here, each observation in the data adds one dot, and dots that fall into the same bin are stacked up on top of one another. The resulting plot displays the same information about a sample distribution as a histogram, but it tends to be more informative when there are relatively few observations.
20\.3 Descriptive statistics
----------------------------
So far we’ve been describing the properties of sample distributions in very general terms, using phrases like ‘most common values’ and ‘the range of the data’ without really saying what we mean. Statisticians have devised specific terms to describe these kinds of properties, as well as different descriptive statistics to quantify them. The two that matter most are the **central tendency** and the **dispersion**:
* A measure of **central tendency** describes a typical (‘central’) value of a distribution. Most people know at least one measure of central tendency. The “average” that they calculated at school is the arithmetic mean of a sample. There are many different measures of central tendency, each with their own pros and cons. Take a look at the [Wikipedia page](http://en.wikipedia.org/wiki/Central_tendency) to see the most common ones. Among these, the median is the one that is used most often in exploratory analyses.
* A measure of **dispersion** describes how spread out a distribution is. Dispersion measures quantify the variability or scatter of a variable. If one distribution is more dispersed than another it means that in some sense it encompasses a wider range of values. What this means in practice depends on the kind of measure we’re working with. Basic statistics courses tend to focus on the variance, and its square root, the standard deviation. There [are others](http://en.wikipedia.org/wiki/Statistical_dispersion) though.
### 20\.3\.1 Measuring central tendency
There are two descriptive statistics that are typically used to describe the central tendency of the sample distribution of numeric variables. The first is the **arithmetic mean** of a sample. People often say ‘empirical mean’, ‘sample mean’ or just ‘the mean’ when referring to the arithmetic sample mean. This is fine, but keep in mind that there are other kinds of mean (e.g. the harmonic mean and the geometric mean)[9](#fn9).
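For completeness, here is a minimal sketch showing how those two alternatives could be calculated for a strictly positive variable such as wind speed. We won’t use them again in this chapter:
```
exp(mean(log(storms$wind))) # geometric mean
1 / mean(1 / storms$wind)   # harmonic mean
```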
How do we calculate the arithmetic sample mean of a variable? Here’s the mathematical definition: \\\[
\\bar{x} \= \\frac{1}{N}\\sum\\limits\_{i\=1}^{N}{x\_i}
\\] We need to define the terms to make sense of this. The \\(\\bar{x}\\) stands for the arithmetic sample mean. The \\(N\\) in the right hand side of this expression is the sample size, i.e. the number of observations in a sample. The \\(x\_i\\) refer to the set of values the variable takes in the sample. The \\(i\\) is an index used to reference each observation: the first observation has value \\(x\_1\\), the second has value \\(x\_2\\), and so on, up to the last value, \\(x\_N\\). Finally, the \\(\\Sigma\_{i\=1}^{N}\\) stands for summation (‘adding up’) from \\(i \= 1\\) to \\(N\\).
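To see how this formula maps onto R code, here is a sketch that applies the definition directly to the wind speed data. It should agree with the result of the `mean` function used next:
```
# 'by hand': add up the values and divide by the sample size, N
sum(storms$wind) / length(storms$wind)
```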
Most people have used this formula at some point even though they may not have realised it. The `mean` function in R will calculate the arithmetic mean for us:
```
mean(storms$wind)
```
```
## [1] 54.68329
```
This tells us that the arithmetic sample mean of wind speed is 55 mph. How useful is this?
One limitation of the arithmetic mean is that it is affected by the shape of a distribution. It’s very sensitive to the extremes of a sample distribution. This is why, for example, it does not make much sense to look at the mean income of workers in a country to get a sense of what a ‘typical’ person earns. Income distributions are highly asymmetric, and those few who are lucky enough to earn very good salaries tend to shift the mean upward and well past anything that is really ‘typical’. The sample mean is also strongly affected by the presence of ‘outliers’. It’s difficult to give a precise definition of outliers—the appropriate definition depends on the context—but roughly speaking, these are unusually large or small values.
Because the sample mean is sensitive to the shape of a distribution and the presence of outliers we often prefer a second measure of central tendency: the **sample median**. The median of a sample is the number separating the upper half from the lower half[10](#fn10). We can find the sample median in R with the `median` function:
```
median(storms$wind)
```
```
## [1] 50
```
The sample median of wind speed is 50 mph. This is still to the right of the most common values of wind speed, but it is shifted less than the mean.
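A quick toy example (using made\-up numbers rather than the `storms` data) shows the difference in sensitivity:
```
x <- c(2, 3, 3, 4, 5)
mean(x)   # 3.4
median(x) # 3
# now add a single extreme value (an 'outlier')
x_out <- c(x, 100)
mean(x_out)   # 19.5 -- dragged far to the right
median(x_out) # 3.5  -- barely moves
```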
#### 20\.3\.1\.1 What about ‘the mode’?
What does the phrase “the most common values” (e.g. of wind speed) really mean when describing a distribution? In fact, this is an indirect reference to something called the **mode** of the distribution. The mode of a distribution is essentially its peak, i.e. it locates the most likely value of a variable. Notice that we didn’t use the phrase ‘sample mode’. It’s easy to calculate the mode of a theoretical distribution. Unfortunately, it’s not a simple matter to reliably estimate the mode of a sample from such a distribution.
If a numeric variable is discrete, and we have a lot of data, we can sometimes arrive at an estimate of the mode by tabulating the number of observations in each numeric category. Although in truth wind speed is a continuous variable, it is only measured to the nearest 5 mph in the `storms` data set. Therefore, it looks like a discrete variable. We can use the `table` function here to tabulate the number of observations at each value:
```
table(storms$wind)
```
```
##
## 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100
## 3 122 168 345 234 212 228 208 136 154 189 136 119 94 61 76 40 57
## 105 110 115 120 125 130 135 140 145 150 155
## 26 30 42 27 20 8 5 1 2 2 2
```
The names of each element in the resulting vector are the recorded wind speeds and the corresponding values are the associated counts of each value. The most common wind speed is 30 mph, with 345 observations. The categories either side of this (25 and 35 mph) contain much lower counts. This provides a fairly good indication that the mode of the `wind` distribution is about 30 mph.
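If we want R to pick out the modal category from a table like this for us, a small sketch using `which.max` will do the job:
```
wind_counts <- table(storms$wind)
names(which.max(wind_counts)) # the most frequent value ("30")
max(wind_counts)              # and its count (345)
```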
Tabulating the counts of numeric categories to identify the likely mode is only sensible when a numerical variable is genuinely discrete, or looks discrete as a result of how it was measured. Even then, there is no guarantee that this approach will produce a sensible estimate of the ‘true’ mode. If a variable is continuous then tabulating counts simply does not work. Methods exist to estimate a mode from a sample, but they are not simple. Nonetheless, it’s important to know what the mode represents and to be able to identify its approximate value by inspecting a histogram.
### 20\.3\.2 Measuring dispersion
There are many ways to quantify the dispersion of a sample distribution. The most important quantities from the standpoint of statistics are the sample **variance** and **standard deviation**. The sample variance (\\(s^2\\)) is ‘the sum of squared deviations’ (i.e. the differences) of each observation from the sample mean, divided by the sample size minus one. Here’s the mathematical definition: \\\[
s^2 \= \\frac{1}{N\-1}\\sum\\limits\_{i\=1}^{N}{(x\_i\-\\bar{x})^2}
\\] The meaning of these terms is the same as for the sample mean. The \\(\\bar{x}\\) is the sample mean, the \\(N\\) is the sample size, and the \\(x\_i\\) refers to the set of values the variable takes. We don’t have to actually apply this formula in R. There’s a function to calculate the sample variance:
```
var(storms$wind)
```
```
## [1] 668.1444
```
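To connect the formula to the function, here is a sketch that applies the definition directly. It should reproduce the value returned by `var` above:
```
wind <- storms$wind
# sum of squared deviations from the mean, divided by N - 1
sum((wind - mean(wind))^2) / (length(wind) - 1)
```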
What does that number actually mean? Variances are always non\-negative. A small variance indicates that observations tend to be close to the mean (and to one another), while a high variance indicates that observations are very spread out. A variance of zero only occurs if all values are identical. However, it is difficult to interpret whether a sample variance is really “small” or “large” because the calculation involves squared deviations. For example, changing the measurement scale of a variable by a factor of 10 produces a 100\-fold (10²) change in the variance.
The variance is an important quantity in statistics that crops up over and over again. Many common statistical tools use changes in variance to formally compare how well different models describe a data set. However, it is very difficult to interpret variances, which is why we seldom use them in exploratory work. A better statistic for describing sample dispersion is a closely related quantity called the **standard deviation** of the sample, usually denoted \\(s\\). The standard deviation is the square root of the variance. We calculate it using the `sd` function:
```
sd(storms$wind)
```
```
## [1] 25.84849
```
Why do we prefer the standard deviation over the variance? Consider the wind speed again. The standard deviation of the wind speed sample is 26\. Take another look at the wind speed histogram. This shows that the wind speed measurements span about 5 standard deviations. If we had instead measured wind speed in kilometres per hour (kph), the standard deviation of the sample would be about 42, because 1 mph \~ 1\.6 kph. If we plot the histogram of wind speed in kph it is still the case that the data are spanned by about 5 standard deviations. The variance, on the other hand, increases from approximately 668 to about 1730, a factor of 1\.6². This is the reason we often use the standard deviation to compare dispersion: it reflects the dispersion we perceive in the data.
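Here is a small sketch of that rescaling argument, assuming the usual conversion of roughly 1\.609 kph per mph:
```
wind_kph <- storms$wind * 1.609
sd(wind_kph)  # about 1.6 times the sd in mph (roughly 42)
var(wind_kph) # about 1.6^2 times the variance in mph (roughly 1730)
```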
The sample standard deviation is not without problems though. Like the sample mean, it is sensitive to the shape of a distribution and the presence of outliers. A measure of dispersion that is more robust to these kinds of features is the **interquartile range**.
#### What are quartiles?
We need to know what a quartile is to understand the interquartile range. Three quartiles are defined for any sample. These divide the data into four equal sized groups, from the set of smallest numbers up to the set of largest numbers. The second quartile (\\(Q\_2\\)) is the median, i.e. it divides the data into an upper and lower half. The first quartile (\\(Q\_1\\)) is the number that divides the lower 50% of values into two equal sized groups. The third quartile (\\(Q\_3\\)) is the number that divides the upper 50% of values into two equal sized groups.
The quartiles also have other names. The first quartile is sometimes called the lower quartile, or the 25th percentile; the second quartile (the median) is the 50th percentile; and the third quartile is also called the upper quartile, or the 75th percentile.
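If we want to see the quartiles themselves, the base R `quantile` function returns them (along with the minimum and maximum) by default:
```
# 0% = minimum, 25% = Q1, 50% = Q2 (the median), 75% = Q3, 100% = maximum
quantile(storms$wind)
```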
The interquartile range (IQR) is defined as the difference between the third and first quartile. This means the IQR contains the middle 50% of values of a variable. Obviously, the more spread out the data are, the larger the IQR will be. The reason we prefer to use IQR to measure dispersion is that it only depends on the data in the “middle” of a sample distribution. This makes it robust to the presence of outliers. We can use the `IQR` function to find the interquartile range of the wind variable:
```
IQR(storms$wind)
```
```
## [1] 35
```
The IQR is used as the basis for a useful data summary plot called a ‘box and whiskers’ plot. We’ll see how to construct this in a later chapter.
### 20\.3\.3 Skewness
A well\-defined hierarchy of measures exists to describe and quantify the shape of distributions. It’s essential to know about the first two, central tendency and dispersion, because these are the basis of many standard analyses. The next most important aspect of a distribution is its **skewness** (or just ‘skew’). Skewness describes the asymmetry of a distribution. Just as with central tendency and dispersion, there are many different ways to quantify the skewness of a sample distribution. These are quite difficult to interpret because their interpretation depends on other features of a distribution. We’ll just explore skewness in the simplest case: the skewness of a **unimodal** distribution.
A unimodal distribution is one that has a single peak. We can never say for certain that a sample distribution is unimodal or not—unimodality is really a property of theoretical distributions—but with enough data and a sensible histogram we can at least say that a distribution is ‘probably’ unimodal. The histograms we produced to describe the sample distributions of `wind` and `pressure` certainly appear to be unimodal. Each has a single, distinct peak.
These two unimodal distributions are also asymmetric—they exhibit skewness. The `pressure` distribution is said to be skewed to the left because it has a long ‘tail’ that spreads out in this direction. In contrast, we say that the `wind` distribution is skewed to the right, because it has a long ‘tail’ that spreads out to the right. Left skewness and right skewness are also called negative and positive skew, respectively. A sample distribution that looks symmetric is said to have (approximately) zero skew[11](#fn11).
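Although we won’t rely on a numerical measure of skewness here, one simple option is the standardised third moment. The little `skew` function below is just a sketch of that idea (it is not part of base R, and other definitions exist); its sign follows the convention described above:
```
# positive result -> right (positive) skew; negative result -> left (negative) skew
skew <- function(x) mean((x - mean(x))^3) / sd(x)^3
skew(storms$wind)     # expect a positive value (right skew)
skew(storms$pressure) # expect a negative value (left skew)
```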
The reason we care about skewness is that many common statistical models assume that the distributions we’re sampling from, after controlling for other variables, are not skewed. This is an issue for a statistics course. For now we just need to understand what skewness means and be able to describe distributions in terms of left (negative) and right (positive) skew.
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/exploring-categorical-variables.html |
Chapter 21 Exploring categorical variables
==========================================
This chapter will consider how to go about exploring the sample distribution of a categorical variable. Using the `storms` data from the **nasaweather** package (remember to load and attach the package), we’ll review some basic descriptive statistics and visualisations that are appropriate for categorical variables.
21\.1 Understanding categorical variables
-----------------------------------------
Exploring categorical variables is generally simpler than working with numeric variables because we have fewer options, or at least life is simpler if we only require basic summaries. We’ll work with the `year` and `type` variables in `storms` to illustrate the key ideas.
Which kind of categorical variable is `type`? There are four storm categories in `type`. We can use the `unique` function to print these for us:
```
unique(storms$type)
```
```
## [1] "Tropical Depression" "Tropical Storm" "Hurricane"
## [4] "Extratropical"
```
The first question we should ask is, is `type` an ordinal or nominal variable? It’s hard to know how to classify `type` without knowing something about tropical storms. Some googling indicates that `type` can reasonably be considered an ordinal variable: a tropical depression is the least severe class and a hurricane is the most severe; in between are the extratropical and tropical storm categories.
What about the `year` variable? Years are obviously ordered from early to late and we might be interested in how some aspect of our data changes over time. In this case we might consider treating year either as a numeric variable, or perhaps as an ordinal categorical variable. Alternatively, if the question is simply, ‘do the data vary from one year to the next’ without any concern for trends, it’s perfectly reasonable to treat year as a nominal categorical variable.
This illustrates an important idea: the classification of a variable will often depend on the objectives of an analysis. The classification of a variable matters because it influences how we choose to summarise it, how we interpret its relationship with other variables, and whether a specific statistical model is appropriate for our data or not. Fortunately, our choice of classification is less important when we are just trying to summarise the variable numerically or graphically. For now, let’s assume that it’s fine to treat year as a categorical variable.
### 21\.1\.1 Numerical summaries
When we calculate summaries of categorical variables we are aiming to describe the sample distribution of the variable, just as with numeric variables. The general question we need to address is, ‘what are the relative frequencies of different categories?’ We need to understand which categories are common and which are rare. Since a categorical variable takes a finite number of possible values, the simplest thing to do is tabulate the number of occurrences of each type. We’ve seen how the `table` function is used to do this:
```
table(storms$type)
```
```
##
## Extratropical Hurricane Tropical Depression
## 412 896 513
## Tropical Storm
## 926
```
This shows that the numbers of observations associated with hurricanes and tropical storms are about equal, that the numbers associated with extratropical systems and tropical depressions are similar, and that the former pair of categories is more common than the latter. This indicates that in general, storm systems in Central America spend relatively more time in the more severe classes.
Raw frequencies give us information about the rates of occurrence of different categories in a dataset. However, it’s difficult to compare raw counts across different data sets if the sample sizes vary (which they usually do). This is why we often convert counts to proportions. To do this, we divide each count by the total count across all categories. This is easy to do in R because division is vectorised:
```
type_counts <- table(storms$type)
type_counts / sum(type_counts)
```
```
##
## Extratropical Hurricane Tropical Depression
## 0.1499818 0.3261740 0.1867492
## Tropical Storm
## 0.3370950
```
So about 2/3 of observations are associated with hurricanes and tropical storms, with a roughly equal split, and the remaining 1/3 associated with less severe storms.
What about measuring the central tendency of a categorical sample distribution? Various measures exist, but these tend to be less useful than those used to describe numeric variables. We can find the **sample mode** of ordinal and nominal variables easily though (in contrast to numeric variables, where it is difficult to define). This is just the most common category. For example, the tropical storm category is the modal value of the `type` variable. Only just though. The proportion of tropical storm observations is 0\.34, while the proportion of hurricane observations is 0\.33\. These are very similar, and it’s not hard to imagine that the modal category might have been hurricanes in a different sample. The sample mode is sensitive to chance variation when two categories occur at similar frequencies.
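Since the mode is just the most frequent category, it takes one line to compute from a frequency table. Here’s a minimal sketch (not part of the original text) using base R:
```
# the sample mode is the name of the most frequent category
type_counts <- table(storms$type)
names(which.max(type_counts))
```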
It is possible to calculate a **sample median** of a categorical variable, but only for the ordinal case. The median value is the one that lies in the middle of an ordered set of values—it makes no sense to talk about “the middle” of a set of nominal values that have no inherent order. Unfortunately, even for ordinal variables the sample median is not precisely defined. Imagine that we’re working with a variable with only two categories, ‘big’ vs. ‘small’, where exactly 50% of the values are ‘small’ and 50% are ‘big’. What is the median in this case? Because the median is not always well\-defined, the developers of base R have chosen not to implement a function to calculate the median of ordinal variables (a few packages contain functions to do this though).
#### Be careful with `median`
Unfortunately, if we apply the `median` function to a character vector it will give us an answer, e.g. `median(storms$type)` will spit something out. It is very likely to give us the wrong answer though. R has no way of knowing which categories are “high” and which are “low”, so it just sorts the elements of `type` alphabetically and then finds the middle value of this vector. If we really have to find the median of an ordinal variable we can do it by first converting the categories to integers—assigning 1 to the lowest category, 2 to the next lowest, and so on—and then applying the `median` function to the resulting integer codes.
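As a hedged sketch of that recipe (this code is not from the original text, and the severity ordering is the one argued for above), we might do something like:
```
# a minimal sketch: median of an ordinal variable via integer codes
# (the ordering below is the severity ordering assumed in the text)
type_order <- c("Tropical Depression", "Extratropical", "Tropical Storm", "Hurricane")
type_int <- match(storms$type, type_order)  # 1 = least severe, 4 = most severe
# note: with an even number of observations the median of the codes
# could fall between two categories, which is exactly the ambiguity above
type_order[median(type_int)]
```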
21\.2 Graphical summaries of categorical variables
--------------------------------------------------
The most common graphical tool used to summarise a categorical variable is a bar chart. A bar chart (or bar graph) is a plot that presents summaries of grouped data with rectangular bars. The lengths of the bars are proportional to the values they represent. When summarising a single categorical variable, the length of the bars should show the raw counts or proportions of each category.
Constructing a bar graph to display the counts is very easy with `ggplot2`. We will do this for the `type` variable. As always, we start by using the `ggplot` function to construct a graphical object containing the necessary default data and aesthetic mapping.
```
bar_plt <- ggplot(storms, aes(x = type))
```
We’ve called the object `bar_plt`, for obvious reasons. Notice that we only need to define one aesthetic mapping: we mapped `type` to the x axis. This produces a bar plot with vertical bars.
From here we follow the usual **ggplot2** workflow, meaning the next step is to add a layer using one of the `geom_XX` functions. There are two functions we can use to create bar charts in **ggplot2**, `geom_bar` and `geom_col`. By default `geom_bar` counts the number of observations in each category, whilst `geom_col` plots values that are already present in the data frame. In this case, since we want `ggplot2` to count the number of storm observations of each `type` for us, we will use `geom_bar`:
```
bar_plt <- bar_plt + geom_bar()
summary(bar_plt)
```
```
## data: name, year, month, day, hour, lat, long, pressure, wind,
## type, seasday [2747x11]
## mapping: x = ~type
## faceting: <ggproto object: Class FacetNull, Facet, gg>
## compute_layout: function
## draw_back: function
## draw_front: function
## draw_labels: function
## draw_panels: function
## finish_data: function
## init_scales: function
## map_data: function
## params: list
## setup_data: function
## setup_params: function
## shrink: TRUE
## train_scales: function
## vars: function
## super: <ggproto object: Class FacetNull, Facet, gg>
## -----------------------------------
## geom_bar: width = NULL, na.rm = FALSE
## stat_count: width = NULL, na.rm = FALSE
## position_stack
```
Look at the layer information below `----`. The `geom_bar` function sets the stat to “count”. Counting a categorical variable is analogous to binning a numeric variable. The only difference is that there is no need to specify bin widths because `type` is categorical, i.e. `ggplot2` will sum up the number of observations associated with every category of `type`. Here’s the resulting figure:
```
bar_plt
```
This is the same summary information we produced using the `table` function, only now it’s presented in graphical form. We can customise this bar graph if needed with functions like `xlab` and `ylab`, and by setting various properties inside `geom_bar`. For example:
```
ggplot(storms, aes(x = type)) +
geom_bar(fill = "orange", width = 0.7) +
xlab("Storm Type") + ylab("Number of Observations")
```
The only new thing here is that we used the `width` argument of `geom_bar` to make the bars a little narrower than the default.
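For comparison, the same figure can be built with `geom_col` if we count the observations ourselves first. This is a sketch rather than anything from the original text; it uses base R’s `table` to do the counting:
```
# a sketch: the geom_col route, plotting counts we computed ourselves
type_counts_df <- as.data.frame(table(storms$type))
names(type_counts_df) <- c("type", "n")
ggplot(type_counts_df, aes(x = type, y = n)) +
  geom_col(fill = "orange", width = 0.7) +
  xlab("Storm Type") + ylab("Number of Observations")
```
Either route produces the same figure.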
There is one slight problem with this graph: the order in which the different groups are presented does not reflect the ordinal scale. This occurs because `ggplot2` does not “know” that we want to treat `type` as an ordinal variable. There is no way for `ggplot2` to “guess” the appropriate order, so it uses the alphabetical ordering of the category names to set the order of the bars.
To fix this we need to customise the scale associated with the ‘x’ aesthetic. We can start by making a short character vector containing all the category names in the focal variable, ensuring these are listed in the order they need to be plotted in:
```
ords <- c("Tropical Depression", "Extratropical", "Tropical Storm", "Hurricane")
```
Keep an eye on the spelling too—R is not forgiving of spelling errors. We use this with the `limits` argument of the `scale_x_discrete` function to fix the ordering:
```
ggplot(storms, aes(x = type)) +
geom_bar(fill = "orange", width = 0.7) +
scale_x_discrete(limits = ords) +
xlab("Storm Type") + ylab("Number of Observations")
```
We had to use one of the `scale_x_YY` functions here because we needed to change the way an aesthetic appears. We use `scale_x_discrete` because ‘discrete’ is **ggplot2**\-speak for ‘categorical’, which is what we have mapped to the ‘x’ aesthetic.
What else might we change? The categories of `type` have quite long names, meaning the axis labels are all bunched together. One way to fix this is to make the labels smaller or rotate them via the ‘themes’ system. Here’s an alternative solution: just flip the x and y axes to make a horizontal bar chart. We can do this with the `coord_flip` function (this is new):
```
ggplot(storms, aes(x = type)) +
geom_bar(fill = "orange", width = 0.7) +
scale_x_discrete(limits = ords) +
coord_flip() +
xlab("Storm Type") + ylab("Number of Observations")
```
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/relationships-between-two-variables.html |
Chapter 22 Relationships between two variables
==============================================
This chapter is about exploring the **associations** between pairs of variables in a sample. These are called **bivariate associations**. An association is any relationship between two variables that makes them dependent, i.e. knowing the value of one variable gives us some information about the possible values of the second variable. The main goal of this chapter is to show how to use descriptive statistics and visualisations to explore associations among different kinds of variables.
22\.1 Associations between numeric variables
--------------------------------------------
### 22\.1\.1 Descriptive statistics
Statisticians have devised various different ways to quantify an association between two numeric variables in a sample. The common measures seek to calculate some kind of **correlation coefficient**. The terms ‘association’ and ‘correlation’ are closely related; so much so that they are often used interchangeably. Strictly speaking correlation has a narrower definition: a correlation is defined by a metric (the ‘correlation coefficient’) that quantifies the degree to which an association tends to a certain pattern.
The most widely used measure of correlation is **Pearson’s correlation coefficient** (also called the Pearson product\-moment correlation coefficient). Pearson’s correlation coefficient is something called the covariance of the two variables, divided by the product of their standard deviations. The mathematical formula for the Pearson’s correlation coefficient applied to a sample is: \\\[
r\_{xy} \= \\frac{1}{N\-1}\\sum\\limits\_{i\=1}^{N}{\\frac{x\_i\-\\bar{x}}{s\_x} \\frac{y\_i\-\\bar{y}}{s\_y}}
\\] We’re using \\(x\\) and \\(y\\) here to refer to each of the variables in the sample. The \\(r\_{xy}\\) denotes the correlation coefficient, \\(s\_x\\) and \\(s\_y\\) denote the standard deviation of each sample, \\(\\bar{x}\\) and \\(\\bar{y}\\) are the sample means, and \\(N\\) is the sample size.
Remember, a correlation coefficient quantifies the degree to which an association tends to *a certain pattern*. In the case of Pearson’s correlation coefficient, the coefficient is designed to summarise the strength of a **linear** (i.e. ‘straight line’) association. We’ll return to this idea in a moment.
Pearson’s correlation coefficient takes a value of 0 if two variables are uncorrelated, and a value of \+1 or \-1 if they are perfectly related. ‘Perfectly related’ means we can predict the exact value of one variable given knowledge of the other. A positive value indicates that high values of one variable are associated with high values of the second. A negative value indicates that high values of one variable are associated with low values of the second. The words ‘high’ and ‘low’ are relative to the arithmetic mean.
In R we can use the `cor` function to calculate Pearson’s correlation coefficient. For example, the Pearson correlation coefficient between `pressure` and `wind` is given by:
```
cor(storms$wind, storms$pressure)
```
```
## [1] -0.9254911
```
This is negative, indicating wind speed tends to decline with increasing pressure. It is also quite close to \-1, indicating that this association is very strong. We saw this in the [Introduction to **ggplot2**](introduction-to-ggplot2.html#introduction-to-ggplot2) chapter when we plotted atmospheric pressure against wind speed.
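As a quick sanity check on the formula above—this sketch is not part of the original text—we can apply it directly and compare the result with `cor`:
```
# a minimal sketch: compute r_xy 'by hand' from the formula above
x <- storms$wind
y <- storms$pressure
r_xy <- sum((x - mean(x)) / sd(x) * (y - mean(y)) / sd(y)) / (length(x) - 1)
r_xy   # should match cor(x, y) up to rounding
```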
The Pearson’s correlation coefficient must be interpreted with care. Two points are worth noting:
1. Because it is designed to summarise the strength of a **linear** relationship, Pearson’s correlation coefficient will be misleading when this relationship is curved, or even worse, hump\-shaped.
2. Even if the relationship between two variables really is linear, Pearson’s correlation coefficient tells us nothing about the slope (i.e. the steepness) of the relationship.
If those last two statements don’t make immediate sense, take a close look at this figure:
This shows a variety of different relationships between pairs of numeric variables. The numbers in each subplot are the Pearson’s correlation coefficients associated with the pattern. Consider each row:
1. The first row shows a series of linear relationships that vary in their strength and direction. These are all linear in the sense that the general form of the relationship can be described by a straight line. This means that it is appropriate to use Pearson’s correlation coefficient in these cases to quantify the strength of association, i.e. the coefficient is a reliable measure of association.
2. The second row shows a series of linear relationships that vary in their direction, but are all examples of a perfect relationship—we can predict the exact value of one variable given knowledge of the other. What these plots show is that Pearson’s correlation coefficient measures the strength of association without telling us anything about the steepness of the relationship.
3. The third row shows a series of different cases where it is definitely inappropriate to use Pearson’s correlation coefficient. In each case, the variables are related to one another in some way, yet the correlation coefficient is always 0\. Pearson’s correlation coefficient completely fails to flag the relationship because it is not even close to being linear.
#### 22\.1\.1\.1 Other measures of correlation
What should we do if we think the relationship between two variables is non\-linear? We should not use Pearson’s correlation coefficient to measure association in this case. Instead, we can calculate something called a **rank correlation**. The idea is quite simple. Instead of working with the actual values of each variable we ‘rank’ them, i.e. we sort each variable from lowest to highest and then assign the labels ‘first’, ‘second’, ‘third’, etc. to different observations. Measures of rank correlation are based on a comparison of the resulting ranks. The two most popular are Spearman’s \\(\\rho\\) (‘rho’) and Kendall’s \\(\\tau\\) (‘tau’).
We won’t examine the mathematical formula for each of these as they don’t really help us understand them much. We do need to know how to interpret rank correlation coefficients though. The key point is that both coefficients behave in a very similar way to Pearson’s correlation coefficient. They take a value of 0 if the ranks are uncorrelated, and a value of \+1 or \-1 if they are perfectly related. Again, the sign tells us about the direction of the association.
We can calculate both rank correlation coefficients in R using the `cor` function again. This time we need to set the `method` argument to the appropriate value: `method = "kendall"` or `method = "spearman"`. For example, the Spearman’s \\(\\rho\\) and Kendall’s \\(\\tau\\) measures of correlation between `pressure` and `wind` are given by:
```
cor(storms$wind, storms$pressure, method = "kendall")
```
```
## [1] -0.7627645
```
```
cor(storms$wind, storms$pressure, method = "spearman")
```
```
## [1] -0.9025831
```
These roughly agree with the Pearson correlation coefficient, though Kendall’s \\(\\tau\\) seems to suggest that the relationship is weaker. Kendall’s \\(\\tau\\) is often smaller than Spearman’s \\(\\rho\\) correlation. Although Spearman’s \\(\\rho\\) is used more widely, it is more sensitive to errors and discrepancies in the data than Kendall’s \\(\\tau\\).
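One way to see what ‘rank correlation’ means in practice—again, a sketch that isn’t in the original text—is to note that Spearman’s \\(\\rho\\) is just Pearson’s coefficient computed on the ranks of each variable:
```
# a sketch: Spearman's rho is Pearson's coefficient applied to ranks
cor(rank(storms$wind), rank(storms$pressure))
# this should reproduce cor(..., method = "spearman") above
```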
### 22\.1\.2 Graphical summaries
Correlation coefficients give us a simple way to summarise associations between numeric variables. They are limited though, because a single number can never summarise every aspect of the relationship between two variables. This is why we always visualise the relationship between two variables. The standard graph for displaying associations among numeric variables is a scatter plot, using horizontal and vertical axes to plot two variables as a series of points. We saw how to construct scatter plots using **ggplot2** in the [Introduction to **ggplot2**](introduction-to-ggplot2.html#introduction-to-ggplot2) chapter so we won’t step through the details again.
There are a few other options beyond the standard scatter plot. Specifically, **ggplot2** provides a couple of different `geom_XX` functions for producing a visual summary of relationships between numeric variables in situations where over\-plotting of points is obscuring the relationship. One such example is the `geom_count` function:
```
ggplot(storms, aes(x = pressure, y = wind)) +
geom_count(alpha = 0.5)
```
The `geom_count` function is used to construct a layer in which data are first grouped into sets of identical observations. The number of cases in each group is counted, and this number (‘n’) is used to scale the size of points. Take note—it may be necessary to round numeric variables first (e.g. via `mutate`) to make a usable plot if they aren’t already discrete.
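If the variables were truly continuous we would need that rounding step. Here’s a hedged sketch of what it might look like (the 5 mbar bin size is arbitrary, and we’re assuming **dplyr** is loaded so that `mutate` is available):
```
# a sketch: round pressure to the nearest 5 mbar before using geom_count
storms_rounded <- mutate(storms, pressure = round(pressure / 5) * 5)
ggplot(storms_rounded, aes(x = pressure, y = wind)) +
  geom_count(alpha = 0.5)
```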
Two further options for dealing with excessive over\-plotting are the `geom_bin_2d` and `geom_hex` functions. The `geom_bin_2d` function divides the plane into rectangles, counts the number of cases in each rectangle, and then uses the number of cases to assign the rectangle’s fill colour. The `geom_hex` function does essentially the same thing, but instead divides the plane into regular hexagons. Note that `geom_hex` relies on the **hexbin** package, so this needs to be installed to use it. Here’s an example of `geom_hex` in action:
```
ggplot(storms, aes(x = pressure, y = wind)) +
geom_hex(bins = 25)
```
Notice that this looks exactly like the **ggplot2** code for making a scatter plot, other than the fact that we’re now using `geom_hex` in place of `geom_point`.
22\.2 Associations between categorical variables
------------------------------------------------
### 22\.2\.1 Numerical summaries
Numerically exploring associations between pairs of categorical variables is not as simple as the numeric variable case. The general question we need to address is, “do different **combinations** of categories seem to be under or over represented?” We need to understand which combinations are common and which are rare. The simplest thing we can do is ‘cross\-tabulate’ the number of occurrences of each combination. The resulting table is called a **contingency table**. The counts in the table are sometimes referred to as frequencies.
The `xtabs` function (\= ‘cross\-tabulation’) can do this for us. For example, the frequencies of each storm category and month combination is given by:
```
xtabs(~ type + month, data = storms)
```
```
## month
## type 6 7 8 9 10 11 12
## Extratropical 27 38 23 149 129 42 4
## Hurricane 3 31 300 383 152 25 2
## Tropical Depression 22 59 150 156 84 42 0
## Tropical Storm 31 123 247 259 204 61 1
```
The first argument sets the variables to cross\-tabulate. The `xtabs` function uses R’s special formula language, so we can’t leave out that `~` at the beginning. After that, we just provide the list of variables to cross\-tabulate, separated by the `+` sign. The second argument tells the function which data set to use. This isn’t a **dplyr** function, so the first argument is *not* the data for once.
What does this tell us? It shows us how many observations are associated with each combination of values of `type` and `month`. We have to stare at the numbers for a while, but eventually it should be apparent that hurricanes and tropical storms are more common in August and September (month ‘8’ and ‘9’). More severe storms occur in the middle of the storm season—perhaps not all that surprising.
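Raw counts can be hard to compare across months that differ in the total number of observations. A small sketch (not in the original text) converts the table to within\-month proportions using base R’s `prop.table`:
```
# a sketch: proportion of each storm type within each month
storm_table <- xtabs(~ type + month, data = storms)
prop.table(storm_table, margin = 2)  # margin = 2 scales each column (month) to sum to 1
```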
If both variables are ordinal we can also calculate a descriptive statistic of association from a contingency table. It makes no sense to do this for nominal variables because their values are not ordered. Pearson’s correlation coefficient is not appropriate here. Instead, we have to use some kind of rank correlation coefficient that accounts for the categorical nature of the data. Spearman’s \\(\\rho\\) and Kendall’s \\(\\tau\\) are designed for numeric data, so they can’t be used either.
One measure of association that *is* appropriate for categorical data is Goodman and Kruskal’s \\(\\gamma\\) (“gamma”). This behaves just like the other correlation coefficients we’ve looked at: it takes a value of 0 if the categories are uncorrelated, and a value of \+1 or \-1 if they are perfectly associated. The sign tells us about the direction of the association. Unfortunately, there isn’t a base R function to compute Goodman and Kruskal’s \\(\\gamma\\), so we have to use a function from one of the packages that implements it (e.g. the `GKgamma` function in the `vcdExtra` package) if we need it.
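As a hedged sketch of how that might look—assuming **vcdExtra** is installed and that its `GKgamma` function accepts a two\-way table of counts, which is our understanding of that package:
```
# a sketch, assuming vcdExtra is installed and GKgamma takes a two-way table
library(vcdExtra)
# the rows must be in severity order for the sign of gamma to be meaningful
storm_names <- c("Tropical Depression", "Extratropical", "Tropical Storm", "Hurricane")
storms_ord <- storms
storms_ord$type <- factor(storms_ord$type, levels = storm_names)
GKgamma(xtabs(~ type + month, data = storms_ord))
```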
### 22\.2\.2 Graphical summaries
Bar charts can be used to summarise the relationship between two categorical variables. The basic idea is to produce a separate bar for *each combination* of categories in the two variables. The lengths of these bars are proportional to the values they represent, which is either the raw counts or the proportions in each category combination. This is the same information displayed in a contingency table. Using **ggplot2** to display this information is not very different from producing a bar graph to summarise a single categorical variable.
Let’s do this for the `type` and `year` variables in `storms`, breaking the process up into two steps. As always, we start by using the `ggplot` function to construct a graphical object containing the necessary default data and aesthetic mapping:
```
bar_plt <- ggplot(storms, aes(x = year, fill = type))
```
Notice that we’ve included two aesthetic mappings. We mapped the `year` variable to the x axis, and the storm category (`type`) to the fill colour. We want to display information from two categorical variables, so we have to define two aesthetic mappings. The next step is to add a layer using `geom_bar` (we want a bar plot) and display the results:
```
bar_plt <- bar_plt + geom_bar()
bar_plt
```
This is called a stacked bar chart. Each year has its own bar (`x = year`), and each bar has been divided up into different coloured segments, the length of which is determined by the number of observations associated with each storm type in that year (`fill = type`).
We have all the right information in this graph, but it could be improved. Look at the labels on the x axis. Not every bar is labelled. This occurs because `year` is stored as a numeric vector in `storms`, yet we are treating it as a categorical variable in this analysis—**ggplot2** has no way of knowing this of course. We need a new trick here. We need to convert `year` to something that won’t be interpreted as a number. One way to do this is to convert `year` to a character vector[12](#fn12). Once it’s in this format, **ggplot2** will assume that `year` is a categorical variable.
We can convert a numeric vector to a character vector with the `as.character` function. We could transform `year` inside `aes` ‘on the fly’, or alternatively, we can use the `mutate` function to construct a new version of `storms` containing the character version of `year`. We’ll do the latter so that we can keep reusing the new data frame:
```
storms_alter <- mutate(storms, year = as.character(year))
```
We must load and attach **dplyr** to make this work. The new data frame `storms_alter` is identical to storms, except that `year` is now a character vector.
Now we just need to construct and display the **ggplot2** object again using this new data frame:
```
ggplot(storms_alter, aes(x = year, fill = type)) +
geom_bar()
```
That’s an improvement. However, the ordering of the storm categories is not ideal because the order in which the different groups are presented does not reflect the ordinal scale we have in mind for storm category. We saw this same problem in the [Exploring categorical variables](exploring-categorical-variables.html#exploring-categorical-variables) chapter—**ggplot2** does not ‘know’ the correct order of the `type` categories. Time for a new trick.
We need to somehow embed the information about the required category order of `type` into our data. It turns out that R has a special kind of augmented vector, called a **factor**, that’s designed to do just this. To make use of this, we need to know how to convert something into a factor. We use the `factor` function, setting its `levels` argument to be a vector of category names in the correct order:
```
# 1. make a vector of storm type names in the required order
storm_names <- c("Tropical Depression", "Extratropical", "Tropical Storm", "Hurricane")
# 2. now convert year to a character and type to a factor
storms_alter <-
storms %>%
mutate(year = as.character(year),
type = factor(type, levels = storm_names))
```
This may look a little confusing at first glance, but all we did here was create a vector of ordered category names called `storm_names`, and then use mutate to change `type` to a factor using the ordering implied by `storm_names`. Just be careful with the spelling—the values in `storm_names` must match those in `type`. We did this with **dplyr**’s `mutate` function, again calling the modified data set `storms_alter`. Once we’ve applied the factor trick we can remake the bar chart:
```
# 3. make the bar plot
ggplot(storms_alter, aes(x = year, fill = type)) +
geom_bar()
```
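A quick way to confirm the trick worked (this check isn’t in the original text) is to inspect the levels attribute of the new factor, which should come back in the order we specified:
```
# check the stored ordering of the factor levels
levels(storms_alter$type)
```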
#### Factors
Factors are very useful. They crop up all the time in R. Unfortunately, they are also a pain to work with and a frequent source of errors. A complete treatment of factors would require a whole new chapter, so to save space, we’ve just shown one way to work with them via the `factor` function. This is enough to solve the reordering trick required to get **ggplot2** to work the way we want it to, but there’s a lot more to learn about factors.
A stacked bar chart is the default produced by `geom_bar`. One problem with this kind of chart is that it can be hard to spot associations among the two categorical variables. If we want to know how they are associated it’s often better to plot the counts for each combination of categories side\-by\-side. This isn’t hard to do. We switch to a side\-by\-side bar chart by assigning a value of `"dodge"` to the `position` argument of `geom_bar`:
```
ggplot(storms_alter, aes(x = year, fill = type)) +
geom_bar(position = "dodge") +
labs(x = "Year",
y = "Number of Observations",
fill = "Storm Category")
```
The `position = "dodge"` argument says that we want the bars to ‘dodge’ one another along the x axis so that they are displayed next to one another. We snuck in one more tweak. Remember, we can use `labs` to set the labels of any aesthetic mapping we’ve defined—we used it here to set the label of the aesthetic mapping associated with the fill colour and the x/y axes.
This final figure shows that on average, storm systems spend more time as hurricanes and tropical storms than tropical depressions or extratropical systems. Other than that, the story is a little messy. For example, 1997 was an odd year, with few storm events and relatively few hurricanes.
22\.3 Categorical\-numerical associations
-----------------------------------------
We’ve seen how to summarise the relationship between a pair of variables when they are of the same type: numeric vs. numeric or categorical vs. categorical. The obvious next question is, “How do we display the relationship between a categorical and numeric variable?” As usual, there are a range of different options.
### 22\.3\.1 Descriptive statistics
Numerical summaries can be constructed by taking the various ideas we’ve explored for numeric variables (means, medians, etc), and applying them to subsets of data defined by the values of the categorical variable. This is easy to do with the **dplyr** `group_by` and `summarise` pipeline. We won’t review it here though, because we’re going to do this in the next chapter.
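As a taste of what’s coming—this is a sketch rather than the next chapter’s code, and it assumes **dplyr** is loaded—the pattern looks like this:
```
# a sketch: summarise wind speed within each storm category
storms %>%
  group_by(type) %>%
  summarise(mean_wind   = mean(wind),
            median_wind = median(wind),
            sd_wind     = sd(wind))
```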
### 22\.3\.2 Graphical summaries
The most common visualisation for exploring categorical\-numerical relationships is the ‘box and whiskers plot’ (or just ‘box plot’). It’s easier to understand these plots once we’ve seen an example. To construct a box and whiskers plot we need to set ‘x’ and ‘y’ axis aesthetics for the categorical and numeric variable, and we use the `geom_boxplot` function to add the appropriate layer. Let’s examine the relationship between storm category and atmospheric pressure:
```
ggplot(storms_alter, aes(x = type, y = pressure)) +
geom_boxplot() +
xlab("Storm category") + ylab("Pressure (mbar)")
```
It’s fairly obvious why this is called a box and whiskers plot. Here’s a quick overview of the component parts of each box and whiskers:
* The horizontal line inside the box is the sample median. This is our measure of central tendency. It allows us to compare the most likely value of the numeric variable across the different categories.
* The boxes display the interquartile range (IQR) of the numeric variable in each category, i.e. the middle 50% of observations in each group according to their rank. This allows us to compare the spread of the numeric values in each category.
* The vertical lines that extend above and below each box are the “whiskers”. The interpretation of these depends on which kind of box plot we are making. By default, **ggplot2** produces a traditional Tukey box plot. Each whisker is drawn from each end of the box (the upper and lower quartiles) to a well\-defined point. To find where the upper whisker ends we have to find the largest observation that is no more than 1\.5 times the IQR away from the upper quartile. The lower whisker ends at the smallest observation that is no more than 1\.5 times the IQR away from the lower quartile.
* Any points that do not fall inside the whiskers are plotted as an individual point. These may be outliers, although they could also be perfectly consistent with the wider distribution.
The resulting plot compactly summarises the distribution of the numeric variable within each of the categories. We can see information about the central tendency, dispersion and skewness of each distribution. In addition, we can get a sense of whether there are potential outliers by noting the presence of individual points outside the whiskers.
What does the above plot tell us about atmospheric pressure and storm type? It shows that `pressure` tends to display negative skew in all four storm categories, though the skewness seems to be higher in tropical storms and hurricanes. The pressure distributions of tropical depressions, tropical storms, and hurricanes overlap, though not by much. The extratropical storm system seems to be something ‘in between’ a tropical storm and a tropical depression.
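To make the whisker rule described above concrete, here is a minimal sketch (not from the original text) that computes the Tukey whisker limits for the hurricane category by hand:
```
# a sketch: Tukey whisker limits for pressure in the hurricane category
press <- storms_alter$pressure[storms_alter$type == "Hurricane"]
quartiles <- quantile(press, probs = c(0.25, 0.75), na.rm = TRUE)
iqr <- quartiles[2] - quartiles[1]
# whiskers end at the most extreme observations within 1.5 * IQR of the box
upper_whisker <- max(press[press <= quartiles[2] + 1.5 * iqr], na.rm = TRUE)
lower_whisker <- min(press[press >= quartiles[1] - 1.5 * iqr], na.rm = TRUE)
c(lower = lower_whisker, upper = upper_whisker)
```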
### 22\.3\.3 Alternatives to box and whiskers plots
Box and whiskers plots are a good choice for exploring categorical\-numerical relationships. They provide a lot of information about how the distribution of the numeric variable changes across categories. Sometimes we may want to squeeze even more information about these distributions into a plot. One way to do this is to make multiple histograms (or dot plots, if we don’t have much data).
We already know how to make a histogram, and we have seen how aesthetic properties such as `colour` and `fill` are used to distinguish different categories of a variable in a layer. This suggests that we can overlay more than one histogram on a single plot. Let’s use this idea to see how the sample distribution of wind speed (`wind`) differs among the storm classes:
```
ggplot(storms_alter, aes(x = wind, fill = type)) +
geom_histogram(position = "identity", alpha = 0.6, binwidth = 5) +
xlab("Wind Speed (mph)")
```
We define two mappings: the continuous variable (`wind`) is mapped to the x axis, and the categorical variable (`type`) is mapped to the fill colour. Notice that we also set the `position` argument to `"identity"`. This tells **ggplot2** not to stack the histograms on top of one another. Instead, they are allowed to overlap. It’s for this reason that we also made them semi\-transparent by setting the `alpha` argument.
Plotting several histograms in one layer like this places a lot of information in one plot, but it can be hard to make sense of this when the histograms overlap a lot. If the overlapping histograms are too difficult to interpret we might consider producing a separate one for each category. We’ve already seen a quick way to do this. Faceting works well here:
```
ggplot(storms_alter, aes(x = wind)) +
geom_histogram(alpha = 0.8, binwidth = 5) +
xlab("Wind Speed (mph)") +
facet_wrap(~ type, ncol = 4)
```
We can see quite a lot in this plot and the last. The tropical depression, tropical storm, and hurricane histograms do not overlap (with a few minor exceptions). These three storm categories are obviously defined with respect to wind speed. Perhaps they represent different phases of one underlying physical phenomenon? The extratropical storm system seems to be something altogether different. In fact, an extratropical storm is a different kind of weather system from the other three. It can turn into a tropical depression (winds \< 39 mph) or a subtropical storm (winds \> 39 mph), but only a subtropical storm can turn into a hurricane.
We’re oversimplifying, but the point is that the simple ordinal scale that we envisaged for the `type` variable is probably not very sensible. It’s not really true that an extratropical system is “greater than” a tropical depression (or vice versa). We should probably have characterised `type` as a nominal variable, although this designation ignores the fact that three of the storm types have a clear ordering. The take home message is that we have to understand our data before we start to really analyse it. This is why exploratory data analysis is so important.
22\.1 Associations between numeric variables
--------------------------------------------
### 22\.1\.1 Descriptive statistics
Statisticians have devised various different ways to quantify an association between two numeric variables in a sample. The common measures seek to calculate some kind of **correlation coefficient**. The terms ‘association’ and ‘correlation’ are closely related; so much so that they are often used interchangeably. Strictly speaking correlation has a narrower definition: a correlation is defined by a metric (the ‘correlation coefficient’) that quantifies the degree to which an association tends to a certain pattern.
The most widely used measure of correlation is **Pearson’s correlation coefficient** (also called the Pearson product\-moment correlation coefficient). Pearson’s correlation coefficient is something called the covariance of the two variables, divided by the product of their standard deviations. The mathematical formula for the Pearson’s correlation coefficient applied to a sample is: \\\[
r\_{xy} \= \\frac{1}{N\-1}\\sum\\limits\_{i\=1}^{N}{\\frac{x\_i\-\\bar{x}}{s\_x} \\frac{y\_i\-\\bar{y}}{s\_y}}
\\] We’re using \\(x\\) and \\(y\\) here to refer to each of the variables in the sample. The \\(r\_{xy}\\) denotes the correlation coefficient, \\(s\_x\\) and \\(s\_y\\) denote the standard deviation of each sample, \\(\\bar{x}\\) and \\(\\bar{y}\\) are the sample means, and \\(N\\) is the sample size.
Remember, a correlation coefficient quantifies the degree to which an association tends to *a certain pattern*. In the case of Pearson’s correlation coefficient, the coefficient is designed to summarise the strength of a **linear** (i.e. ‘straight line’) association. We’ll return to this idea in a moment.
Pearson’s correlation coefficient takes a value of 0 if two variables are uncorrelated, and a value of \+1 or \-1 if they are perfectly related. ‘Perfectly related’ means we can predict the exact value of one variable given knowledge of the other. A positive value indicates that high values in one variable is associated with high values of the second. A negative value indicates that high values of one variable is associated with low values of the second. The words ‘high’ and ‘low’ are relative to the arithmetic mean.
In R we can use the `cor` function to calculate Pearson’s correlation coefficient. For example, the Pearson correlation coefficient between `pressure` and `wind` is given by:
```
cor(storms$wind, storms$pressure)
```
```
## [1] -0.9254911
```
This is negative, indicating wind speed tends to decline with increasing pressure. It is also quite close to \-1, indicating that this association is very strong. We saw this in the [Introduction to **ggplot2**](introduction-to-ggplot2.html#introduction-to-ggplot2) chapter when we plotted atmospheric pressure against wind speed.
The Pearson’s correlation coefficient must be interpreted with care. Two points are worth noting:
1. Because it is designed to summarise the strength of a **linear** relationship, Pearson’s correlation coefficient will be misleading when this relationship is curved, or even worse, hump\-shaped.
2. Even if the relationship between two variables really is linear, Pearson’s correlation coefficient tells us nothing about the slope (i.e. the steepness) of the relationship.
If those last two statements don’t make immediate sense, take a close look at this figure:
This shows a variety of different relationships between pairs of numeric variables. The numbers in each subplot are the Pearson’s correlation coefficients associated with the pattern. Consider each row:
1. The first row shows a series of linear relationships that vary in their strength and direction. These are all linear in the sense that the general form of the relationship can be described by a straight line. This means that it is appropriate to use Pearson’s correlation coefficient in these cases to quantify the strength of association, i.e. the coefficient is a reliable measure of association.
2. The second row shows a series of linear relationships that vary in their direction, but are all examples of a perfect relationship—we can predict the exact value of one variable given knowledge of the other. What these plots show is that Pearson’s correlation coefficient measures the strength of association without telling us anything the steepness of the relationship.
3. The third row shows a series of different cases where it is definitely inappropriate to Pearson’s correlation coefficient. In each case, the variables are related to one another in some way, yet the correlation coefficient is always 0\. Pearson’s correlation coefficient completely fails to flag the relationship because it is not even close to being linear.
#### 22\.1\.1\.1 Other measures of correlation
What should we do if we think the relationship between two variables is non\-linear? We should not use Pearson correlation coefficient to measure association in this case. Instead, we can calculate something called a **rank correlation**. The idea is quite simple. Instead of working with the actual values of each variable we ‘rank’ them, i.e. we sort each variable from lowest to highest and the assign the labels ‘first, ’second’, ‘third’, etc. to different observations. Measures of rank correlation are based on a comparison of the resulting ranks. The two most popular are Spearman’s \\(\\rho\\) (‘rho’) and Kendall’s \\(\\tau\\) (‘tau’).
We won’t examine the mathematical formula for each of these as they don’t really help us understand them much. We do need to know how to interpret rank correlation coefficients though. The key point is that both coefficients behave in a very similar way to Pearson’s correlation coefficient. They take a value of 0 if the ranks are uncorrelated, and a value of \+1 or \-1 if they are perfectly related. Again, the sign tells us about the direction of the association.
We can calculate both rank correlation coefficients in R using the `cor` function again. This time we need to set the `method` argument to the appropriate value: `method = "kendall"` or `method = "spearman"`. For example, the Spearman’s \\(\\rho\\) and Kendall’s \\(\\tau\\) measures of correlation between `pressure` and `wind` are given by:
```
cor(storms$wind, storms$pressure, method = "kendall")
```
```
## [1] -0.7627645
```
```
cor(storms$wind, storms$pressure, method = "spearman")
```
```
## [1] -0.9025831
```
These roughly agree with the Pearson correlation coefficient, though Kendall’s \\(\\tau\\) seems to suggest that the relationship is weaker. Kendall’s \\(\\tau\\) is often smaller than Spearman’s \\(\\rho\\) correlation. Although Spearman’s \\(\\rho\\) is used more widely, it is more sensitive to errors and discrepancies in the data than Kendall’s \\(\\tau\\).
### 22\.1\.2 Graphical summaries
Correlation coefficients give us a simple way to summarise associations between numeric variables. They are limited though, because a single number can never summarise every aspect of the relationship between two variables. This is why we always visualise the relationship between two variables. The standard graph for displaying associations among numeric variables is a scatter plot, using horizontal and vertical axes to plot two variables as a series of points. We saw how to construct scatter plots using **ggplot2** in the \[Introduction to ggplot2] chapter so we won’t step through the details again.
There are a few other options beyond the standard scatter plot. Specifically, **ggplot2** provides a couple of different `geom_XX` functions for producing a visual summary of relationships between numeric variables in situations where over\-plotting of points is obscuring the relationship. One such example is the `geom_count` function:
```
ggplot(storms, aes(x = pressure, y = wind)) +
geom_count(alpha = 0.5)
```
The `geom_count` function is used to construct a layer in which data are first grouped into sets of identical observations. The number of cases in each group is counted, and this number (‘n’) is used to scale the size of points. Take note—it may be necessary to round numeric variables first (e.g. via `mutate`) to make a usable plot if they aren’t already discrete.
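For example, if wind speed had been recorded to several decimal places, very few observations would be exactly identical and `geom_count` would have little to count. A minimal sketch of the rounding idea is shown below; the choice to round to the nearest 5 mph is purely illustrative.
```
# round wind speed to the nearest 5 mph so that identical observations exist,
# then let geom_count scale point size by the number of cases at each location
storms_rounded <- mutate(storms, wind = round(wind / 5) * 5)
ggplot(storms_rounded, aes(x = pressure, y = wind)) +
  geom_count(alpha = 0.5)
```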
Two further options for dealing with excessive over\-plotting are the `geom_bin_2d` and `geom_hex` functions. The `geom_bin_2d` function divides the plane into rectangles, counts the number of cases in each rectangle, and then uses the number of cases to assign the rectangle’s fill colour. The `geom_hex` function does essentially the same thing, but instead divides the plane into regular hexagons. Note that `geom_hex` relies on the **hexbin** package, so this needs to be installed to use it. Here’s an example of `geom_hex` in action:
```
ggplot(storms, aes(x = pressure, y = wind)) +
geom_hex(bins = 25)
```
Notice that this looks exactly like the **ggplot2** code for making a scatter plot, other than the fact that we’re now using `geom_hex` in place of `geom_point`.
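The `geom_bin_2d` version is almost identical; only the geom changes. Here is a sketch using the same number of bins (in older versions of **ggplot2** the function is spelled `geom_bin2d`):
```
ggplot(storms, aes(x = pressure, y = wind)) +
  geom_bin_2d(bins = 25)
```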
22\.2 Associations between categorical variables
------------------------------------------------
### 22\.2\.1 Numerical summaries
Numerically exploring associations between pairs of categorical variables is not as simple as the numeric variable case. The general question we need to address is, “do different **combinations** of categories seem to be under or over represented?” We need to understand which combinations are common and which are rare. The simplest thing we can do is ‘cross\-tabulate’ the number of occurrences of each combination. The resulting table is called a **contingency table**. The counts in the table are sometimes referred to as frequencies.
The `xtabs` function (\= ‘cross\-tabulation’) can do this for us. For example, the frequencies of each storm category and month combination are given by:
```
xtabs(~ type + month, data = storms)
```
```
## month
## type 6 7 8 9 10 11 12
## Extratropical 27 38 23 149 129 42 4
## Hurricane 3 31 300 383 152 25 2
## Tropical Depression 22 59 150 156 84 42 0
## Tropical Storm 31 123 247 259 204 61 1
```
The first argument sets the variables to cross\-tabulate. The `xtabs` function uses R’s special formula language, so we can’t leave out that `~` at the beginning. After that, we just provide the list of variables to cross\-tabulate, separated by the `+` sign. The second argument tells the function which data set to use. This isn’t a **dplyr** function, so the first argument is *not* the data for once.
What does this tell us? It shows us how many observations are associated with each combination of values of `type` and `month`. We have to stare at the numbers for a while, but eventually it should be apparent that hurricanes and tropical storms are more common in August and September (month ‘8’ and ‘9’). More severe storms occur in the middle of the storm season—perhaps not all that surprising.
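The raw counts can be hard to compare because some months contain many more observations than others. One option, sketched below, is to convert the counts to proportions with base R’s `prop.table` function; setting `margin = 2` rescales each month’s column so that it sums to one.
```
# proportions of each storm type within each month
storm_tab <- xtabs(~ type + month, data = storms)
round(prop.table(storm_tab, margin = 2), 2)
```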
If both variables are ordinal we can also calculate a descriptive statistic of association from a contingency table. It makes no sense to do this for nominal variables because their values are not ordered. Pearson’s correlation coefficient is not appropriate here. Instead, we have to use some kind of rank correlation coefficient that accounts for the categorical nature of the data. Spearman’s \\(\\rho\\) and Kendall’s \\(\\tau\\) are designed for numeric data, so they can’t be used either.
One measure of association that *is* appropriate for categorical data is Goodman and Kruskal’s \\(\\gamma\\) (“gamma”). This behaves just like the other correlation coefficients we’ve looked at: it takes a value of 0 if the categories are uncorrelated, and a value of \+1 or \-1 if they are perfectly associated. The sign tells us about the direction of the association. Unfortunately, there isn’t a base R function to compute Goodman and Kruskal’s \\(\\gamma\\), so we have to use a function from one of the packages that implements it (e.g. the `GKgamma` function in the `vcdExtra` package) if we need it.
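Here is a minimal sketch of how that might look, assuming the **vcdExtra** package has been installed. The `GKgamma` function takes a contingency table of the kind produced by `xtabs`. Keep in mind that the result is only meaningful if the rows and columns are arranged in their natural order, so in practice we would first reorder the `type` categories (we show how to do that with a factor later in this chapter).
```
# Goodman and Kruskal's gamma for the storm type x month table
library(vcdExtra)
GKgamma(xtabs(~ type + month, data = storms))
```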
### 22\.2\.2 Graphical summaries
Bar charts can be used to summarise the relationship between two categorical variables. The basic idea is to produce a separate bar for *each combination* of categories in the two variables. The lengths of these bars are proportional to the values they represent, which is either the raw counts or the proportions in each category combination. This is the same information displayed in a contingency table. Using **ggplot2** to display this information is not very different from producing a bar graph to summarise a single categorical variable.
Let’s do this for the `type` and `year` variables in `storms`, breaking the process up into two steps. As always, we start by using the `ggplot` function to construct a graphical object containing the necessary default data and aesthetic mapping:
```
bar_plt <- ggplot(storms, aes(x = year, fill = type))
```
Notice that we’ve included two aesthetic mappings. We mapped the `year` variable to the x axis, and the storm category (`type`) to the fill colour. We want to display information from two categorical variables, so we have to define two aesthetic mappings. The next step is to add a layer using `geom_bar` (we want a bar plot) and display the results:
```
bar_plt <- bar_plt + geom_bar()
bar_plt
```
This is called a stacked bar chart. Each year has its own bar (`x = year`), and each bar has been divided up into different coloured segments, the length of which is determined by the number of observations associated with each storm type in that year (`fill = type`).
We have all the right information in this graph, but it could be improved. Look at the labels on the x axis. Not every bar is labelled. This occurs because `year` is stored as a numeric vector in `storms`, yet we are treating it as a categorical variable in this analysis—**ggplot2** has no way of knowing this of course. We need a new trick here. We need to convert `year` to something that won’t be interpreted as a number. One way to do this is to convert `year` to a character vector[12](#fn12). Once it’s in this format, **ggplot2** will assume that `year` is a categorical variable.
We can convert a numeric vector to a character vector with the `as.character` function. We could transform `year` inside `aes` ‘on the fly’, or alternatively, we can use the `mutate` function to construct a new version of `storms` containing the character version of `year`. We’ll do the latter so that we can keep reusing the new data frame:
```
storms_alter <- mutate(storms, year = as.character(year))
```
We must load and attach **dplyr** to make this work. The new data frame `storms_alter` is identical to `storms`, except that `year` is now a character vector.
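For reference, the ‘on the fly’ version mentioned above would look something like the sketch below. It produces the same kind of plot without creating a new data frame, at the cost of a less readable default x axis label.
```
# convert year to a character vector inside aes, without modifying storms
ggplot(storms, aes(x = as.character(year), fill = type)) +
  geom_bar()
```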
Now we just need to construct and display the **ggplot2** object again using this new data frame:
```
ggplot(storms_alter, aes(x = year, fill = type)) +
geom_bar()
```
That’s an improvement. However, the ordering of the storm categories is not ideal because the order in which the different groups are presented does not reflect the ordinal scale we have in mind for storm category. We saw this same problem in the [Exploring categorical variables](exploring-categorical-variables.html#exploring-categorical-variables) chapter—**ggplot2** does not ‘know’ the correct order of the `type` categories. Time for a new trick.
We need to somehow embed the information about the required category order of `type` into our data. It turns out that R has a special kind of augmented vector, called a **factor**, that’s designed to do just this. To make use of this we need to know how to convert something into a factor. We do this with the `factor` function, setting its `levels` argument to be a vector of category names in the correct order:
```
# 1. make a vector of storm type names in the required order
storm_names <- c("Tropical Depression", "Extratropical", "Tropical Storm", "Hurricane")
# 2. now convert year to a character and type to a factor
storms_alter <-
storms %>%
mutate(year = as.character(year),
type = factor(type, levels = storm_names))
```
This may look a little confusing at first glance, but all we did here was create a vector of ordered category names called `storm_names`, and then use mutate to change `type` to a factor using the ordering implied by `storm_names`. Just be careful with the spelling—the values in `storm_names` must match those in `type`. We did this with **dplyr**’s `mutate` function, again calling the modified data set `storms_alter`. Once we’ve applied the factor trick we can remake the bar chart:
```
# 3. make the bar plot
ggplot(storms_alter, aes(x = year, fill = type)) +
geom_bar()
```
#### Factors
Factors are very useful. They crop up all the time in R. Unfortunately, they are also a pain to work with and a frequent source of errors. A complete treatment of factors would require a whole new chapter, so to save space, we’ve just shown one way to work with them via the `factor` function. This is enough to solve the reordering trick required to get **ggplot2** to work the way we want it to, but there’s a lot more to learn about factors.
A stacked bar chart is the default produced by `geom_bar`. One problem with this kind of chart is that it can be hard to spot associations among the two categorical variables. If we want to know how they are associated it’s often better to plot the counts for each combination of categories side\-by\-side. This isn’t hard to do. We switch to a side\-by\-side bar chart by assigning a value of `"dodge"` to the `position` argument of `geom_bar`:
```
ggplot(storms_alter, aes(x = year, fill = type)) +
geom_bar(position = "dodge") +
labs(x = "Year",
y = "Number of Observations",
fill = "Storm Category")
```
The `position = "dodge"` argument says that we want the bars to ‘dodge’ one another along the x axis so that they are displayed next to one another. We snuck in one more tweak. Remember, we can use `labs` to set the labels of any aesthetic mapping we’ve defined—we used it here to set the label of the aesthetic mapping associated with the fill colour and the x/y axes.
This final figure shows that on average, storm systems spend more time as hurricanes and tropical storms than tropical depressions or extratropical systems. Other than that, the story is a little messy. For example, 1997 was an odd year, with few storm events and relatively few hurricanes.
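One last variation worth knowing about: if the quantity of interest is the proportion of each storm type within a year, rather than the raw counts, we can set `position = "fill"` instead. The sketch below stacks the bars as before but rescales each one so that it sums to one.
```
ggplot(storms_alter, aes(x = year, fill = type)) +
  geom_bar(position = "fill") +
  labs(x = "Year", y = "Proportion of Observations", fill = "Storm Category")
```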
22\.3 Categorical\-numerical associations
-----------------------------------------
We’ve seen how to summarise the relationship between a pair of variables when they are of the same type: numeric vs. numeric or categorical vs. categorical. The obvious next question is, “How do we display the relationship between a categorical and numeric variable?” As usual, there are a range of different options.
### 22\.3\.1 Descriptive statistics
Numerical summaries can be constructed by taking the various ideas we’ve explored for numeric variables (means, medians, etc), and applying them to subsets of data defined by the values of the categorical variable. This is easy to do with the **dplyr** `group_by` and `summarise` pipeline. We won’t review it here though, because we’re going to do this in the next chapter.
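That said, here is a very small taste of what the pipeline looks like, using the `storms_alter` data frame created earlier in this chapter; the details are covered properly in the next chapter.
```
# mean and standard deviation of wind speed within each storm category
storms_alter %>%
  group_by(type) %>%
  summarise(mean_wind = mean(wind), sd_wind = sd(wind))
```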
### 22\.3\.2 Graphical summaries
The most common visualisation for exploring categorical\-numerical relationships is the ‘box and whiskers plot’ (or just ‘box plot’). It’s easier to understand these plots once we’ve seen an example. To construct a box and whiskers plot we need to set ‘x’ and ‘y’ axis aesthetics for the categorical and numeric variable, and we use the `geom_boxplot` function to add the appropriate layer. Let’s examine the relationship between storm category and atmospheric pressure:
```
ggplot(storms_alter, aes(x = type, y = pressure)) +
geom_boxplot() +
xlab("Storm category") + ylab("Pressure (mbar)")
```
It’s fairly obvious why this is called a box and whiskers plot. Here’s a quick overview of the component parts of each box and whiskers:
* The horizontal line inside the box is the sample median. This is our measure of central tendency. It allows us to compare the most likely value of the numeric variable across the different categories.
* The boxes display the interquartile range (IQR) of the numeric variable in each category, i.e. the middle 50% of observations in each group according to their rank. This allows us to compare the spread of the numeric values in each category.
* The vertical lines that extend above and below each box are the “whiskers”. The interpretation of these depends on which kind of box plot we are making. By default, **ggplot2** produces a traditional Tukey box plot. Each whisker is drawn from each end of the box (the upper and lower quartiles) to a well\-defined point. To find where the upper whisker ends we have to find the largest observation that is no more than 1\.5 times the IQR away from the upper quartile. The lower whisker ends at the smallest observation that is no more than 1\.5 times the IQR away from the lower quartile. There is a short numerical sketch of this rule just after this list.
* Any points that do not fall inside the whiskers are plotted as an individual point. These may be outliers, although they could also be perfectly consistent with the wider distribution.
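To make the whisker rule concrete, here is a rough sketch of the calculation for one group (the hurricane observations). The numbers it produces are the *limits* beyond which the whiskers cannot extend; the whiskers themselves end at the most extreme observations that fall inside those limits.
```
# lower and upper quartiles of pressure for hurricane observations only
hurricane_pressure <- storms_alter$pressure[storms_alter$type == "Hurricane"]
q <- quantile(hurricane_pressure, probs = c(0.25, 0.75))
# whisker limits: 1.5 x IQR beyond the lower and upper quartiles
q + c(-1.5, 1.5) * IQR(hurricane_pressure)
```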
The resulting plot compactly summarises the distribution of the numeric variable within each of the categories. We can see information about the central tendency, dispersion and skewness of each distribution. In addition, we can get a sense of whether there are potential outliers by noting the presence of individual points outside the whiskers.
What does the above plot tell us about atmospheric pressure and storm type? It shows that `pressure` tends to display negative skew in all four storm categories, though the skewness seems to be higher in tropical storms and hurricanes. The pressure distributions of tropical depressions, tropical storms, and hurricanes overlap, though not by much. The extratropical storm system seems to be something ‘in between’ a tropical storm and a tropical depression.
### 22\.3\.3 Alternatives to box and whiskers plots
Box and whiskers plots are a good choice for exploring categorical\-numerical relationships. They provide a lot of information about how the distribution of the numeric variable changes across categories. Sometimes we may want to squeeze even more information about these distributions into a plot. One way to do this is to make multiple histograms (or dot plots, if we don’t have much data).
We already know how to make a histogram, and we have seen how aesthetic properties such as `colour` and `fill` are used to distinguish different categories of a variable in a layer. This suggests that we can overlay more than one histogram on a single plot. Let’s use this idea to see how the sample distribution of wind speed (`wind`) differs among the storm classes:
```
ggplot(storms_alter, aes(x = wind, fill = type)) +
geom_histogram(position = "identity", alpha = 0.6, binwidth = 5) +
xlab("Wind Speed (mph)")
```
We define two mappings: the continuous variable (`wind`) is mapped to the x axis, and the categorical variable (`type`) is mapped to the fill colour. Notice that we also set the `position` argument to `"identity"`. This tells **ggplot2** not to stack the histograms on top of one another. Instead, they are allowed to overlap. It’s for this reason that we also made them semi\-transparent by setting the `alpha` argument.
Plotting several histograms in one layer like this places a lot of information in one plot, but it can be hard to make sense of this when the histograms overlap a lot. If the overlapping histograms are too difficult to interpret we might consider producing a separate one for each category. We’ve already seen a quick way to do this. Faceting works well here:
```
ggplot(storms_alter, aes(x = wind)) +
geom_histogram(alpha = 0.8, binwidth = 5) +
xlab("Wind Speed (mph)") +
facet_wrap(~ type, ncol = 4)
```
We can see quite a lot in this plot and the last. The tropical depression, tropical storm, and hurricane histograms do not overlap (with a few minor exceptions). These three storm categories are obviously defined with respect to wind speed. Perhaps they represent different phases of one underlying physical phenomenon? The extratropical storm system seems to be something altogether different. In fact, an extratropical storm is a different kind of weather system from the other three. It can turn into a tropical depression (winds \< 39 mph) or a subtropical storm (winds \> 39 mph), but only a subtropical storm can turn into a hurricane.
We’re oversimplifying, but the point is that the simple ordinal scale that we envisaged for the `type` variable is probably not very sensible. It’s not really true that an extratropical is “greater than” a subtropical depression (or vice versa). We should probably have characterised `type` as a nominal variable, although this designation ignores the fact that three of the storm types have a clear ordering. The take home message is that we have to understand our data before we start to really analyse it. This is why exploratory data analysis is so important.
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/building-in-complexity.html |
Chapter 23 Building in complexity
=================================
23\.1 Multivariate relationships
--------------------------------
We examined various plots that summarise associations between two variables in the last chapter. How do we explore relationships between more than two variables in a single graph? That is, how do we explore **multivariate associations**? It’s difficult to give a concrete answer to this question, because it depends on the question we’re trying to address, the kinds of variables we’re working with, and to a large extent, our creativity and aptitude with an advanced graphing framework like **ggplot2**. Nonetheless, we already know enough about how **ggplot2** works to build some fairly sophisticated visualisations. There are two ways to add additional information to a visualisation:
1. Define aesthetic mappings to allow the properties of a layer to depend on the different values of one or more variable.
2. Use faceting to construct a multipanel plot according to the values of categorical variables.
We can adopt both of these approaches at the same time, meaning we can get information from 4\-6 variables into a single graph if we need to (though this does not always produce an easy\-to\-read plot). We’ve already seen these two approaches used together in the [Introduction to **ggplot2**](introduction-to-ggplot2.html#introduction-to-ggplot2) chapter. We’ll look at one more example to illustrate the approach again.
We want to understand how the sample distribution of wind speed during a storm varies over the course of a year. We also want to visualise how this differs among storm categories. One way to do this is to produce a stacked histogram for each month of the year, where the colour of the stacked histograms changes with respect to storm category. We do this using the `facet_wrap` function to specify separate panels for each month, colouring the histograms by the `type` variable. Stacking the histograms happens by default:
```
ggplot(storms_alter, aes(x = wind, fill = type)) +
geom_histogram(binwidth = 15) +
xlab("Wind Speed (mph)") + ylab("Count") +
labs(fill = "Storm Type") +
facet_wrap(~ month, ncol = 3)
```
Notice that we’re using `storms_alter` from the last chapter, the version of `storms` where the `type` variable was converted to a factor. We haven’t used any new tricks here though. We just set a couple of aesthetics and used faceting to squeeze many histograms onto one plot. It mostly shows that if we’re planning a holiday in Central America we should probably avoid travelling from August to October…
23\.2 Comparing descriptive statistics
--------------------------------------
Until now we have been focusing on plots that display either the raw data (e.g. scatter plots), or a summary of the raw data that captures as much detail as possible (e.g. histograms and box plots). We’ve tended to treat descriptive statistics like the sample mean as ‘a number’ to be examined in isolation. These are often placed in the text of a report or in a table. However, there’s nothing to stop us visualising a set of means (or any other descriptive statistics), and a figure is much more informative than a table. Moreover, many common statistical tools focus on a few aspects of sample distributions (e.g. means and variances) so it’s a good idea to plot these.
We need to know how to construct graphs that display such summaries. Let’s start with a simple question: how does the (arithmetic) mean wind speed vary across different types of storm? One strategy is to produce a bar plot in which the lengths of the bars represent the mean wind speed in each category. There are two different ways to produce this with **ggplot2**.
The first is simplest, but requires a new **ggplot2** trick. When we add a layer using `geom_bar` we have to set two new arguments. The first is `stat = "summary"`. This tells **ggplot2** not to plot the raw values of the y aesthetic mapping, but instead, to construct a summary of the ‘y’ variable. The second argument is `fun.y = mean`. This tells **ggplot2** how to summarise this variable. The part on the right hand side can be any R function that takes a vector of values and returns a single number. Obviously we want the `mean` function. See how this works in practice:
```
ggplot(storms_alter, aes(x = type, y = wind)) +
geom_bar(stat = "summary", fun.y = mean) +
coord_flip() +
xlab("Storm Category") + ylab("Mean Wind Speed (mph)")
```
We also flipped the coordinates here with `coord_flip` to make this a horizontal bar plot. We’ve seen this before in the [Exploring categorical variables](exploring-categorical-variables.html#exploring-categorical-variables) chapter. The only new idea is our use of the `stat` and `fun.y` arguments.
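A small caveat: in newer versions of **ggplot2** (3\.3\.0 onwards) the `fun.y` argument has been renamed to `fun`, so the code above may trigger a deprecation warning. If it does, the equivalent up\-to\-date call is:
```
ggplot(storms_alter, aes(x = type, y = wind)) +
  geom_bar(stat = "summary", fun = mean) +
  coord_flip() +
  xlab("Storm Category") + ylab("Mean Wind Speed (mph)")
```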
The second way to build a bar plot showing some kind of summary statistic breaks the problem into two steps. In the first step we have to calculate whatever it is we want to display, i.e. the category\-specific mean in this case. This information needs to be stored in a data frame or tibble so **dplyr** is the best tool to use for this:
```
storms_sum <-
storms_alter %>%
group_by(type) %>%
summarise(mean_wind = mean(wind))
storms_sum
```
```
## # A tibble: 4 x 2
## type mean_wind
## <fct> <dbl>
## 1 Tropical Depression 27.4
## 2 Extratropical 40.1
## 3 Tropical Storm 47.3
## 4 Hurricane 84.7
```
We used `group_by` and `summarise` to calculate the set of means, which we called `mean_wind`. The second step uses the new data frame (called `storms_sum`) as the default data in a new graphical object, sets x and y aesthetic mappings from `type` and `mean_wind`, and adds a layer with `geom_col`:
```
mean.plt <-
ggplot(storms_sum, aes(x = type, y = mean_wind)) +
geom_col() +
coord_flip() + xlab("Storm Category") + ylab("Mean Wind Speed (mph)")
mean.plt
```
The result is the same as the last plot. Note that we have used `geom_col` instead of `geom_bar` here. Remember here that the default behaviour of `geom_bar` is to count the observations in each category. Using the `geom_col` function tells it that the information in `mean_wind` must be plotted ‘as is’ instead.
Which approach is better? The first is certainly more compact. Even so, we recommend the second, long\-winded approach for new users because it separates the summary calculations from the plotting. This way, as long as we’re comfortable with **dplyr**, we can get away with remembering less about how **ggplot2** works. It also makes it a bit easier to fix mistakes, as we can first check whether the right information is in the summary data frame, before we worry about plotting it.
The two\-step method is easy to extend to different kinds of plots as well. An example will help to clarify what we mean by this. Remember the wind speed vs. atmospheric pressure scatter plots we first produced? One criticism of those plots is that they don’t really summarise differences among storm events. We plotted all the data, which means a storm system that lasts a long time contributes relatively more points to the plot than a short storm.
Why not plot the storm\-specific means instead? We know how to do this using **dplyr** and **ggplot2**. First construct a new data frame containing the means:
```
storms_means <-
storms_alter %>%
group_by(name) %>%
summarise(wind = mean(wind), pressure = mean(pressure))
```
Then it is just a matter of producing a scatter plot with the new data frame. There are no new tricks to learn:
```
ggplot(storms_means, aes(x = pressure, y = wind)) +
geom_point(alpha = 0.8) +
xlab("Atmospheric Pressure (mbar)") + ylab("Wind Speed (mph)")
```
23\.1 Multivariate relationships
--------------------------------
We examined various plots that summarise associations between two variables in the last chapter. How do we explore relationships between more than two variables in a single graph? That is, how do we explore **multivariate associations**? It’s difficult to give a concrete answer to this question, because it depends on the question we’re trying to address, the kinds of variables we’re working with, and to a large extent, our creativity and aptitude with an advanced graphing framework like **ggplot2**. Nonetheless, we already know enough about how **ggplot2** works to build some fairly sophisticated visualisations. There are two ways to add additional information to a visualisation:
1. Define aesthetic mappings to allow the properties of a layer to depend on the different values of one or more variable.
2. Use faceting to construct a multipanel plot according to the values of categorical variables.
We can adopt both of these approaches at the same time, meaning we can get information form 4\-6 variables into a single graph if we need to (though this does not always produce an easy\-to\-read plot). We’ve already seen these two approaches used together in the [Introduction to **ggplot2**](introduction-to-ggplot2.html#introduction-to-ggplot2) chapter. We’ll look at one more example to illustrate the approach again.
We want to understand how the sample distribution of wind speed during a storm varies over the course of a year. We also want to visualise how this differs among storm categories. One way to do this is to produce a stacked histogram for each month of the year, where the colour of the stacked histograms changes with respect to storm category. We do this using the `facet_wrap` function to specify separate panels for each month, colouring the histograms by the `type` variable. Stacking the histograms happens by default:
```
ggplot(storms_alter, aes(x = wind, fill = type)) +
geom_histogram(binwidth = 15) +
xlab("Wind Speed (mph)") + ylab("Count") +
labs(fill = "Storm Type") +
facet_wrap(~ month, ncol = 3)
```
Notice that we’re using `storms_alter` from the last chapter, the version of `storms` where the `type` variable was converted to a factor. We haven’t used any new tricks here though. We just set a couple of aesthetics and used faceting to squeeze many histograms onto one plot. It mostly shows that if we’re planning a holiday in Central America we should probably avoid travelling from August to October…
23\.2 Comparing descriptive statistics
--------------------------------------
Until now we have been focusing on plots that display either the raw data (e.g. scatter plots), or a summary of the raw data that captures as much detail as possible (e.g. histograms and box plots). We’ve tended to treat descriptive statistics like the sample mean as ‘a number’ to be examined in isolation. These are often placed in the text of a report or in a table. However, there’s nothing to stop us visualising a set of means (or any other descriptive statistics) and a figure is much more informative than than a table. Moreover, many common statistical tools focus on a few aspects of sample distributions (e.g. means and variances) so it’s a good idea to plot these.
We need to know how construct graphs that display such summaries. Let’s start with a simple question: how does the (arithmetic) mean wind speed vary across different types of storm? One strategy is to produce a bar plot in which the lengths of the bars represent the mean wind speed in each category. There are two different ways to produce this with **ggplot2**.
The first is simplest, but requires a new **ggplot2** trick. When we add a layer using `geom_bar` we have to set two new arguments. The first is `stat = "summary"`. This tells **ggplot2** not to plot the raw values of the y aesthetic mapping, but instead, to construct a summary of the ‘y’ variable. The second argument is `fun.y = mean`. This tells **ggplot2** how to summarise this variable. The part on the right hand side can be any R function that takes a vector of values and returns a single number. Obviously we want the `mean` function. See how this works in practice:
```
ggplot(storms_alter, aes(x = type, y = wind)) +
geom_bar(stat = "summary", fun.y = mean) +
coord_flip() +
xlab("Storm Category") + ylab("Mean Wind Speed (mph)")
```
We also flipped the coordinates here with `coord_flip` to make this a horizontal bar plot. We’ve seen this before in the [Exploring categorical variables](exploring-categorical-variables.html#exploring-categorical-variables) chapter. The only new idea is our use of the `stat` and `fun.y` arguments.
The second way to build a bar plot showing some kind of summary statistic breaks the problem into two steps. In the first step we have to calculate whatever it is we want to display, i.e. the category\-specific mean in this case. This information needs to be stored in a data frame or tibble so **dplyr** is the best tool to use for this:
```
storms_sum <-
storms_alter %>%
group_by(type) %>%
summarise(mean_wind = mean(wind))
storms_sum
```
```
## # A tibble: 4 x 2
## type mean_wind
## <fct> <dbl>
## 1 Tropical Depression 27.4
## 2 Extratropical 40.1
## 3 Tropical Storm 47.3
## 4 Hurricane 84.7
```
We used `group_by` and `summarise` to calculate the set of means, which we called `mean_wind`. The second step uses the new data frame (called `storms_sum`) as the default data in a new graphical object, sets x and y aesthetic mappings from `type` and `mean_wind`, and adds a layer with `geom_col`:
```
mean.plt <-
ggplot(storms_sum, aes(x = type, y = mean_wind)) +
geom_col() +
coord_flip() + xlab("Storm Category") + ylab("Mean Wind Speed (mph)")
mean.plt
```
The result is the same as the last plot. Note that we have used `geom_col` instead of `geom_bar` here. Remember here that the default behaviour of `geom_bar` is to count the observations in each category. Using the `geom_col` function tells it that the information in `mean_wind` must be plotted ‘as is’ instead.
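Incidentally, `geom_col` is just a convenient shorthand: the same bar plot can be produced with `geom_bar` by overriding its default counting behaviour. A minimal sketch of the equivalent call:
```
ggplot(storms_sum, aes(x = type, y = mean_wind)) +
  # stat = "identity" tells geom_bar to plot the values as they are
  geom_bar(stat = "identity") +
  coord_flip() +
  xlab("Storm Category") + ylab("Mean Wind Speed (mph)")
```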
Which approach is better? The first approach is certainly more compact. However, we recommend the second, longer approach for new users because it separates the summary calculations from the plotting. This way, as long as we’re comfortable with **dplyr**, we can get away with remembering less about how **ggplot2** works. It also makes it a bit easier to fix mistakes, as we can first check whether the right information is in the summary data frame before we worry about plotting it.
The two\-step method is easy to extend to different kinds of plots as well. An example will help to clarify what we mean by this. Remember the wind speed vs. atmospheric pressure scatter plots we first produced? One criticism of those plots is that they don’t really summarise differences among storm events. We plotted all the data, which means a storm system that lasts a long time contributes relatively more points to the plot than a short storm.
Why not plot the storm\-specific means instead? We know how to do this using **dplyr** and **ggplot2**. First construct a new data frame containing the means:
```
storms_means <-
storms_alter %>%
group_by(name) %>%
summarise(wind = mean(wind), pressure = mean(pressure))
```
Then it is just a matter of producing a scatter plot with the new data frame. There are no new tricks to learn:
```
ggplot(storms_means, aes(x = pressure, y = wind)) +
geom_point(alpha = 0.8) +
xlab("Atmospheric Pressure (mbar)") + ylab("Wind Speed (mph)")
```
| Data Science |
dzchilds.github.io | https://dzchilds.github.io/eda-for-bio/doing-more-with-ggplot2.html |
Chapter 24 Doing more with **ggplot2**
======================================
Throughout this block we have learnt a range of ways to plot our data using **ggplot2**. Here we provide a few more examples of ways you may wish to customise your plots. You will not be examined on the material in this chapter; however, it will be helpful when you have to make plots for other modules, such as during your Level 1 projects.
We will use the `storms` data set again from the **nasaweather** package. As in the [Relationships between two variables](relationships-between-two-variables.html#relationships-between-two-variables) chapter we will reorder the levels of the `type` variable so that they get increasingly fierce.
```
# 1. make a vector of storm type names in the required order
storm_names <- c("Tropical Depression", "Extratropical", "Tropical Storm", "Hurricane")
# 2. now convert type to a factor
storms_alter <-
storms %>%
mutate(type = factor(type, levels = storm_names))
```
24\.1 Adding error bars
-----------------------
In the [Building in Complexity](building-in-complexity.html#building-in-complexity) chapter we learnt how to make a bar chart showing the means of our data. However, we generally want to show how variable the data are as well as their central tendency (e.g. the mean). To do this we can include error bars showing, for example, the standard deviation or the standard error of the mean. We’ll demonstrate this using the storms data set again, by plotting the mean and standard deviation of wind speed for each storm type.
We start by calculating the means and standard deviations for each group.
```
storms_sum <-
storms_alter %>%
group_by(type) %>%
summarise(mean_wind = mean(wind), std = sd(wind))
storms_sum
```
```
## # A tibble: 4 x 3
## type mean_wind std
## <fct> <dbl> <dbl>
## 1 Tropical Depression 27.4 3.52
## 2 Extratropical 40.1 13.2
## 3 Tropical Storm 47.3 11.1
## 4 Hurricane 84.7 18.8
```
We can now use this data frame to make the plot, using `geom_col` to plot the means and the unsurprisingly named `geom_errorbar` to add the error bars.
```
ggplot(storms_sum, aes(x=type, y = mean_wind)) +
# Plot the means
geom_col(fill = "orange") +
# Add the error bars
geom_errorbar(aes(ymin = mean_wind - std, ymax = mean_wind + std), width = 0.1) +
# Flip the axes round to prevent labels overlapping
coord_flip() +
# Use a more professional theme
theme_classic(base_size = 12) +
# Change the axes labels
xlab("Storm Category") + ylab("Mean Wind Speed (mph)")
```
The `ymin` and `ymax` arguments of the `geom_errorbar` function give the lower and upper limits of error bars. Here, we have plotted the mean \+/\- 1 standard deviation. Note that we can change the width of the error bars using the `width` argument. Also remember if you’re including error bars on a plot that you **MUST** specify in the figure legend what they show (e.g. standard deviation, standard error of the mean, 95% confidence intervals).
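If we wanted to show the standard error of the mean instead, we would compute it ourselves in the **dplyr** step, since there is no ready\-made column for it. A sketch under that assumption (the column name `se` is our own):
```
storms_se <-
  storms_alter %>%
  group_by(type) %>%
  # Standard error of the mean = standard deviation / square root of sample size
  summarise(mean_wind = mean(wind), se = sd(wind) / sqrt(n()))

ggplot(storms_se, aes(x = type, y = mean_wind)) +
  geom_col(fill = "orange") +
  geom_errorbar(aes(ymin = mean_wind - se, ymax = mean_wind + se), width = 0.1) +
  coord_flip() +
  theme_classic(base_size = 12) +
  xlab("Storm Category") + ylab("Mean Wind Speed (mph)")
```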
24\.2 Adding text to plots
--------------------------
There may be some cases in which you want to add text to plots, for example to show the sample size for each group or to show which categories are significantly different from each other if you’ve performed a statistical test (we’ll come back to this at Level 2\).
Here we’re going to add a label for each bar on our bar chart. To do this we start by adding the labels that we want to use to the data frame. For example, here we will calculate the mean (using the `mean` function) and the sample size (using the function `n`) for each group.
```
storms_sum <-
storms %>%
group_by(type) %>%
summarise(mean_wind = mean(wind), samp = n())
storms_sum
```
```
## # A tibble: 4 x 3
## type mean_wind samp
## <chr> <dbl> <int>
## 1 Extratropical 40.1 412
## 2 Hurricane 84.7 896
## 3 Tropical Depression 27.4 513
## 4 Tropical Storm 47.3 926
```
Then we can add the text showing the sample size to our plot using the function `geom_text`.
```
ggplot(storms_sum, aes(x = type, y = mean_wind)) +
# Add the bars
geom_col(fill = "orange") +
# Flip the axes round
coord_flip() +
# Change the axes labels
xlab("Storm Category") + ylab("Mean Wind Speed (mph)") +
# Add the text
geom_text(aes(label = samp, y = 10)) +
# Use a more professional theme
theme_classic(base_size = 12)
```
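In the plot above the labels sit at a fixed height (`y = 10`), close to the base of each bar. If we preferred the sample sizes to appear just beyond the end of each bar, one option is to compute the label position from the bar height. This is only a sketch, with an arbitrary offset of 5 mph:
```
ggplot(storms_sum, aes(x = type, y = mean_wind)) +
  geom_col(fill = "orange") +
  coord_flip() +
  xlab("Storm Category") + ylab("Mean Wind Speed (mph)") +
  # Place each sample size a little beyond the end of its bar
  geom_text(aes(label = samp, y = mean_wind + 5)) +
  theme_classic(base_size = 12)
```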
24\.3 Customising text
----------------------
Sometimes we may want to change the appearance of the text on the plot. For example, sometimes if the axis labels are quite long they may be bunched together or overlap each other, making it difficult to read them. We saw this before in the [Exploring Categorical Variables](exploring-categorical-variables.html#exploring-categorical-variables) chapter.
```
ggplot(storms_alter, aes(x = type)) +
geom_bar(fill = "orange", width = 0.7) +
xlab("Storm Type") + ylab("Number of Observations")
```
Here it is very difficult to read the categories on the \\(x\\) axis as the text is overlapping. In the [Exploring Categorical Variables](exploring-categorical-variables.html#exploring-categorical-variables) chapter we saw one way to deal with this, by using `coord_flip` to rotate the axes. An alternative to this is to change the size of text \- a simple way to do this is to use the `base_size` argument within a ggplot `theme_XX` function as follows:
```
ggplot(storms, aes(x = type)) +
geom_bar(fill = "orange", width = 0.7) +
xlab("Storm Type") + ylab("Number of Observations") +
theme_classic(base_size = 10)
```
The `base_size` argument changes the size of all of the text within the plot.
It is also possible to rotate the labels themselves rather than the whole plot. Here, we use the `angle` argument of the `element_text` function inside the `theme` function.
```
ggplot(storms, aes(x = type)) +
geom_bar(fill = "orange", width = 0.7) +
xlab("Storm Type") + ylab("Number of Observations") +
theme(axis.text.x = element_text(angle = 90))
```
Here we used the argument ‘axis.text.x’ so that only the labels on the \\(x\\) axis were rotated.
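Rotated labels sometimes end up slightly offset from their tick marks. The `element_text` function also has `hjust` and `vjust` arguments for nudging them into place; a common variation (shown here as a sketch with a 45 degree angle) is:
```
ggplot(storms, aes(x = type)) +
  geom_bar(fill = "orange", width = 0.7) +
  xlab("Storm Type") + ylab("Number of Observations") +
  # Rotate the labels and right-align them against the axis
  theme(axis.text.x = element_text(angle = 45, hjust = 1))
```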
24\.4 Saving plots
------------------
When using RStudio, plots can be saved using the `Export` button. However, such plots are often pixelated. R also has a range of functions that can be used to save plots. When making figures with `ggplot` we can use the `ggsave` function.
For example, here we will create a scatter plot using the `storms` data set again.
```
ggplot(storms, aes(x = pressure, y = wind)) +
# Add the points
geom_point() +
# Change the axis labels
labs(x="Atmospheric pressure (mbar)", y = "Wind speed (mph)")
```
Once you’re happy with the plot you can use the `ggsave` function to save it as follows:
```
ggsave("Stormsplot.pdf", height = 5, width = 5)
```
The first argument that this function takes is the name of the file that you will save. By default `ggsave` will save the last plot that you made, but you can also provide the name of a plot as the second argument to the function if you have assigned it a name. R will save the plot to your working directory (you can change where the plot is saved using the `path` argument of `ggsave`). Note that if you do not specify the `width` and `height` arguments, `ggsave` will use the current size of your plotting window.
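For example, to save a plot that we gave a name to earlier and to put it somewhere other than the working directory, we might write something like the following. This is a sketch that assumes the named plot `mean.plt` from the previous chapter is still available and that a folder called `figures` already exists:
```
ggsave("Stormsplot.png", plot = mean.plt, path = "figures", height = 5, width = 5)
```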
You can also add the `ggsave` function onto the end of the code for a specific plot:
```
ggplot(storms, aes(x = pressure, y = wind)) +
# Add the points
geom_point() +
# Change the axis labels
labs(x="Atmospheric pressure (mbar)", y = "Wind speed (mph)") +
# Save the figure
ggsave("Stormsplot.pdf", height = 5, width = 5)
```
24\.5 Panel plots
-----------------
We have already seen how the `facet_wrap` function can be used to produce multiple panels in the [Introduction to **ggplot2**](introduction-to-ggplot2.html#introduction-to-ggplot2) chapter. This function can be used where you want to make multiple plots each showing a different level of a factor. However, sometimes you may wish to present a multi\-panel plot using different variables in the different panels.
There are multiple ways to do this; we’re going to show you one using the **cowplot** package. First make sure that this package is installed (if you haven’t used it before) and loaded (every time you use it). Then make the individual plots that you want to include in your multi\-panel plot using `ggplot` as normal. For example, we might want to look at a) the relative frequency of the different storm types occurring and b) the mean wind speed associated with each storm type. First we make these two plots that we want to include in the panel and assign them to names.
```
plta <- ggplot(storms, aes(x = type)) +
geom_bar(fill = "orange") +
xlab("Storm Type") + ylab("Number of Observations") +
theme_classic(base_size = 10)
pltb <- ggplot(storms_sum, aes(x=type, y = mean_wind)) +
# Plot the means
geom_col(fill = "orange") +
# Change the axes labels
xlab("Storm Category") + ylab("Mean Wind Speed (mph)") +
theme_classic(base_size = 10)
```
Then we can use the `plot_grid` function from the **cowplot** package to create the multi\-panel plot.
```
plot_grid(plta, pltb, nrow = 1, labels = c("auto"), label_size = 10)
```
The `plot_grid` function allows the panel to be customised easily, for example by changing the number of plots in each row (`nrow` argument) and including labels for each panel (`labels` argument).
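For instance, a quick sketch that stacks the two panels vertically and supplies our own labels rather than the automatic ones:
```
plot_grid(plta, pltb, nrow = 2, labels = c("A", "B"), label_size = 10)
```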
We can then use the `ggsave` function to save our multi\-panel plot as before.
```
# Create the multi-panel plot
plot_grid(plta, pltb, nrow = 1, labels = c("auto"), label_size = 10) +
# Save it
ggsave("Stormsplot.pdf", height = 4, width = 8)
```
| Data Science |
ubc-dsci.github.io | https://ubc-dsci.github.io/introduction-to-datascience/intro.html |
Chapter 1 R and the Tidyverse
=============================
1\.1 Overview
-------------
This chapter provides an introduction to data science and the R programming language.
The goal here is to get your hands dirty right from the start! We will walk through an entire data analysis,
and along the way introduce different types of data analysis question, some fundamental programming
concepts in R, and the basics of loading, cleaning, and visualizing data. In the following chapters, we will
dig into each of these steps in much more detail; but for now, let’s jump in to see how much we can do
with data science!
1\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Identify the different types of data analysis question and categorize a question into the correct type.
* Load the `tidyverse` package into R.
* Read tabular data with `read_csv`.
* Create new variables and objects in R using the assignment symbol.
* Create and organize subsets of tabular data using `filter`, `select`, `arrange`, and `slice`.
* Add and modify columns in tabular data using `mutate`.
* Visualize data with a `ggplot` bar plot.
* Use `?` to access help and documentation tools in R.
1\.3 Canadian languages data set
--------------------------------
In this chapter, we will walk through a full analysis of a data set relating to
languages spoken at home by Canadian residents (Figure [1\.1](intro.html#fig:canada-map)). Many Indigenous peoples exist in Canada
with their own cultures and languages; these languages are often unique to Canada and not spoken
anywhere else in the world ([Statistics Canada 2018](#ref-statcan2018mothertongue)). Sadly, colonization has
led to the loss of many of these languages. For instance, generations of
children were not allowed to speak their mother tongue (the first language an
individual learns in childhood) in Canadian residential schools. Colonizers
also renamed places they had “discovered” ([K. Wilson 2018](#ref-wilson2018)). Acts such as these
have significantly harmed the continuity of Indigenous languages in Canada, and
some languages are considered “endangered” as few people report speaking them.
To learn more, please see *Canadian Geographic*’s article, “Mapping Indigenous Languages in
Canada” ([Walker 2017](#ref-walker2017)),
*They Came for the Children: Canada, Aboriginal
peoples, and Residential Schools* ([Truth and Reconciliation Commission of Canada 2012](#ref-children2012))
and the *Truth and Reconciliation Commission of Canada’s*
*Calls to Action* ([Truth and Reconciliation Commission of Canada 2015](#ref-calls2015)).
Figure 1\.1: Map of Canada.
The data set we will study in this chapter is taken from
[the `canlang` R data package](https://ttimbers.github.io/canlang/)
([Timbers 2020](#ref-timbers2020canlang)), which has
population language data collected during the 2016 Canadian census ([Statistics Canada 2016a](#ref-cancensus2016)).
In this data, there are 214 languages recorded, each having six different properties:
1. `category`: Higher\-level language category, describing whether the language is an Official Canadian language, an Aboriginal (i.e., Indigenous) language, or a Non\-Official and Non\-Aboriginal language.
2. `language`: The name of the language.
3. `mother_tongue`: Number of Canadian residents who reported the language as their mother tongue. Mother tongue is generally defined as the language someone was exposed to since birth.
4. `most_at_home`: Number of Canadian residents who reported the language as being spoken most often at home.
5. `most_at_work`: Number of Canadian residents who reported the language as being used most often at work.
6. `lang_known`: Number of Canadian residents who reported knowledge of the language.
According to the census, more than 60 Aboriginal languages were reported
as being spoken in Canada. Suppose we want to know which are the most common;
then we might ask the following question, which we wish to answer using our data:
*Which ten Aboriginal languages were most often reported in 2016 as mother
tongues in Canada, and how many people speak each of them?*
> **Note:** Data science cannot be done without
> a deep understanding of the data and
> problem domain. In this book, we have simplified the data sets used in our
> examples to concentrate on methods and fundamental concepts. But in real
> life, you cannot and should not do data science without a domain expert.
> Alternatively, it is common to practice data science in your own domain of
> expertise! Remember that when you work with data, it is essential to think
> about *how* the data were collected, which affects the conclusions you can
> draw. If your data are biased, then your results will be biased!
1\.4 Asking a question
----------------------
Every good data analysis begins with a *question*—like the
above—that you aim to answer using data. As it turns out, there
are actually a number of different *types* of question regarding data:
descriptive, exploratory, predictive, inferential, causal, and mechanistic,
all of which are defined in Table [1\.1](intro.html#tab:questions-table).
Carefully formulating a question as early as possible in your analysis—and
correctly identifying which type of question it is—will guide your overall approach to
the analysis as well as the selection of appropriate tools.
Table 1\.1: Types of data analysis question ([Leek and Peng 2015](#ref-leek2015question); [Peng and Matsui 2015](#ref-peng2015art)).
| Question type | Description | Example |
| --- | --- | --- |
| Descriptive | A question that asks about summarized characteristics of a data set without interpretation (i.e., report a fact). | How many people live in each province and territory in Canada? |
| Exploratory | A question that asks if there are patterns, trends, or relationships within a single data set. Often used to propose hypotheses for future study. | Does political party voting change with indicators of wealth in a set of data collected on 2,000 people living in Canada? |
| Predictive | A question that asks about predicting measurements or labels for individuals (people or things). The focus is on what things predict some outcome, but not what causes the outcome. | What political party will someone vote for in the next Canadian election? |
| Inferential | A question that looks for patterns, trends, or relationships in a single data set **and** also asks for quantification of how applicable these findings are to the wider population. | Does political party voting change with indicators of wealth for all people living in Canada? |
| Causal | A question that asks about whether changing one factor will lead to a change in another factor, on average, in the wider population. | Does wealth lead to voting for a certain political party in Canadian elections? |
| Mechanistic | A question that asks about the underlying mechanism of the observed patterns, trends, or relationships (i.e., how does it happen?) | How does wealth lead to voting for a certain political party in Canadian elections? |
In this book, you will learn techniques to answer the
first four types of question: descriptive, exploratory, predictive, and inferential;
causal and mechanistic questions are beyond the scope of this book.
In particular, you will learn how to apply the following analysis tools:
1. **Summarization:** computing and reporting aggregated values pertaining to a data set.
Summarization is most often used to answer descriptive questions,
and can occasionally help with answering exploratory questions.
For example, you might use summarization to answer the following question:
*What is the average race time for runners in this data set?*
Tools for summarization are covered in detail in Chapters [2](reading.html#reading)
and [3](wrangling.html#wrangling), but appear regularly throughout the text.
2. **Visualization:** plotting data graphically.
Visualization is typically used to answer descriptive and exploratory questions,
but plays a critical supporting role in answering all of the types of question in Table [1\.1](intro.html#tab:questions-table).
For example, you might use visualization to answer the following question:
*Is there any relationship between race time and age for runners in this data set?*
This is covered in detail in Chapter [4](viz.html#viz), but again appears regularly throughout the book.
3. **Classification:** predicting a class or category for a new observation.
Classification is used to answer predictive questions.
For example, you might use classification to answer the following question:
*Given measurements of a tumor’s average cell area and perimeter, is the tumor benign or malignant?*
Classification is covered in Chapters [5](classification1.html#classification1) and [6](classification2.html#classification2).
4. **Regression:** predicting a quantitative value for a new observation.
Regression is also used to answer predictive questions.
For example, you might use regression to answer the following question:
*What will be the race time for a 20\-year\-old runner who weighs 50kg?*
Regression is covered in Chapters [7](regression1.html#regression1) and [8](regression2.html#regression2).
5. **Clustering:** finding previously unknown/unlabeled subgroups in a
data set. Clustering is often used to answer exploratory questions.
For example, you might use clustering to answer the following question:
*What products are commonly bought together on Amazon?*
Clustering is covered in Chapter [9](clustering.html#clustering).
6. **Estimation:** taking measurements for a small number of items from a large group
and making a good guess for the average or proportion for the large group. Estimation
is used to answer inferential questions.
For example, you might use estimation to answer the following question:
*Given a survey of cellphone ownership of 100 Canadians, what proportion
of the entire Canadian population own Android phones?*
Estimation is covered in Chapter [10](inference.html#inference).
Referring to Table [1\.1](intro.html#tab:questions-table), our question about
Aboriginal languages is an example of a *descriptive question*: we are
summarizing the characteristics of a data set without further interpretation.
And referring to the list above, it looks like we should use visualization
and perhaps some summarization to answer the question. So in the remainder
of this chapter, we will work towards making a visualization that shows
us the ten most common Aboriginal languages in Canada and their associated counts,
according to the 2016 census.
1\.5 Loading a tabular data set
-------------------------------
A data set is, at its core, a structured collection of numbers and characters.
Aside from that, there are really no strict rules; data sets can come in
many different forms! Perhaps the most common form of data set that you will
find in the wild, however, is *tabular data*. Think spreadsheets in Microsoft Excel: tabular data are
rectangular\-shaped and spreadsheet\-like, as shown in Figure
[1\.2](intro.html#fig:img-spreadsheet-vs-dataframe). In this book, we will focus primarily on tabular data.
Since we are using R for data analysis in this book, the first step for us is to
load the data into R. When we load tabular data into
R, it is represented as a *data frame* object. Figure
[1\.2](intro.html#fig:img-spreadsheet-vs-dataframe) shows that an R data frame is very similar
to a spreadsheet. We refer to the rows as **observations**;
these are the individual objects
for which we collect data. In Figure [1\.2](intro.html#fig:img-spreadsheet-vs-dataframe), the observations are
languages. We refer to the columns as **variables**; these are the characteristics of each
observation. In Figure [1\.2](intro.html#fig:img-spreadsheet-vs-dataframe), the variables are the
language’s category, its name, the number of mother tongue speakers, etc.
Figure 1\.2: A spreadsheet versus a data frame in R.
The first kind of data file that we will learn how to load into R as a data
frame is the *comma\-separated values* format (`.csv` for short). These files
have names ending in `.csv`, and can be opened and saved using common
spreadsheet programs like Microsoft Excel and Google Sheets. For example, the
`.csv` file named `can_lang.csv`
is included with [the code for this book](https://github.com/UBC-DSCI/introduction-to-datascience/tree/master/data).
If we were to open this data in a plain text editor (a program like Notepad that just shows
text with no formatting), we would see each row on its own line, and each entry in the table separated by a comma:
```
category,language,mother_tongue,most_at_home,most_at_work,lang_known
Aboriginal languages,"Aboriginal languages, n.o.s.",590,235,30,665
Non-Official & Non-Aboriginal languages,Afrikaans,10260,4785,85,23415
Non-Official & Non-Aboriginal languages,"Afro-Asiatic languages, n.i.e.",1150,44
Non-Official & Non-Aboriginal languages,Akan (Twi),13460,5985,25,22150
Non-Official & Non-Aboriginal languages,Albanian,26895,13135,345,31930
Aboriginal languages,"Algonquian languages, n.i.e.",45,10,0,120
Aboriginal languages,Algonquin,1260,370,40,2480
Non-Official & Non-Aboriginal languages,American Sign Language,2685,3020,1145,21
Non-Official & Non-Aboriginal languages,Amharic,22465,12785,200,33670
```
To load this data into R so that we can do things with it (e.g., perform
analyses or create data visualizations), we will need to use a *function.* A
function is a special word in R that takes instructions (we call these
*arguments*) and does something. The function we will use to load a `.csv` file
into R is called `read_csv`. In its most basic
use\-case, `read_csv` expects that the data file:
* has column names (or *headers*),
* uses a comma (`,`) to separate the columns, and
* does not have row names.
Below you’ll see the code used to load the data into R using the `read_csv`
function. Note that the `read_csv` function is not included in the base
installation of R, meaning that it is not one of the primary functions ready to
use when you install R. Therefore, you need to load it from somewhere else
before you can use it. The place from which we will load it is called an R *package*.
An R package is a collection of functions that can be used in addition to the
built\-in R functions once it is loaded. The `read_csv` function, in
particular, can be made accessible by loading
[the `tidyverse` R package](https://tidyverse.tidyverse.org/) ([Wickham 2021b](#ref-tidyverse); [Wickham et al. 2019](#ref-wickham2019tidverse))
using the `library` function. The `tidyverse` package contains many
functions that we will use throughout this book to load, clean, wrangle,
and visualize data.
```
library(tidyverse)
```
```
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.2 ✔ readr 2.1.4
## ✔ forcats 1.0.0 ✔ stringr 1.5.0
## ✔ ggplot2 3.4.2 ✔ tibble 3.2.1
## ✔ lubridate 1.9.2 ✔ tidyr 1.3.0
## ✔ purrr 1.0.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
```
> **Note:** You may have noticed that we got some extra
> output from R regarding attached packages and conflicts below our code
> line. These are examples of *messages* in R, which give the user more
> information that might be handy to know. The `Attaching packages` message is
> natural when loading `tidyverse`, since `tidyverse` actually automatically
> causes other packages to be imported too, such as `dplyr`. In the future,
> when we load `tidyverse` in this book, we will silence these messages to help
> with the readability of the book. The `Conflicts` message is also totally normal
> in this circumstance. This message tells you if functions from different
> packages share the same name, which is confusing to R. For example, in this
> case, the `dplyr` package and the `stats` package both provide a function
> called `filter`. The message above (`dplyr::filter() masks stats::filter()`)
> is R telling you that it is going to default to the `dplyr` package version
> of this function. So if you use the `filter` function, you will be using the
> `dplyr` version. In order to use the `stats` version, you need to use its
> full name `stats::filter`. Messages are not errors, so generally you don’t
> need to take action when you see a message; but you should always read the message
> and critically think about what it means and whether you need to do anything
> about it.
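As a small aside (not part of the analysis in this chapter), calling a masked function by its full name looks like this; the numbers below are purely illustrative:
```
# dplyr's filter subsets rows of a data frame,
# while stats::filter applies a linear (e.g., moving average) filter to a numeric series
stats::filter(1:10, rep(1 / 3, 3))
```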
After loading the `tidyverse` package, we can call the `read_csv` function and
pass it a single argument: the name of the file, `"can_lang.csv"`. We have to
put quotes around file names and other letters and words that we use in our
code to distinguish it from the special words (like functions!) that make up the R programming
language. The file’s name is the only argument we need to provide because our
file satisfies everything else that the `read_csv` function expects in the default
use\-case. Figure [1\.3](intro.html#fig:img-read-csv) describes how we use the `read_csv`
to read data into R.
Figure 1\.3: Syntax for the `read_csv` function.
```
read_csv("data/can_lang.csv")
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
> **Note:** There is another function
> that also loads csv files named `read.csv`. We will *always* use
> `read_csv` in this book, as it is designed to play nicely with all of the
> other `tidyverse` functions, which we will use extensively. Be
> careful not to accidentally use `read.csv`, as it can cause some tricky
> errors to occur in your code that are hard to track down!
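As a final remark on `read_csv`, the expectations listed earlier (column names, comma separators, no row names) are only defaults, and each can be overridden with an argument. A hedged sketch for a hypothetical file with no header row (the file name here is made up):
```
# Hypothetical headerless file; read_csv will then name the columns X1, X2, ...
headerless <- read_csv("data/no_header.csv", col_names = FALSE)
```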
1\.6 Naming things in R
-----------------------
When we loaded the 2016 Canadian census language data
using `read_csv`, we did not give this data frame a name.
Therefore the data was just printed on the screen,
and we cannot do anything else with it. That isn’t very useful.
What would be more useful would be to give a name
to the data frame that `read_csv` outputs,
so that we can refer to it later for analysis and visualization.
The way to assign a name to a value in R is via the *assignment symbol* `<-`.
On the left side of the assignment symbol you put the name that you want
to use, and on the right side of the assignment symbol
you put the value that you want the name to refer to.
Names can be used to refer to almost anything in R, such as numbers,
words (also known as *strings* of characters), and data frames!
Below, we set `my_number` to `3` (the result of `1+2`)
and we set `name` to the string `"Alice"`.
```
my_number <- 1 + 2
name <- "Alice"
```
Note that when
we name something in R using the assignment symbol, `<-`,
we do not need to surround the name we are creating with quotes. This is
because we are formally telling R that this special word denotes
the value of whatever is on the right\-hand side.
Only characters and words that act as *values* on the right\-hand side of the assignment
symbol—e.g., the file name `"data/can_lang.csv"` that we specified before, or `"Alice"` above—need
to be surrounded by quotes.
After making the assignment, we can use the special name words we have created in
place of their values. For example, if we want to do something with the value `3` later on,
we can just use `my_number` instead. Let’s try adding 2 to `my_number`; you will see that
R just interprets this as adding 3 and 2:
```
my_number + 2
```
```
## [1] 5
```
Object names can consist of letters, numbers, periods `.` and underscores `_`.
Other symbols won’t work since they have their own meanings in R. For example,
`-` is the subtraction symbol; if we try to assign a name with
the `-` symbol, R will complain and we will get an error!
```
my-number <- 1
```
```
Error in my - number <- 1 : object 'my' not found
```
There are certain conventions for naming objects in R.
When naming an object we
suggest using only lowercase letters, numbers and underscores `_` to separate
the words in a name. R is case sensitive, which means that `Letter` and
`letter` would be two different objects in R. You should also try to give your
objects meaningful names. For instance, you *can* name a data frame `x`.
However, using more meaningful terms, such as `language_data`, will help you
remember what each name in your code represents. We recommend following the
Tidyverse naming conventions outlined in the *Tidyverse Style Guide* ([Wickham 2020](#ref-tidyversestyleguide)). Let’s
now use the assignment symbol to give the name
`can_lang` to the 2016 Canadian census language data frame that we get from
`read_csv`.
```
can_lang <- read_csv("data/can_lang.csv")
```
Wait a minute, nothing happened this time! Where’s our data?
Actually, something did happen: the data was loaded in
and now has the name `can_lang` associated with it.
And we can use that name to access the data frame and do things with it.
For example, we can type the name of the data frame to print the first few rows
on the screen. You will also see at the top that the number of observations (i.e., rows) and
variables (i.e., columns) are printed. Printing the first few rows of a data frame
like this is a handy way to get a quick sense for what is contained in a data frame.
```
can_lang
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
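If we want this information programmatically rather than reading it off the printout, base R’s `nrow` and `ncol` functions report the dimensions, and the `tidyverse` function `glimpse` gives a compact, transposed preview of every column. A small sketch:
```
nrow(can_lang)    # number of observations (rows)
ncol(can_lang)    # number of variables (columns)
glimpse(can_lang) # one line per column, showing its type and first few values
```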
1\.7 Creating subsets of data frames with `filter` \& `select`
--------------------------------------------------------------
Now that we’ve loaded our data into R, we can start wrangling the data to
find the ten Aboriginal languages that were most often reported
in 2016 as mother tongues in Canada. In particular, we will construct
a table with the ten Aboriginal languages that have the largest
counts in the `mother_tongue` column.
The `filter` and `select` functions from the `tidyverse` package will help us
here. The `filter` function allows you to obtain a subset of the
rows with specific values, while the `select` function allows you
to obtain a subset of the columns. Therefore, we can `filter` the rows to extract the
Aboriginal languages in the data set, and then use `select` to obtain only the
columns we want to include in our table.
### 1\.7\.1 Using `filter` to extract rows
Looking at the `can_lang` data above, we see the `category` column contains different
high\-level categories of languages, which include “Aboriginal languages”,
“Non\-Official \& Non\-Aboriginal languages” and “Official languages”. To answer
our question we want to filter our data set so we restrict our attention
to only those languages in the “Aboriginal languages” category.
We can use the `filter` function to obtain the subset of rows with desired
values from a data frame. Figure [1\.4](intro.html#fig:img-filter) outlines what arguments we need to specify to use `filter`. The first argument to `filter` is the name of the data frame
object, `can_lang`. The second argument is a *logical statement* to use when
filtering the rows. A logical statement evaluates to either `TRUE` or `FALSE`;
`filter` keeps only those rows for which the logical statement evaluates to `TRUE`.
For example, in our analysis, we are interested in keeping only languages in the
“Aboriginal languages” higher\-level category. We can use
the *equivalency operator* `==` to compare the values
of the `category` column with the value `"Aboriginal languages"`; you will learn about
many other kinds of logical statements in Chapter [3](wrangling.html#wrangling). Similar to
when we loaded the data file and put quotes around the file name, here we need
to put quotes around `"Aboriginal languages"`. Using quotes tells R that this
is a string *value* and not one of the special words that make up the R
programming language, or one of the names we have given to data frames in the
code we have already written.
Figure 1\.4: Syntax for the `filter` function.
With these arguments, `filter` returns a data frame that has all the columns of
the input data frame, but only those rows we asked for in our logical filter
statement.
```
aboriginal_lang <- filter(can_lang, category == "Aboriginal languages")
aboriginal_lang
```
```
## # A tibble: 67 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Aboriginal langu… Algonqu… 45 10 0 120
## 3 Aboriginal langu… Algonqu… 1260 370 40 2480
## 4 Aboriginal langu… Athabas… 50 10 0 85
## 5 Aboriginal langu… Atikame… 6150 5465 1100 6645
## 6 Aboriginal langu… Babine … 110 20 10 210
## 7 Aboriginal langu… Beaver 190 50 0 340
## 8 Aboriginal langu… Blackfo… 2815 1110 85 5645
## 9 Aboriginal langu… Carrier 1025 250 15 2100
## 10 Aboriginal langu… Cayuga 45 10 10 125
## # ℹ 57 more rows
```
It’s good practice to check the output after using a
function in R. We can see the original `can_lang` data set contained 214 rows
with multiple kinds of `category`. The data frame
`aboriginal_lang` contains only 67 rows, and looks like it only contains languages in
the “Aboriginal languages” category. So it looks like the function
gave us the result we wanted!
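One way to make this kind of check explicit, rather than eyeballing the printed output, is to ask R directly. The lines below are our own addition, not part of the original analysis (`distinct` is another `tidyverse` function that returns the unique values of a column):
```
nrow(can_lang)        # 214 rows before filtering
nrow(aboriginal_lang) # 67 rows after filtering
distinct(aboriginal_lang, category) # should contain only "Aboriginal languages"
```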
### 1\.7\.2 Using `select` to extract columns
Now let’s use `select` to extract the `language` and `mother_tongue` columns
from this data frame. Figure [1\.5](intro.html#fig:img-select) shows us the syntax for the `select` function. To extract these columns, we need to provide the `select`
function with three arguments. The first argument is the name of the data frame
object, which in this example is `aboriginal_lang`. The second and third
arguments are the column names that we want to select: `language` and
`mother_tongue`. After passing these three arguments, the `select` function
returns two columns (the `language` and `mother_tongue` columns that we asked
for) as a data frame. This code is also a great example of why being
able to name things in R is useful: you can see that we are using the
result of our earlier `filter` step (which we named `aboriginal_lang`) here
in the next step of the analysis!
Figure 1\.5: Syntax for the `select` function.
```
selected_lang <- select(aboriginal_lang, language, mother_tongue)
selected_lang
```
```
## # A tibble: 67 × 2
## language mother_tongue
## <chr> <dbl>
## 1 Aboriginal languages, n.o.s. 590
## 2 Algonquian languages, n.i.e. 45
## 3 Algonquin 1260
## 4 Athabaskan languages, n.i.e. 50
## 5 Atikamekw 6150
## 6 Babine (Wetsuwet'en) 110
## 7 Beaver 190
## 8 Blackfoot 2815
## 9 Carrier 1025
## 10 Cayuga 45
## # ℹ 57 more rows
```
1\.8 Using `arrange` to order and `slice` to select rows by index number
------------------------------------------------------------------------
We have used `filter` and `select` to obtain a table with only the Aboriginal
languages in the data set and their associated counts. However, we want to know
the **ten** languages that are spoken most often. As a next step, we will
order the `mother_tongue` column from largest to smallest value and then extract only
the top ten rows. This is where the `arrange` and `slice` functions come to the
rescue!
The `arrange` function allows us to order the rows of a data frame by the
values of a particular column. Figure [1\.6](intro.html#fig:img-arrange) details what arguments we need to specify to
use the `arrange` function. We need to pass the data frame as the first
argument to this function, and the variable to order by as the second argument.
Since we want to choose the ten Aboriginal languages most often reported as a mother tongue
language, we will use the `arrange` function to order the rows in our
`selected_lang` data frame by the `mother_tongue` column. We want to
arrange the rows in descending order (from largest to smallest),
so we pass the column to the `desc` function before using it as an argument.
Figure 1\.6: Syntax for the `arrange` function.
```
arranged_lang <- arrange(selected_lang, by = desc(mother_tongue))
arranged_lang
```
```
## # A tibble: 67 × 2
## language mother_tongue
## <chr> <dbl>
## 1 Cree, n.o.s. 64050
## 2 Inuktitut 35210
## 3 Ojibway 17885
## 4 Oji-Cree 12855
## 5 Dene 10700
## 6 Montagnais (Innu) 10235
## 7 Mi'kmaq 6690
## 8 Atikamekw 6150
## 9 Plains Cree 3065
## 10 Stoney 3025
## # ℹ 57 more rows
```
Next we will use the `slice` function, which selects rows according to their
row number. Since we want to choose the ten most common languages, we will indicate that we want
rows 1 to 10 using the argument `1:10`.
```
ten_lang <- slice(arranged_lang, 1:10)
ten_lang
```
```
## # A tibble: 10 × 2
## language mother_tongue
## <chr> <dbl>
## 1 Cree, n.o.s. 64050
## 2 Inuktitut 35210
## 3 Ojibway 17885
## 4 Oji-Cree 12855
## 5 Dene 10700
## 6 Montagnais (Innu) 10235
## 7 Mi'kmaq 6690
## 8 Atikamekw 6150
## 9 Plains Cree 3065
## 10 Stoney 3025
```
1\.9 Adding and modifying columns using `mutate`
------------------------------------------------
Recall that our data analysis question referred to the *count* of Canadians
that speak each of the top ten most commonly reported Aboriginal languages as
their mother tongue, and the `ten_lang` data frame indeed contains those
counts… But perhaps, seeing these numbers, we became curious about the
*percentage* of the population of Canada associated with each count. It is
common to come up with new data analysis questions in the process of answering
a first one—so fear not and explore! To answer this small
question along the way, we need to divide each count in the `mother_tongue`
column by the total Canadian population according to the 2016
census—i.e., 35,151,728—and multiply it by 100\. We can perform
this computation using the `mutate` function. We pass the `ten_lang`
data frame as its first argument, then specify the equation that computes the percentages
in the second argument. By using a new variable name on the left hand side of the equation,
we will create a new column in the data frame; and if we use an existing name, we will
modify that variable. In this case, we will opt to
create a new column called `mother_tongue_percent`.
```
canadian_population <- 35151728
ten_lang_percent <- mutate(ten_lang, mother_tongue_percent = 100 * mother_tongue / canadian_population)
ten_lang_percent
```
```
## # A tibble: 10 × 3
## language mother_tongue mother_tongue_percent
## <chr> <dbl> <dbl>
## 1 Cree, n.o.s. 64050 0.182
## 2 Inuktitut 35210 0.100
## 3 Ojibway 17885 0.0509
## 4 Oji-Cree 12855 0.0366
## 5 Dene 10700 0.0304
## 6 Montagnais (Innu) 10235 0.0291
## 7 Mi'kmaq 6690 0.0190
## 8 Atikamekw 6150 0.0175
## 9 Plains Cree 3065 0.00872
## 10 Stoney 3025 0.00861
```
The `ten_lang_percent` data frame shows that
the ten Aboriginal languages in the `ten_lang` data frame were spoken
as a mother tongue by between 0\.008% and 0\.18% of the Canadian population.
1\.10 Exploring data with visualizations
----------------------------------------
The `ten_lang` table we generated in Section [1\.8](intro.html#arrangesliceintro) answers our initial data analysis question.
Are we done? Well, not quite; tables are almost never the best way to present
the result of your analysis to your audience. Even the `ten_lang` table with
only two columns presents some difficulty: for example, you have to scrutinize
the table quite closely to get a sense for the relative numbers of speakers of
each language. When you move on to more complicated analyses, this issue only
gets worse. In contrast, a *visualization* would convey this information in a much
more easily understood format.
Visualizations are a great tool for summarizing information to help you
effectively communicate with your audience, and
creating effective data visualizations is an essential component of any data
analysis. In this section we will develop a visualization of the
ten Aboriginal languages that were most often reported in 2016 as mother tongues in
Canada, as well as the number of people that speak each of them.
### 1\.10\.1 Using `ggplot` to create a bar plot
In our data set, we can see that `language` and `mother_tongue` are in separate
columns (or variables). In addition, there is a single row (or observation) for each language.
The data are, therefore, in what we call a *tidy data* format. Tidy data is a
fundamental concept and will be a significant focus in the remainder of this
book: many of the functions from `tidyverse` require tidy data, including the
`ggplot` function that we will use shortly for our visualization. We will
formally introduce tidy data in Chapter [3](wrangling.html#wrangling).
We will make a bar plot to visualize our data. A bar plot is a chart where the
lengths of the bars represent certain values, like counts or proportions. We
will make a bar plot using the `mother_tongue` and `language` columns from our
`ten_lang` data frame. To create a bar plot of these two variables using the
`ggplot` function, we must specify the data frame, which variables
to put on the x and y axes, and what kind of plot to create. The `ggplot`
function and its common usage is illustrated in Figure [1\.7](intro.html#fig:img-ggplot).
Figure [1\.8](intro.html#fig:barplot-mother-tongue) shows the resulting bar plot
generated by following the instructions in Figure [1\.7](intro.html#fig:img-ggplot).
Figure 1\.7: Creating a bar plot with the `ggplot` function.
```
ggplot(ten_lang, aes(x = language, y = mother_tongue)) +
geom_bar(stat = "identity")
```
Figure 1\.8: Bar plot of the ten Aboriginal languages most often reported by Canadian residents as their mother tongue. Note that this visualization is not done yet; there are still improvements to be made.
> **Note:** The vast majority of the
> time, a single expression in R must be contained in a single line of code.
> However, there *are* a small number of situations in which you can have a
> single R expression span multiple lines. Above is one such case: here, R knows that a line cannot
> end with a `+` symbol, and so it keeps reading the next line to figure out
> what the right\-hand side of the `+` symbol should be. We could, of course,
> put all of the added layers on one line of code, but splitting them across
> multiple lines helps a lot with code readability.
### 1\.10\.2 Formatting ggplot objects
It is exciting that we can already visualize our data to help answer our
question, but we are not done yet! We can (and should) do more to improve the
interpretability of the data visualization that we created. For example, by
default, R uses the column names as the axis labels. Usually these
column names do not have enough information about the variable in the column.
We really should replace this default with a more informative label. For the
example above, R uses the column name `mother_tongue` as the label for the
y axis, but most people will not know what that is. And even if they did, they
will not know how we measured this variable, or the group of people on which the
measurements were taken. An axis label that reads “Mother Tongue (Number of
Canadian Residents)” would be much more informative.
Adding additional layers to our visualizations that we create in `ggplot` is
one common and easy way to improve and refine our data visualizations. New
layers are added to `ggplot` objects using the `+` symbol. For example, we can
use the `xlab` (short for x axis label) and `ylab` (short for y axis label) functions
to add layers where we specify meaningful
and informative labels for the x and y axes. Again, since we are specifying
words (e.g. `"Mother Tongue (Number of Canadian Residents)"`) as arguments to
`xlab` and `ylab`, we surround them with double quotation marks. We can add many more
layers to format the plot further, and we will explore these in Chapter
[4](viz.html#viz).
```
ggplot(ten_lang, aes(x = language, y = mother_tongue)) +
geom_bar(stat = "identity") +
xlab("Language") +
ylab("Mother Tongue (Number of Canadian Residents)")
```
Figure 1\.9: Bar plot of the ten Aboriginal languages most often reported by Canadian residents as their mother tongue with x and y labels. Note that this visualization is not done yet; there are still improvements to be made.
The result is shown in Figure [1\.9](intro.html#fig:barplot-mother-tongue-labs).
This is already quite an improvement! Let’s tackle the next major issue with the visualization
in Figure [1\.9](intro.html#fig:barplot-mother-tongue-labs): the overlapping x axis labels, which are
currently making it difficult to read the different language names.
One solution is to rotate the plot such that the bars are horizontal rather than vertical.
To accomplish this, we will swap the x and y coordinate axes:
```
ggplot(ten_lang, aes(x = mother_tongue, y = language)) +
geom_bar(stat = "identity") +
xlab("Mother Tongue (Number of Canadian Residents)") +
ylab("Language")
```
Figure 1\.10: Horizontal bar plot of the ten Aboriginal languages most often reported by Canadian residents as their mother tongue. There are no more serious issues with this visualization, but it could be refined further.
Another big step forward, as shown in Figure [1\.10](intro.html#fig:barplot-mother-tongue-flipped)! There
are no more serious issues with the visualization. Now comes time to refine
the visualization to make it even more well\-suited to answering the question
we asked earlier in this chapter. For example, the visualization could be made more transparent by
organizing the bars according to the number of Canadian residents reporting
each language, rather than in alphabetical order. We can reorder the bars using
the `reorder` function, which orders a variable (here `language`) based on the
values of the second variable (`mother_tongue`).
```
ggplot(ten_lang, aes(x = mother_tongue,
y = reorder(language, mother_tongue))) +
geom_bar(stat = "identity") +
xlab("Mother Tongue (Number of Canadian Residents)") +
ylab("Language")
```
Figure 1\.11: Bar plot of the ten Aboriginal languages most often reported by Canadian residents as their mother tongue with bars reordered.
Figure [1\.11](intro.html#fig:barplot-mother-tongue-reorder) provides a very clear and well\-organized
answer to our original question; we can see what the ten most often reported Aboriginal languages
were, according to the 2016 Canadian census, and how many people speak each of them. For
instance, we can see that the Aboriginal language most often reported was Cree
n.o.s. with over 60,000 Canadian residents reporting it as their mother tongue.
> **Note:** “n.o.s.” means “not otherwise specified”, so Cree n.o.s. refers to
> individuals who reported Cree as their mother tongue. In this data set, the
> Cree languages include the following categories: Cree n.o.s., Swampy Cree,
> Plains Cree, Woods Cree, and a ‘Cree not included elsewhere’ category (which
> includes Moose Cree, Northern East Cree and Southern East Cree)
> ([Statistics Canada 2016b](#ref-language2016)).
### 1\.10\.3 Putting it all together
In the block of code below, we put everything from this chapter together, with a few
modifications. In particular, we have actually skipped the
`select` step that we did above; since you specify the variable names to plot
in the `ggplot` function, you don’t actually need to `select` the columns in advance
when creating a visualization. We have also provided *comments* next to
many of the lines of code below using the
hash symbol `#`. When R sees a `#` sign, it
will ignore all of the text that
comes after the symbol on that line. So you can use comments to explain lines
of code for others, and perhaps more importantly, your future self!
It’s good practice to get in the habit of
commenting your code to improve its readability.
This exercise demonstrates the power of R. In relatively few lines of code, we
performed an entire data science workflow with a highly effective data
visualization! We asked a question, loaded the data into R, wrangled the data
(using `filter`, `arrange` and `slice`) and created a data visualization to
help answer our question. In this chapter, you got a quick taste of the data
science workflow; continue on with the next few chapters to learn each of
these steps in much more detail!
```
library(tidyverse)
# load the data set
can_lang <- read_csv("data/can_lang.csv")
# obtain the 10 most common Aboriginal languages
aboriginal_lang <- filter(can_lang, category == "Aboriginal languages")
arranged_lang <- arrange(aboriginal_lang, by = desc(mother_tongue))
ten_lang <- slice(arranged_lang, 1:10)
# create the visualization
ggplot(ten_lang, aes(x = mother_tongue,
y = reorder(language, mother_tongue))) +
geom_bar(stat = "identity") +
xlab("Mother Tongue (Number of Canadian Residents)") +
ylab("Language")
```
Figure 1\.12: Putting it all together: bar plot of the ten Aboriginal languages most often reported by Canadian residents as their mother tongue.
1\.11 Accessing documentation
-----------------------------
There are many R functions in the `tidyverse` package (and beyond!), and
nobody can be expected to remember what every one of them does
or all of the arguments we have to give them. Fortunately, R provides
the `?` symbol, which
offers an easy way to pull up the documentation for
most functions quickly. To use the `?` symbol to access documentation, you
just put the name of the function you are curious about after the `?` symbol.
For example, if you had forgotten what the `filter` function
did or exactly what arguments to pass in, you could run the following
code:
```
?filter
```
Figure [1\.13](intro.html#fig:01-help) shows the documentation that will pop up,
including a high\-level description of the function, its arguments,
a description of each, and more. Note that you may find some of the
text in the documentation a bit too technical right now
(for example, what is `dbplyr`, and what is a lazy data frame?).
Fear not: as you work through this book, many of these terms will be introduced
to you, and slowly but surely you will become more adept at understanding and navigating
documentation like that shown in Figure [1\.13](intro.html#fig:01-help). But do keep in mind that the documentation
is not written to *teach* you about a function; it is just there as a reference to *remind*
you about the different arguments and usage of functions that you have already learned about elsewhere.
Figure 1\.13: The documentation for the `filter` function, including a high\-level description, a list of arguments and their meanings, and more.
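As an aside, the `?` symbol is not the only way to pull up documentation; here is a quick sketch of two alternatives that work in a standard R installation:
```
# equivalent to ?filter
help(filter)
# search the documentation of all installed packages for a keyword
??csv
```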
1\.12 Exercises
---------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “R and the tidyverse” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
1\.1 Overview
-------------
This chapter provides an introduction to data science and the R programming language.
The goal here is to get your hands dirty right from the start! We will walk through an entire data analysis,
and along the way introduce different types of data analysis question, some fundamental programming
concepts in R, and the basics of loading, cleaning, and visualizing data. In the following chapters, we will
dig into each of these steps in much more detail; but for now, let’s jump in to see how much we can do
with data science!
1\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Identify the different types of data analysis question and categorize a question into the correct type.
* Load the `tidyverse` package into R.
* Read tabular data with `read_csv`.
* Create new variables and objects in R using the assignment symbol.
* Create and organize subsets of tabular data using `filter`, `select`, `arrange`, and `slice`.
* Add and modify columns in tabular data using `mutate`.
* Visualize data with a `ggplot` bar plot.
* Use `?` to access help and documentation tools in R.
1\.3 Canadian languages data set
--------------------------------
In this chapter, we will walk through a full analysis of a data set relating to
languages spoken at home by Canadian residents (Figure [1\.1](intro.html#fig:canada-map)). Many Indigenous peoples exist in Canada
with their own cultures and languages; these languages are often unique to Canada and not spoken
anywhere else in the world ([Statistics Canada 2018](#ref-statcan2018mothertongue)). Sadly, colonization has
led to the loss of many of these languages. For instance, generations of
children were not allowed to speak their mother tongue (the first language an
individual learns in childhood) in Canadian residential schools. Colonizers
also renamed places they had “discovered” ([K. Wilson 2018](#ref-wilson2018)). Acts such as these
have significantly harmed the continuity of Indigenous languages in Canada, and
some languages are considered “endangered” as few people report speaking them.
To learn more, please see *Canadian Geographic*’s article, “Mapping Indigenous Languages in
Canada” ([Walker 2017](#ref-walker2017)),
*They Came for the Children: Canada, Aboriginal
peoples, and Residential Schools* ([Truth and Reconciliation Commission of Canada 2012](#ref-children2012))
and the *Truth and Reconciliation Commission of Canada’s*
*Calls to Action* ([Truth and Reconciliation Commission of Canada 2015](#ref-calls2015)).
Figure 1\.1: Map of Canada.
The data set we will study in this chapter is taken from
[the `canlang` R data package](https://ttimbers.github.io/canlang/)
([Timbers 2020](#ref-timbers2020canlang)), which has
population language data collected during the 2016 Canadian census ([Statistics Canada 2016a](#ref-cancensus2016)).
In this data, there are 214 languages recorded, each having six different properties:
1. `category`: Higher\-level language category, describing whether the language is an Official Canadian language, an Aboriginal (i.e., Indigenous) language, or a Non\-Official and Non\-Aboriginal language.
2. `language`: The name of the language.
3. `mother_tongue`: Number of Canadian residents who reported the language as their mother tongue. Mother tongue is generally defined as the language someone was exposed to since birth.
4. `most_at_home`: Number of Canadian residents who reported the language as being spoken most often at home.
5. `most_at_work`: Number of Canadian residents who reported the language as being used most often at work.
6. `lang_known`: Number of Canadian residents who reported knowledge of the language.
According to the census, more than 60 Aboriginal languages were reported
as being spoken in Canada. Suppose we want to know which are the most common;
then we might ask the following question, which we wish to answer using our data:
*Which ten Aboriginal languages were most often reported in 2016 as mother
tongues in Canada, and how many people speak each of them?*
> **Note:** Data science cannot be done without
> a deep understanding of the data and
> problem domain. In this book, we have simplified the data sets used in our
> examples to concentrate on methods and fundamental concepts. But in real
> life, you cannot and should not do data science without a domain expert.
> Alternatively, it is common to practice data science in your own domain of
> expertise! Remember that when you work with data, it is essential to think
> about *how* the data were collected, which affects the conclusions you can
> draw. If your data are biased, then your results will be biased!
1\.4 Asking a question
----------------------
Every good data analysis begins with a *question*—like the
above—that you aim to answer using data. As it turns out, there
are actually a number of different *types* of question regarding data:
descriptive, exploratory, predictive, inferential, causal, and mechanistic,
all of which are defined in Table [1\.1](intro.html#tab:questions-table).
Carefully formulating a question as early as possible in your analysis—and
correctly identifying which type of question it is—will guide your overall approach to
the analysis as well as the selection of appropriate tools.
Table 1\.1: Types of data analysis question ([Leek and Peng 2015](#ref-leek2015question); [Peng and Matsui 2015](#ref-peng2015art)).
| Question type | Description | Example |
| --- | --- | --- |
| Descriptive | A question that asks about summarized characteristics of a data set without interpretation (i.e., report a fact). | How many people live in each province and territory in Canada? |
| Exploratory | A question that asks if there are patterns, trends, or relationships within a single data set. Often used to propose hypotheses for future study. | Does political party voting change with indicators of wealth in a set of data collected on 2,000 people living in Canada? |
| Predictive | A question that asks about predicting measurements or labels for individuals (people or things). The focus is on what things predict some outcome, but not what causes the outcome. | What political party will someone vote for in the next Canadian election? |
| Inferential | A question that looks for patterns, trends, or relationships in a single data set **and** also asks for quantification of how applicable these findings are to the wider population. | Does political party voting change with indicators of wealth for all people living in Canada? |
| Causal | A question that asks about whether changing one factor will lead to a change in another factor, on average, in the wider population. | Does wealth lead to voting for a certain political party in Canadian elections? |
| Mechanistic | A question that asks about the underlying mechanism of the observed patterns, trends, or relationships (i.e., how does it happen?) | How does wealth lead to voting for a certain political party in Canadian elections? |
In this book, you will learn techniques to answer the
first four types of question: descriptive, exploratory, predictive, and inferential;
causal and mechanistic questions are beyond the scope of this book.
In particular, you will learn how to apply the following analysis tools:
1. **Summarization:** computing and reporting aggregated values pertaining to a data set.
Summarization is most often used to answer descriptive questions,
and can occasionally help with answering exploratory questions.
For example, you might use summarization to answer the following question:
*What is the average race time for runners in this data set?*
Tools for summarization are covered in detail in Chapters [2](reading.html#reading)
and [3](wrangling.html#wrangling), but appear regularly throughout the text.
2. **Visualization:** plotting data graphically.
Visualization is typically used to answer descriptive and exploratory questions,
but plays a critical supporting role in answering all of the types of question in Table [1\.1](intro.html#tab:questions-table).
For example, you might use visualization to answer the following question:
*Is there any relationship between race time and age for runners in this data set?*
This is covered in detail in Chapter [4](viz.html#viz), but again appears regularly throughout the book.
3. **Classification:** predicting a class or category for a new observation.
Classification is used to answer predictive questions.
For example, you might use classification to answer the following question:
*Given measurements of a tumor’s average cell area and perimeter, is the tumor benign or malignant?*
Classification is covered in Chapters [5](classification1.html#classification1) and [6](classification2.html#classification2).
4. **Regression:** predicting a quantitative value for a new observation.
Regression is also used to answer predictive questions.
For example, you might use regression to answer the following question:
*What will be the race time for a 20\-year\-old runner who weighs 50kg?*
Regression is covered in Chapters [7](regression1.html#regression1) and [8](regression2.html#regression2).
5. **Clustering:** finding previously unknown/unlabeled subgroups in a
data set. Clustering is often used to answer exploratory questions.
For example, you might use clustering to answer the following question:
*What products are commonly bought together on Amazon?*
Clustering is covered in Chapter [9](clustering.html#clustering).
6. **Estimation:** taking measurements for a small number of items from a large group
and making a good guess for the average or proportion for the large group. Estimation
is used to answer inferential questions.
For example, you might use estimation to answer the following question:
*Given a survey of cellphone ownership of 100 Canadians, what proportion
of the entire Canadian population own Android phones?*
Estimation is covered in Chapter [10](inference.html#inference).
Referring to Table [1\.1](intro.html#tab:questions-table), our question about
Aboriginal languages is an example of a *descriptive question*: we are
summarizing the characteristics of a data set without further interpretation.
And referring to the list above, it looks like we should use visualization
and perhaps some summarization to answer the question. So in the remainder
of this chapter, we will work towards making a visualization that shows
us the ten most common Aboriginal languages in Canada and their associated counts,
according to the 2016 census.
1\.5 Loading a tabular data set
-------------------------------
A data set is, at its core, a structured collection of numbers and characters.
Aside from that, there are really no strict rules; data sets can come in
many different forms! Perhaps the most common form of data set that you will
find in the wild, however, is *tabular data*. Think spreadsheets in Microsoft Excel: tabular data are
rectangular\-shaped and spreadsheet\-like, as shown in Figure
[1\.2](intro.html#fig:img-spreadsheet-vs-dataframe). In this book, we will focus primarily on tabular data.
Since we are using R for data analysis in this book, the first step for us is to
load the data into R. When we load tabular data into
R, it is represented as a *data frame* object. Figure
[1\.2](intro.html#fig:img-spreadsheet-vs-dataframe) shows that an R data frame is very similar
to a spreadsheet. We refer to the rows as **observations**;
these are the individual objects
for which we collect data. In Figure [1\.2](intro.html#fig:img-spreadsheet-vs-dataframe), the observations are
languages. We refer to the columns as **variables**; these are the characteristics of each
observation. In Figure [1\.2](intro.html#fig:img-spreadsheet-vs-dataframe), the variables are the the
language’s category, its name, the number of mother tongue speakers, etc.
Figure 1\.2: A spreadsheet versus a data frame in R.
The first kind of data file that we will learn how to load into R as a data
frame is the *comma\-separated values* format (`.csv` for short). These files
have names ending in `.csv`, and can be opened and saved using common
spreadsheet programs like Microsoft Excel and Google Sheets. For example, the
`.csv` file named `can_lang.csv`
is included with [the code for this book](https://github.com/UBC-DSCI/introduction-to-datascience/tree/master/data).
If we were to open this data in a plain text editor (a program like Notepad that just shows
text with no formatting), we would see each row on its own line, and each entry in the table separated by a comma:
```
category,language,mother_tongue,most_at_home,most_at_work,lang_known
Aboriginal languages,"Aboriginal languages, n.o.s.",590,235,30,665
Non-Official & Non-Aboriginal languages,Afrikaans,10260,4785,85,23415
Non-Official & Non-Aboriginal languages,"Afro-Asiatic languages, n.i.e.",1150,44
Non-Official & Non-Aboriginal languages,Akan (Twi),13460,5985,25,22150
Non-Official & Non-Aboriginal languages,Albanian,26895,13135,345,31930
Aboriginal languages,"Algonquian languages, n.i.e.",45,10,0,120
Aboriginal languages,Algonquin,1260,370,40,2480
Non-Official & Non-Aboriginal languages,American Sign Language,2685,3020,1145,21
Non-Official & Non-Aboriginal languages,Amharic,22465,12785,200,33670
```
To load this data into R so that we can do things with it (e.g., perform
analyses or create data visualizations), we will need to use a *function.* A
function is a special word in R that takes instructions (we call these
*arguments*) and does something. The function we will use to load a `.csv` file
into R is called `read_csv`. In its most basic
use\-case, `read_csv` expects that the data file:
* has column names (or *headers*),
* uses a comma (`,`) to separate the columns, and
* does not have row names.
Below you’ll see the code used to load the data into R using the `read_csv`
function. Note that the `read_csv` function is not included in the base
installation of R, meaning that it is not one of the primary functions ready to
use when you install R. Therefore, you need to load it from somewhere else
before you can use it. The place from which we will load it is called an R *package*.
An R package is a collection of functions that can be used in addition to the
built\-in R package functions once loaded. The `read_csv` function, in
particular, can be made accessible by loading
[the `tidyverse` R package](https://tidyverse.tidyverse.org/) ([Wickham 2021b](#ref-tidyverse); [Wickham et al. 2019](#ref-wickham2019tidverse))
using the `library` function. The `tidyverse` package contains many
functions that we will use throughout this book to load, clean, wrangle,
and visualize data.
```
library(tidyverse)
```
```
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.2 ✔ readr 2.1.4
## ✔ forcats 1.0.0 ✔ stringr 1.5.0
## ✔ ggplot2 3.4.2 ✔ tibble 3.2.1
## ✔ lubridate 1.9.2 ✔ tidyr 1.3.0
## ✔ purrr 1.0.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
```
> **Note:** You may have noticed that we got some extra
> output from R regarding attached packages and conflicts below our code
> line. These are examples of *messages* in R, which give the user more
> information that might be handy to know. The `Attaching packages` message is
> natural when loading `tidyverse`, since `tidyverse` actually automatically
> causes other packages to be imported too, such as `dplyr`. In the future,
> when we load `tidyverse` in this book, we will silence these messages to help
> with the readability of the book. The `Conflicts` message is also totally normal
> in this circumstance. This message tells you if functions from different
> packages share the same name, which is confusing to R. For example, in this
> case, the `dplyr` package and the `stats` package both provide a function
> called `filter`. The message above (`dplyr::filter() masks stats::filter()`)
> is R telling you that it is going to default to the `dplyr` package version
> of this function. So if you use the `filter` function, you will be using the
> `dplyr` version. In order to use the `stats` version, you need to use its
> full name `stats::filter`. Messages are not errors, so generally you don’t
> need to take action when you see a message; but you should always read the message
> and critically think about what it means and whether you need to do anything
> about it.
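To make this concrete, here is a small sketch of the `package::function` notation (using the built-in `mtcars` data set, which is not part of our analysis):
```
# dplyr's filter keeps the rows of a data frame that match a condition
dplyr::filter(mtcars, cyl == 6)
# stats' filter instead applies a linear filter to a sequence of numbers
stats::filter(1:10, rep(1 / 3, 3))
```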
After loading the `tidyverse` package, we can call the `read_csv` function and
pass it a single argument: the name of the file, `"can_lang.csv"`. We have to
put quotes around file names and other letters and words that we use in our
code to distinguish them from the special words (like functions!) that make up the R programming
language. The file’s name is the only argument we need to provide because our
file satisfies everything else that the `read_csv` function expects in the default
use\-case. Figure [1\.3](intro.html#fig:img-read-csv) describes how we use the `read_csv`
to read data into R.
Figure 1\.3: Syntax for the `read_csv` function.
```
read_csv("data/can_lang.csv")
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
> **Note:** There is another function
> that also loads csv files named `read.csv`. We will *always* use
> `read_csv` in this book, as it is designed to play nicely with all of the
> other `tidyverse` functions, which we will use extensively. Be
> careful not to accidentally use `read.csv`, as it can cause some tricky
> errors to occur in your code that are hard to track down!
1\.6 Naming things in R
-----------------------
When we loaded the 2016 Canadian census language data
using `read_csv`, we did not give this data frame a name.
Therefore the data was just printed on the screen,
and we cannot do anything else with it. That isn’t very useful.
What would be more useful would be to give a name
to the data frame that `read_csv` outputs,
so that we can refer to it later for analysis and visualization.
The way to assign a name to a value in R is via the *assignment symbol* `<-`.
On the left side of the assignment symbol you put the name that you want
to use, and on the right side of the assignment symbol
you put the value that you want the name to refer to.
Names can be used to refer to almost anything in R, such as numbers,
words (also known as *strings* of characters), and data frames!
Below, we set `my_number` to `3` (the result of `1+2`)
and we set `name` to the string `"Alice"`.
```
my_number <- 1 + 2
name <- "Alice"
```
Note that when
we name something in R using the assignment symbol, `<-`,
we do not need to surround the name we are creating with quotes. This is
because we are formally telling R that this special word denotes
the value of whatever is on the right\-hand side.
Only characters and words that act as *values* on the right\-hand side of the assignment
symbol—e.g., the file name `"data/can_lang.csv"` that we specified before, or `"Alice"` above—need
to be surrounded by quotes.
After making the assignment, we can use the special name words we have created in
place of their values. For example, if we want to do something with the value `3` later on,
we can just use `my_number` instead. Let’s try adding 2 to `my_number`; you will see that
R just interprets this as adding 3 and 2:
```
my_number + 2
```
```
## [1] 5
```
Object names can consist of letters, numbers, periods `.` and underscores `_`.
Other symbols won’t work since they have their own meanings in R. For example,
`-` is the subtraction symbol; if we try to assign a name with
the `-` symbol, R will complain and we will get an error!
```
my-number <- 1
```
```
Error in my - number <- 1 : object 'my' not found
```
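For example, here is a quick sketch of names that R does accept (these particular names are made up for illustration):
```
# all of these are valid object names
my_number2 <- 3
my.number <- 3 # periods are allowed, though this book sticks to underscores
```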
There are certain conventions for naming objects in R.
When naming an object we
suggest using only lowercase letters, numbers and underscores `_` to separate
the words in a name. R is case sensitive, which means that `Letter` and
`letter` would be two different objects in R. You should also try to give your
objects meaningful names. For instance, you *can* name a data frame `x`.
However, using more meaningful terms, such as `language_data`, will help you
remember what each name in your code represents. We recommend following the
Tidyverse naming conventions outlined in the *Tidyverse Style Guide* ([Wickham 2020](#ref-tidyversestyleguide)). Let’s
now use the assignment symbol to give the name
`can_lang` to the 2016 Canadian census language data frame that we get from
`read_csv`.
```
can_lang <- read_csv("data/can_lang.csv")
```
Wait a minute, nothing happened this time! Where’s our data?
Actually, something did happen: the data was loaded in
and now has the name `can_lang` associated with it.
And we can use that name to access the data frame and do things with it.
For example, we can type the name of the data frame to print the first few rows
on the screen. You will also see at the top that the number of observations (i.e., rows) and
variables (i.e., columns) are printed. Printing the first few rows of a data frame
like this is a handy way to get a quick sense for what is contained in a data frame.
```
can_lang
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
1\.7 Creating subsets of data frames with `filter` \& `select`
--------------------------------------------------------------
Now that we’ve loaded our data into R, we can start wrangling the data to
find the ten Aboriginal languages that were most often reported
in 2016 as mother tongues in Canada. In particular, we will construct
a table with the ten Aboriginal languages that have the largest
counts in the `mother_tongue` column.
The `filter` and `select` functions from the `tidyverse` package will help us
here. The `filter` function allows you to obtain a subset of the
rows with specific values, while the `select` function allows you
to obtain a subset of the columns. Therefore, we can `filter` the rows to extract the
Aboriginal languages in the data set, and then use `select` to obtain only the
columns we want to include in our table.
### 1\.7\.1 Using `filter` to extract rows
Looking at the `can_lang` data above, we see the `category` column contains different
high\-level categories of languages, which include “Aboriginal languages”,
“Non\-Official \& Non\-Aboriginal languages” and “Official languages”. To answer
our question we want to filter our data set so we restrict our attention
to only those languages in the “Aboriginal languages” category.
We can use the `filter` function to obtain the subset of rows with desired
values from a data frame. Figure [1\.4](intro.html#fig:img-filter) outlines what arguments we need to specify to use `filter`. The first argument to `filter` is the name of the data frame
object, `can_lang`. The second argument is a *logical statement* to use when
filtering the rows. A logical statement evaluates to either `TRUE` or `FALSE`;
`filter` keeps only those rows for which the logical statement evaluates to `TRUE`.
For example, in our analysis, we are interested in keeping only languages in the
“Aboriginal languages” higher\-level category. We can use
the *equivalency operator* `==` to compare the values
of the `category` column with the value `"Aboriginal languages"`; you will learn about
many other kinds of logical statements in Chapter [3](wrangling.html#wrangling). Similar to
when we loaded the data file and put quotes around the file name, here we need
to put quotes around `"Aboriginal languages"`. Using quotes tells R that this
is a string *value* and not one of the special words that make up the R
programming language, or one of the names we have given to data frames in the
code we have already written.
Figure 1\.4: Syntax for the `filter` function.
With these arguments, `filter` returns a data frame that has all the columns of
the input data frame, but only those rows we asked for in our logical filter
statement.
```
aboriginal_lang <- filter(can_lang, category == "Aboriginal languages")
aboriginal_lang
```
```
## # A tibble: 67 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Aboriginal langu… Algonqu… 45 10 0 120
## 3 Aboriginal langu… Algonqu… 1260 370 40 2480
## 4 Aboriginal langu… Athabas… 50 10 0 85
## 5 Aboriginal langu… Atikame… 6150 5465 1100 6645
## 6 Aboriginal langu… Babine … 110 20 10 210
## 7 Aboriginal langu… Beaver 190 50 0 340
## 8 Aboriginal langu… Blackfo… 2815 1110 85 5645
## 9 Aboriginal langu… Carrier 1025 250 15 2100
## 10 Aboriginal langu… Cayuga 45 10 10 125
## # ℹ 57 more rows
```
It’s good practice to check the output after using a
function in R. We can see the original `can_lang` data set contained 214 rows
with multiple kinds of `category`. The data frame
`aboriginal_lang` contains only 67 rows, and looks like it only contains languages
in the “Aboriginal languages” category. So it looks like the function
gave us the result we wanted!
### 1\.7\.2 Using `select` to extract columns
Now let’s use `select` to extract the `language` and `mother_tongue` columns
from this data frame. Figure [1\.5](intro.html#fig:img-select) shows us the syntax for the `select` function. To extract these columns, we need to provide the `select`
function with three arguments. The first argument is the name of the data frame
object, which in this example is `aboriginal_lang`. The second and third
arguments are the column names that we want to select: `language` and
`mother_tongue`. After passing these three arguments, the `select` function
returns two columns (the `language` and `mother_tongue` columns that we asked
for) as a data frame. This code is also a great example of why being
able to name things in R is useful: you can see that we are using the
result of our earlier `filter` step (which we named `aboriginal_lang`) here
in the next step of the analysis!
Figure 1\.5: Syntax for the `select` function.
```
selected_lang <- select(aboriginal_lang, language, mother_tongue)
selected_lang
```
```
## # A tibble: 67 × 2
## language mother_tongue
## <chr> <dbl>
## 1 Aboriginal languages, n.o.s. 590
## 2 Algonquian languages, n.i.e. 45
## 3 Algonquin 1260
## 4 Athabaskan languages, n.i.e. 50
## 5 Atikamekw 6150
## 6 Babine (Wetsuwet'en) 110
## 7 Beaver 190
## 8 Blackfoot 2815
## 9 Carrier 1025
## 10 Cayuga 45
## # ℹ 57 more rows
```
1\.8 Using `arrange` to order and `slice` to select rows by index number
------------------------------------------------------------------------
We have used `filter` and `select` to obtain a table with only the Aboriginal
languages in the data set and their associated counts. However, we want to know
the **ten** languages that are spoken most often. As a next step, we will
order the `mother_tongue` column from largest to smallest value and then extract only
the top ten rows. This is where the `arrange` and `slice` functions come to the
rescue!
The `arrange` function allows us to order the rows of a data frame by the
values of a particular column. Figure [1\.6](intro.html#fig:img-arrange) details what arguments we need to specify to
use the `arrange` function. We need to pass the data frame as the first
argument to this function, and the variable to order by as the second argument.
Since we want to choose the ten Aboriginal languages most often reported as a mother tongue
language, we will use the `arrange` function to order the rows in our
`selected_lang` data frame by the `mother_tongue` column. We want to
arrange the rows in descending order (from largest to smallest),
so we pass the column to the `desc` function before using it as an argument.
Figure 1\.6: Syntax for the `arrange` function.
```
arranged_lang <- arrange(selected_lang, by = desc(mother_tongue))
arranged_lang
```
```
## # A tibble: 67 × 2
## language mother_tongue
## <chr> <dbl>
## 1 Cree, n.o.s. 64050
## 2 Inuktitut 35210
## 3 Ojibway 17885
## 4 Oji-Cree 12855
## 5 Dene 10700
## 6 Montagnais (Innu) 10235
## 7 Mi'kmaq 6690
## 8 Atikamekw 6150
## 9 Plains Cree 3065
## 10 Stoney 3025
## # ℹ 57 more rows
```
Next we will use the `slice` function, which selects rows according to their
row number. Since we want to choose the most common ten languages, we will indicate we want the
rows 1 to 10 using the argument `1:10`.
```
ten_lang <- slice(arranged_lang, 1:10)
ten_lang
```
```
## # A tibble: 10 × 2
## language mother_tongue
## <chr> <dbl>
## 1 Cree, n.o.s. 64050
## 2 Inuktitut 35210
## 3 Ojibway 17885
## 4 Oji-Cree 12855
## 5 Dene 10700
## 6 Montagnais (Innu) 10235
## 7 Mi'kmaq 6690
## 8 Atikamekw 6150
## 9 Plains Cree 3065
## 10 Stoney 3025
```
1\.9 Adding and modifying columns using `mutate`
------------------------------------------------
Recall that our data analysis question referred to the *count* of Canadians
that speak each of the top ten most commonly reported Aboriginal languages as
their mother tongue, and the `ten_lang` data frame indeed contains those
counts… But perhaps, seeing these numbers, we became curious about the
*percentage* of the population of Canada associated with each count. It is
common to come up with new data analysis questions in the process of answering
a first one—so fear not and explore! To answer this small
question along the way, we need to divide each count in the `mother_tongue`
column by the total Canadian population according to the 2016
census—i.e., 35,151,728—and multiply it by 100\. We can perform
this computation using the `mutate` function. We pass the `ten_lang`
data frame as its first argument, then specify the equation that computes the percentages
in the second argument. By using a new variable name on the left hand side of the equation,
we will create a new column in the data frame; and if we use an existing name, we will
modify that variable. In this case, we will opt to
create a new column called `mother_tongue_percent`.
```
canadian_population <- 35151728
ten_lang_percent <- mutate(ten_lang,
  mother_tongue_percent = 100 * mother_tongue / canadian_population)
ten_lang_percent
```
```
## # A tibble: 10 × 3
## language mother_tongue mother_tongue_percent
## <chr> <dbl> <dbl>
## 1 Cree, n.o.s. 64050 0.182
## 2 Inuktitut 35210 0.100
## 3 Ojibway 17885 0.0509
## 4 Oji-Cree 12855 0.0366
## 5 Dene 10700 0.0304
## 6 Montagnais (Innu) 10235 0.0291
## 7 Mi'kmaq 6690 0.0190
## 8 Atikamekw 6150 0.0175
## 9 Plains Cree 3065 0.00872
## 10 Stoney 3025 0.00861
```
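As noted above, `mutate` can also modify an existing column: if the name on the left-hand side already exists in the data frame, that column is overwritten rather than added. A minimal sketch (the `ten_lang_rounded` name is just for illustration):
```
# reusing the name mother_tongue_percent overwrites that column
# (here, with the same values rounded to two decimal places)
ten_lang_rounded <- mutate(ten_lang_percent,
  mother_tongue_percent = round(mother_tongue_percent, 2))
```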
The `ten_lang_percent` data frame shows that
the ten Aboriginal languages in the `ten_lang` data frame were spoken
as a mother tongue by between 0\.008% and 0\.18% of the Canadian population.
1\.10 Exploring data with visualizations
----------------------------------------
The `ten_lang` table we generated in Section [1\.8](intro.html#arrangesliceintro) answers our initial data analysis question.
Are we done? Well, not quite; tables are almost never the best way to present
the result of your analysis to your audience. Even the `ten_lang` table with
only two columns presents some difficulty: for example, you have to scrutinize
the table quite closely to get a sense for the relative numbers of speakers of
each language. When you move on to more complicated analyses, this issue only
gets worse. In contrast, a *visualization* would convey this information in a much
more easily understood format.
Visualizations are a great tool for summarizing information to help you
effectively communicate with your audience, and
creating effective data visualizations is an essential component of any data
analysis. In this section we will develop a visualization of the
ten Aboriginal languages that were most often reported in 2016 as mother tongues in
Canada, as well as the number of people that speak each of them.
> **Note:** “n.o.s.” means “not otherwise specified”, so Cree n.o.s. refers to
> individuals who reported Cree as their mother tongue. In this data set, the
> Cree languages include the following categories: Cree n.o.s., Swampy Cree,
> Plains Cree, Woods Cree, and a ‘Cree not included elsewhere’ category (which
> includes Moose Cree, Northern East Cree and Southern East Cree)
> ([Statistics Canada 2016b](#ref-language2016)).
### 1\.10\.3 Putting it all together
In the block of code below, we put everything from this chapter together, with a few
modifications. In particular, we have actually skipped the
`select` step that we did above; since you specify the variable names to plot
in the `ggplot` function, you don’t actually need to `select` the columns in advance
when creating a visualization. We have also provided *comments* next to
many of the lines of code below using the
hash symbol `#`. When R sees a `#` sign, it
will ignore all of the text that
comes after the symbol on that line. So you can use comments to explain lines
of code for others, and perhaps more importantly, your future self!
It’s good practice to get in the habit of
commenting your code to improve its readability.
This exercise demonstrates the power of R. In relatively few lines of code, we
performed an entire data science workflow with a highly effective data
visualization! We asked a question, loaded the data into R, wrangled the data
(using `filter`, `arrange` and `slice`) and created a data visualization to
help answer our question. In this chapter, you got a quick taste of the data
science workflow; continue on with the next few chapters to learn each of
these steps in much more detail!
```
library(tidyverse)
# load the data set
can_lang <- read_csv("data/can_lang.csv")
# obtain the 10 most common Aboriginal languages
aboriginal_lang <- filter(can_lang, category == "Aboriginal languages")
arranged_lang <- arrange(aboriginal_lang, desc(mother_tongue))
ten_lang <- slice(arranged_lang, 1:10)
# create the visualization
ggplot(ten_lang, aes(x = mother_tongue,
y = reorder(language, mother_tongue))) +
geom_bar(stat = "identity") +
xlab("Mother Tongue (Number of Canadian Residents)") +
ylab("Language")
```
Figure 1\.12: Putting it all together: bar plot of the ten Aboriginal languages most often reported by Canadian residents as their mother tongue.
1\.11 Accessing documentation
-----------------------------
There are many R functions in the `tidyverse` package (and beyond!), and
nobody can be expected to remember what every one of them does
or all of the arguments we have to give them. Fortunately, R provides
the `?` symbol, which
provides an easy way to pull up the documentation for
most functions quickly. To use the `?` symbol to access documentation, you
just put the name of the function you are curious about after the `?` symbol.
For example, if you had forgotten what the `filter` function
did or exactly what arguments to pass in, you could run the following
code:
```
?filter
```
Figure [1\.13](intro.html#fig:01-help) shows the documentation that will pop up,
including a high\-level description of the function, its arguments,
a description of each, and more. Note that you may find some of the
text in the documentation a bit too technical right now
(for example, what is `dbplyr`, and what is a lazy data frame?).
Fear not: as you work through this book, many of these terms will be introduced
to you, and slowly but surely you will become more adept at understanding and navigating
documentation like that shown in Figure [1\.13](intro.html#fig:01-help). But do keep in mind that the documentation
is not written to *teach* you about a function; it is just there as a reference to *remind*
you about the different arguments and usage of functions that you have already learned about elsewhere.
Figure 1\.13: The documentation for the `filter` function, including a high\-level description, a list of arguments and their meanings, and more.
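> **Note:** More than one package can provide a function with the same name; for
> example, base R’s `stats` package also has a function called `filter`. If you want
> to be sure you are reading the documentation for a particular package’s version of
> a function, you can prefix the function name with the package name and `::`:

```
?dplyr::filter
```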
Chapter 2 Reading in data locally and from the web
==================================================
2\.1 Overview
-------------
In this chapter, you’ll learn to read tabular data of various formats into R
from your local device (e.g., your laptop) and the web. “Reading” (or “loading”)
is the process of
converting data (stored as plain text, a database, HTML, etc.) into an object
(e.g., a data frame) that R can easily access and manipulate. Thus reading data
is the gateway to any data analysis; you won’t be able to analyze data unless
you’ve loaded it first. And because there are many ways to store data, there
are similarly many ways to read data into R. The more time you spend upfront
matching the data reading method to the type of data you have, the less time
you will have to devote to re\-formatting, cleaning and wrangling your data (the
second step to all data analyses). It’s like making sure your shoelaces are
tied well before going for a run so that you don’t trip later on!
2\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Define the types of path and use them to locate files:
+ absolute file path
+ relative file path
+ Uniform Resource Locator (URL)
* Read data into R from various types of path using:
+ `read_csv`
+ `read_tsv`
+ `read_csv2`
+ `read_delim`
+ `read_excel`
* Compare and contrast the `read_*` functions.
* Describe when to use the following `read_*` function arguments:
+ `skip`
+ `delim`
+ `col_names`
* Choose the appropriate `tidyverse` `read_*` function and function arguments to load a given plain text tabular data set into R.
* Use the `rename` function to rename columns in a data frame.
* Use the `read_excel` function and arguments to load a sheet from an Excel file into R.
* Work with databases using functions from `dbplyr` and `DBI`:
+ Connect to a database with `dbConnect`.
+ List tables in the database with `dbListTables`.
+ Create a reference to a database table with `tbl`.
+ Bring data from a database into R using `collect`.
* Use `write_csv` to save a data frame to a `.csv` file.
* (*Optional*) Obtain data from the web using scraping and application programming interfaces (APIs):
+ Read HTML source code from a URL using the `rvest` package.
+ Read data from the NASA “Astronomy Picture of the Day” API using the `httr2` package.
+ Compare downloading tabular data from a plain text file (e.g., `.csv`), accessing data from an API, and scraping the HTML source code from a website.
2\.3 Absolute and relative file paths
-------------------------------------
This chapter will discuss the different functions we can use to import data
into R, but before we can talk about *how* we read the data into R with these
functions, we first need to talk about *where* the data lives. When you load a
data set into R, you first need to tell R where those files live. The file
could live on your computer (*local*)
or somewhere on the internet (*remote*).
The place where the file lives on your computer is referred to as its “path”. You can
think of the path as directions to the file. There are two kinds of paths:
*relative* paths and *absolute* paths. A relative path indicates where the file is
with respect to your *working directory* (i.e., “where you are currently”) on the computer.
On the other hand, an absolute path indicates where the file is
with respect to the computer’s filesystem base (or *root*) folder, regardless of where you are working.
Suppose our computer’s filesystem looks like the picture in Figure
[2\.1](reading.html#fig:file-system-for-export-to-intro-datascience). We are working in a
file titled `project3.ipynb`, and our current working directory is `project3`;
typically, as is the case here, the working directory is the directory containing the file you are currently
working on.
Figure 2\.1: Example file system.
Let’s say we wanted to open the `happiness_report.csv` file. We have two options to indicate
where the file is: using a relative path, or using an absolute path.
The absolute path of the file always starts with a slash `/`—representing the root folder on the computer—and
proceeds by listing out the sequence of folders you would have to enter to reach the file, each separated by another slash `/`.
So in this case, `happiness_report.csv` would be reached by starting at the root, and entering the `home` folder,
then the `dsci-100` folder, then the `project3` folder, and then finally the `data` folder. So its absolute
path would be `/home/dsci-100/project3/data/happiness_report.csv`. We can load the file using its absolute path
as a string passed to the `read_csv` function.
```
happy_data <- read_csv("/home/dsci-100/project3/data/happiness_report.csv")
```
If we instead wanted to use a relative path, we would need to list out the sequence of steps needed to get from our current
working directory to the file, with slashes `/` separating each step. Since we are currently in the `project3` folder,
we just need to enter the `data` folder to reach our desired file. Hence the relative path is `data/happiness_report.csv`,
and we can load the file using its relative path as a string passed to `read_csv`.
```
happy_data <- read_csv("data/happiness_report.csv")
```
Note that there is no forward slash at the beginning of a relative path; if we accidentally typed `"/data/happiness_report.csv"`,
R would look for a folder named `data` in the root folder of the computer—but that doesn’t exist!
Aside from specifying places to go in a path using folder names (like `data` and `project3`), we can also specify two additional
special places: the *current directory* and the *previous directory*.
We indicate the current working directory with a single dot `.`, and
the previous directory with two dots `..`. So for instance, if we wanted to reach the `bike_share.csv` file from the `project3` folder, we could
use the relative path `../project2/bike_share.csv`. We can even combine these two; for example, we could reach the `bike_share.csv` file using
the (very silly) path `../project2/../project2/./bike_share.csv` with quite a few redundant directions: it says to go back a folder, then open `project2`,
then go back a folder again, then open `project2` again, then stay in the current directory, then finally get to `bike_share.csv`. Whew, what a long trip!
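For example, here is a minimal sketch (assuming the folder layout in Figure [2\.1](reading.html#fig:file-system-for-export-to-intro-datascience) and that the `tidyverse` package is loaded) of reading the `bike_share.csv` file while our working directory is `project3`:

```
# go up one folder from project3, then into project2, to reach the file
bike_data <- read_csv("../project2/bike_share.csv")
```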
So which kind of path should you use: relative, or absolute? Generally speaking, you should use relative paths.
Using a relative path helps ensure that your code can be run
on a different computer (and as an added bonus, relative paths are often shorter—easier to type!).
This is because a file’s relative path is often the same across different computers, while a
file’s absolute path (the names of
all of the folders between the computer’s root, represented by `/`, and the file) isn’t usually the same
across different computers. For example, suppose Fatima and Jayden are working on a
project together on the `happiness_report.csv` data. Fatima’s file is stored at
`/home/Fatima/project3/data/happiness_report.csv`,
while Jayden’s is stored at
`/home/Jayden/project3/data/happiness_report.csv`.
Even though Fatima and Jayden stored their files in the same place on their
computers (in their home folders), the absolute paths are different due to
their different usernames. If Jayden has code that loads the
`happiness_report.csv` data using an absolute path, the code won’t work on
Fatima’s computer. But the relative path from inside the `project3` folder
(`data/happiness_report.csv`) is the same on both computers; any code that uses
relative paths will work on both! In the additional resources section,
we include a link to a short video on the
difference between absolute and relative paths. You can also check out the
`here` package, which provides methods for finding and constructing file paths
in R.
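For example, a minimal sketch (assuming the `here` package is installed and that your analysis lives inside a project folder it can detect) of building a path with the `here` function, which always starts from the top\-level folder of your project regardless of your current working directory:

```
library(here)
# here() assembles a path starting from the project's root folder
happy_data <- read_csv(here("data", "happiness_report.csv"))
```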
Beyond files stored on your computer (i.e., locally), we also need a way to locate resources
stored elsewhere on the internet (i.e., remotely). For this purpose we use a
*Uniform Resource Locator (URL)*, i.e., a web address that looks something
like <https://datasciencebook.ca/>. URLs indicate the location of a resource on the internet, and
start with a web domain, followed by a forward slash `/`, and then a path
to where the resource is located on the remote machine.
2\.4 Reading tabular data from a plain text file into R
-------------------------------------------------------
### 2\.4\.1 `read_csv` to read in comma\-separated values files
Now that we have learned about *where* data could be, we will learn about *how*
to import data into R using various functions. Specifically, we will learn how
to *read* tabular data from a plain text file (a document containing only text)
*into* R and *write* tabular data to a file *out of* R. The function we use to do this
depends on the file’s format. For example, in the last chapter, we learned about using
the `tidyverse` `read_csv` function when reading `.csv` (**c**omma\-**s**eparated **v**alues)
files. In that case, the separator or *delimiter* that divided our columns was a
comma (`,`). We only learned the case where the data matched the expected defaults
of the `read_csv` function
(column names are present, and commas are used as the delimiter between columns).
In this section, we will learn how to read
files that do not satisfy the default expectations of `read_csv`.
Before we jump into the cases where the data aren’t in the expected default format
for `tidyverse` and `read_csv`, let’s revisit the more straightforward
case where the defaults hold, and the only argument we need to give to the function
is the path to the file, `data/can_lang.csv`. The `can_lang` data set contains
language data from the 2016 Canadian census.
We put `data/` before the file’s
name when we are loading the data set because this data set is located in a
sub\-folder, named `data`, relative to where we are running our R code.
Here is what the text in the file `data/can_lang.csv` looks like.
```
category,language,mother_tongue,most_at_home,most_at_work,lang_known
Aboriginal languages,"Aboriginal languages, n.o.s.",590,235,30,665
Non-Official & Non-Aboriginal languages,Afrikaans,10260,4785,85,23415
Non-Official & Non-Aboriginal languages,"Afro-Asiatic languages, n.i.e.",1150,44
Non-Official & Non-Aboriginal languages,Akan (Twi),13460,5985,25,22150
Non-Official & Non-Aboriginal languages,Albanian,26895,13135,345,31930
Aboriginal languages,"Algonquian languages, n.i.e.",45,10,0,120
Aboriginal languages,Algonquin,1260,370,40,2480
Non-Official & Non-Aboriginal languages,American Sign Language,2685,3020,1145,21
Non-Official & Non-Aboriginal languages,Amharic,22465,12785,200,33670
```
And here is a review of how we can use `read_csv` to load it into R. First we
load the `tidyverse` package to gain access to useful
functions for reading the data.
```
library(tidyverse)
```
Next we use `read_csv` to load the data into R, and in that call we specify the
relative path to the file. Note that it is normal and expected that a message is
printed out after using the `read_csv` and related functions. This message lets you know the data types
of each of the columns that R inferred while reading the data into R. In the
future when we use this and related functions to load data in this book, we will
silence these messages to help with the readability of the book.
```
canlang_data <- read_csv("data/can_lang.csv")
```
```
## Rows: 214 Columns: 6
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## chr (2): category, language
## dbl (4): mother_tongue, most_at_home, most_at_work, lang_known
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
```
Finally, to view the first 10 rows of the data frame, we print it by typing its name:
```
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
### 2\.4\.2 Skipping rows when reading in data
Oftentimes, information about how data was collected, or other relevant
information, is included at the top of the data file. This information is
usually written in sentence and paragraph form, with no delimiter because it is
not organized into columns. An example of this is shown below. This information
gives the data scientist useful context about the data; however, it is not well
formatted or intended to be read into a data frame cell
along with the tabular data that follows later in the file.
```
Data source: https://ttimbers.github.io/canlang/
Data originally published in: Statistics Canada Census of Population 2016.
Reproduced and distributed on an as-is basis with their permission.
category,language,mother_tongue,most_at_home,most_at_work,lang_known
Aboriginal languages,"Aboriginal languages, n.o.s.",590,235,30,665
Non-Official & Non-Aboriginal languages,Afrikaans,10260,4785,85,23415
Non-Official & Non-Aboriginal languages,"Afro-Asiatic languages, n.i.e.",1150,44
Non-Official & Non-Aboriginal languages,Akan (Twi),13460,5985,25,22150
Non-Official & Non-Aboriginal languages,Albanian,26895,13135,345,31930
Aboriginal languages,"Algonquian languages, n.i.e.",45,10,0,120
Aboriginal languages,Algonquin,1260,370,40,2480
Non-Official & Non-Aboriginal languages,American Sign Language,2685,3020,1145,21
Non-Official & Non-Aboriginal languages,Amharic,22465,12785,200,33670
```
With this extra information being present at the top of the file, using
`read_csv` as we did previously does not allow us to correctly load the data
into R. In the case of this file we end up only reading in one column of the
data set. In contrast to the normal and expected messages above, this time R
prints out a warning for us indicating that there might be a problem with how
our data is being read in.
```
canlang_data <- read_csv("data/can_lang_meta-data.csv")
```
```
## Warning: One or more parsing issues, call `problems()` on your data frame for details,
## e.g.:
## dat <- vroom(...)
## problems(dat)
```
```
canlang_data
```
```
## # A tibble: 217 × 1
## `Data source: https://ttimbers.github.io/canlang/`
## <chr>
## 1 "Data originally published in: Statistics Canada Census of Population 2016."
## 2 "Reproduced and distributed on an as-is basis with their permission."
## 3 "category,language,mother_tongue,most_at_home,most_at_work,lang_known"
## 4 "Aboriginal languages,\"Aboriginal languages, n.o.s.\",590,235,30,665"
## 5 "Non-Official & Non-Aboriginal languages,Afrikaans,10260,4785,85,23415"
## 6 "Non-Official & Non-Aboriginal languages,\"Afro-Asiatic languages, n.i.e.\",…
## 7 "Non-Official & Non-Aboriginal languages,Akan (Twi),13460,5985,25,22150"
## 8 "Non-Official & Non-Aboriginal languages,Albanian,26895,13135,345,31930"
## 9 "Aboriginal languages,\"Algonquian languages, n.i.e.\",45,10,0,120"
## 10 "Aboriginal languages,Algonquin,1260,370,40,2480"
## # ℹ 207 more rows
```
To successfully read data like this into R, the `skip`
argument can be useful to tell R
how many lines to skip before
it should start reading in the data. In the example above, we would set this
value to 3\.
```
canlang_data <- read_csv("data/can_lang_meta-data.csv",
skip = 3)
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
How did we know to skip three lines? We looked at the data! The first three lines
of the data had information we didn’t need to import:
```
Data source: https://ttimbers.github.io/canlang/
Data originally published in: Statistics Canada Census of Population 2016.
Reproduced and distributed on an as-is basis with their permission.
```
The column names began at line 4, so we skipped the first three lines.
### 2\.4\.3 `read_tsv` to read in tab\-separated values files
Another common way data is stored is with tabs as the delimiter. Notice the
data file, `can_lang.tsv`, has tabs in between the columns instead of
commas.
```
category language mother_tongue most_at_home most_at_work lang_kno
Aboriginal languages Aboriginal languages, n.o.s. 590 235 30 665
Non-Official & Non-Aboriginal languages Afrikaans 10260 4785 85 23415
Non-Official & Non-Aboriginal languages Afro-Asiatic languages, n.i.e. 1150
Non-Official & Non-Aboriginal languages Akan (Twi) 13460 5985 25 22150
Non-Official & Non-Aboriginal languages Albanian 26895 13135 345 31930
Aboriginal languages Algonquian languages, n.i.e. 45 10 0 120
Aboriginal languages Algonquin 1260 370 40 2480
Non-Official & Non-Aboriginal languages American Sign Language 2685 3020
Non-Official & Non-Aboriginal languages Amharic 22465 12785 200 33670
```
We can use the `read_tsv` function
to read in `.tsv` (**t**ab **s**eparated **v**alues) files.
```
canlang_data <- read_tsv("data/can_lang.tsv")
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
If you compare the data frame here to the data frame we obtained in Section
[2\.4\.1](reading.html#readcsv) using `read_csv`, you’ll notice that they look identical:
they have the same number of columns and rows, the same column names, and the same entries! So
even though we needed to use a different
function depending on the file format, our resulting data frame
(`canlang_data`) in both cases was the same.
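If you want to check this programmatically, here is a small sketch (assuming both files are in the `data` folder) that reads each file with its corresponding function and compares the values, ignoring reader\-specific attributes:

```
csv_data <- read_csv("data/can_lang.csv")
tsv_data <- read_tsv("data/can_lang.tsv")
# TRUE when the two data frames contain the same values
all.equal(csv_data, tsv_data, check.attributes = FALSE)
```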
### 2\.4\.4 `read_delim` as a more flexible method to get tabular data into R
The `read_csv` and `read_tsv` functions are actually just special cases of the more general
`read_delim` function. We can use
`read_delim` to import both comma and tab\-separated values files, and more; we just
have to specify the delimiter.
For example, the `can_lang_no_names.tsv` file contains a different version of
this same data set with no column names and uses tabs as the delimiter
instead of commas.
Here is how the file would look in a plain text editor:
```
Aboriginal languages Aboriginal languages, n.o.s. 590 235 30 665
Non-Official & Non-Aboriginal languages Afrikaans 10260 4785 85 23415
Non-Official & Non-Aboriginal languages Afro-Asiatic languages, n.i.e. 1150
Non-Official & Non-Aboriginal languages Akan (Twi) 13460 5985 25 22150
Non-Official & Non-Aboriginal languages Albanian 26895 13135 345 31930
Aboriginal languages Algonquian languages, n.i.e. 45 10 0 120
Aboriginal languages Algonquin 1260 370 40 2480
Non-Official & Non-Aboriginal languages American Sign Language 2685 3020
Non-Official & Non-Aboriginal languages Amharic 22465 12785 200 33670
Non-Official & Non-Aboriginal languages Arabic 419890 223535 5585 629055
```
To read this into R using the `read_delim` function, we specify the path
to the file as the first argument, provide
the tab character `"\t"` as the `delim` argument,
and set the `col_names` argument to `FALSE` to denote that there are no column names
provided in the data. Note that the `read_csv`, `read_tsv`, and `read_delim` functions
all have a `col_names` argument with
the default value `TRUE`.
> **Note:** `\t` is an example of an *escaped character*,
> which always starts with a backslash (`\`).
> Escaped characters are used to represent non\-printing characters
> (like the tab) or those with special meanings (such as quotation marks).
```
canlang_data <- read_delim("data/can_lang_no_names.tsv",
delim = "\t",
col_names = FALSE)
canlang_data
```
```
## # A tibble: 214 × 6
## X1 X2 X3 X4 X5 X6
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal languages Aborigina… 590 235 30 665
## 2 Non-Official & Non-Aboriginal languages Afrikaans 10260 4785 85 23415
## 3 Non-Official & Non-Aboriginal languages Afro-Asia… 1150 445 10 2775
## 4 Non-Official & Non-Aboriginal languages Akan (Twi) 13460 5985 25 22150
## 5 Non-Official & Non-Aboriginal languages Albanian 26895 13135 345 31930
## 6 Aboriginal languages Algonquia… 45 10 0 120
## 7 Aboriginal languages Algonquin 1260 370 40 2480
## 8 Non-Official & Non-Aboriginal languages American … 2685 3020 1145 21930
## 9 Non-Official & Non-Aboriginal languages Amharic 22465 12785 200 33670
## 10 Non-Official & Non-Aboriginal languages Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
Data frames in R need to have column names. Thus if you read in data
without column names, R will assign names automatically. In this example,
R assigns the column names `X1, X2, X3, X4, X5, X6`.
It is best to rename your columns manually in this scenario. The current
column names (`X1, X2`, etc.) are not very descriptive and will make your analysis confusing.
To rename your columns, you can use the `rename` function
from [the `dplyr` R package](https://dplyr.tidyverse.org/) ([Wickham, François, et al. 2021](#ref-dplyr))
(one of the packages
loaded with `tidyverse`, so we don’t need to load it separately). The first
argument is the data set, and in the subsequent arguments you
write `new_name = old_name` for the selected variables to
rename. We rename the `X1, X2, ..., X6`
columns in the `canlang_data` data frame to more descriptive names below.
```
canlang_data <- rename(canlang_data,
category = X1,
language = X2,
mother_tongue = X3,
most_at_home = X4,
most_at_work = X5,
lang_known = X6)
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
### 2\.4\.5 Reading tabular data directly from a URL
We can also use `read_csv`, `read_tsv`, or `read_delim` (and related functions)
to read in data directly from a **U**niform **R**esource **L**ocator (URL) that
contains tabular data. Here, we provide the URL of a remote file to
`read_*`, instead of a path to a local file on our
computer. We need to surround the URL with quotes similar to when we specify a
path on our local computer. All other arguments that we use are the same as
when using these functions with a local file on our computer.
```
url <- "https://raw.githubusercontent.com/UBC-DSCI/data/main/can_lang.csv"
canlang_data <- read_csv(url)
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
### 2\.4\.6 Downloading data from a URL
Occasionally the data available at a URL is not formatted nicely enough to use
`read_csv`, `read_tsv`, `read_delim`, or other related functions to read the data
directly into R. In situations where it is necessary to download a file
to our local computer prior to working with it in R, we can use the `download.file`
function. The first argument is the URL, and the second is a path where we would
like to store the downloaded file.
```
download.file(url, "data/can_lang.csv")
canlang_data <- read_csv("data/can_lang.csv")
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
### 2\.4\.7 Previewing a data file before reading it into R
In many of the examples above, we gave you previews of the data file before we read
it into R. Previewing data is essential to see whether or not there are column
names, what the delimiters are, and if there are lines you need to skip.
You should do this yourself when trying to read in data files: open the file in
whichever text editor you prefer to inspect its contents prior to reading it into R.
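If you would rather stay inside R, one lightweight option (a sketch using the metadata file from earlier in this chapter) is to print the first few lines of the file as plain text with the `read_lines` function from `readr` before deciding which `read_*` function and arguments to use:

```
# look at the first 5 lines of the file as plain character strings
read_lines("data/can_lang_meta-data.csv", n_max = 5)
```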
2\.5 Reading tabular data from a Microsoft Excel file
-----------------------------------------------------
There are many other ways to store tabular data sets beyond plain text files,
and similarly, many ways to load those data sets into R. For example, it is
very common to encounter, and need to load into R, data stored as a Microsoft
Excel spreadsheet (with the file name
extension `.xlsx`). To be able to do this, a key thing to know is that even
though `.csv` and `.xlsx` files look almost identical when loaded into Excel,
the data themselves are stored completely differently. While `.csv` files are
plain text files, where the characters you see when you open the file in a text
editor are exactly the data they represent, this is not the case for `.xlsx`
files. Take a look at a snippet of what a `.xlsx` file would look like in a text editor:
```
,?'O
_rels/.rels???J1??>E?{7?
<?V????w8?'J???'QrJ???Tf?d??d?o?wZ'???@>?4'?|??hlIo??F
t 8f??3wn
????t??u"/
%~Ed2??<?w??
?Pd(??J-?E???7?'t(?-GZ?????y???c~N?g[^_r?4
yG?O
?K??G?
]TUEe??O??c[???????6q??s??d?m???\???H?^????3} ?rZY? ?:L60?^?????XTP+?|?
X?a??4VT?,D?Jq
```
This type of file representation allows Excel files to store additional things
that you cannot store in a `.csv` file, such as fonts, text formatting,
graphics, multiple sheets and more. And despite looking odd in a plain text
editor, we can read Excel spreadsheets into R using the `readxl` package
developed specifically for this
purpose.
```
library(readxl)
canlang_data <- read_excel("data/can_lang.xlsx")
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
If the `.xlsx` file has multiple sheets, you have to use the `sheet` argument
to specify the sheet number or name. You can also specify cell ranges using the
`range` argument. This functionality is useful when a single sheet contains
multiple tables (a sad thing that happens to many Excel spreadsheets since this
makes reading in data more difficult).
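For instance, a minimal sketch (using a hypothetical workbook `data/can_lang_multi.xlsx` with the census data stored on a sheet named `"can_lang"` in cells A1 through F215) might look like this:

```
# sheet and range below refer to a hypothetical multi-sheet workbook
canlang_data <- read_excel("data/can_lang_multi.xlsx",
                           sheet = "can_lang",
                           range = "A1:F215")
```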
As with plain text files, you should always explore the data file before
importing it into R. Exploring the data beforehand helps you decide which
arguments you need to load the data into R successfully. If you do not have
the Excel program on your computer, you can use other programs to preview the
file. Examples include Google Sheets and LibreOffice.
In Table [2\.1](reading.html#tab:read-table) we summarize the `read_*` functions we covered
in this chapter. We also include the `read_csv2` function for data separated by
semicolons `;`, which you may run into with data sets where the decimal is
represented by a comma instead of a period (as with some data sets from
European countries).
Table 2\.1: Summary of `read_*` functions
| Data File Type | R Function | R Package |
| --- | --- | --- |
| Comma (`,`) separated files | `read_csv` | `readr` |
| Tab (`\t`) separated files | `read_tsv` | `readr` |
| Semicolon (`;`) separated files | `read_csv2` | `readr` |
| Various formats (`.csv`, `.tsv`) | `read_delim` | `readr` |
| Excel files (`.xlsx`) | `read_excel` | `readxl` |
> **Note:** `readr` is a part of the `tidyverse` package so we did not need to load
> this package separately since we loaded `tidyverse`.
2\.6 Reading data from a database
---------------------------------
Another very common form of data storage is the relational database. Databases
are great when you have large data sets or multiple users
working on a project. There are many relational database management systems,
such as SQLite, MySQL, PostgreSQL, Oracle,
and many more. These
different relational database management systems each have their own advantages
and limitations. Almost all employ SQL (*structured query language*) to obtain
data from the database. But you don’t need to know SQL to analyze data from
a database; several packages have been written that allow you to connect to
relational databases and use the R programming language
to obtain data. In this book, we will give examples of how to do this
using R with SQLite and PostgreSQL databases.
### 2\.6\.1 Reading data from a SQLite database
SQLite is probably the simplest relational database system
that one can use in combination with R. SQLite databases are self\-contained, and are
usually stored and accessed locally on one computer from
a file with a `.db` extension (or sometimes an `.sqlite` extension).
Similar to Excel files, these are not plain text
files and cannot be read in a plain text editor.
The first thing you need to do to read data into R from a database is to
connect to the database. We do that using the `dbConnect` function from the
`DBI` (database interface) package. This does not read
in the data, but simply tells R where the database is and opens up a
communication channel that R can use to send SQL commands to the database.
```
library(DBI)
canlang_conn <- dbConnect(RSQLite::SQLite(), "data/can_lang.db")
```
Often relational databases have many tables; thus, in order to retrieve
data from a database, you need to know the name of the table
in which the data is stored. You can get the names of
all the tables in the database using the `dbListTables`
function:
```
tables <- dbListTables(canlang_conn)
tables
```
```
## [1] "lang"
```
The `dbListTables` function returned only one name, which tells us
that there is only one table in this database. To reference a table in the
database (so that we can perform operations like selecting columns and filtering rows), we
use the `tbl` function from the `dbplyr` package. The object returned
by the `tbl` function allows us to work with data
stored in databases as if they were just regular data frames; but secretly, behind
the scenes, `dbplyr` is turning your function calls (e.g., `select` and `filter`)
into SQL queries!
```
library(dbplyr)
lang_db <- tbl(canlang_conn, "lang")
lang_db
```
```
## # Source: table<lang> [?? x 6]
## # Database: sqlite 3.41.2 [/home/rstudio/introduction-to-datascience/data/can_lang.db]
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ more rows
```
Although it looks like we just got a data frame from the database, we didn’t!
It’s a *reference*; the data is still stored only in the SQLite database. The
`dbplyr` package works this way because databases are often more efficient at selecting, filtering
and joining large data sets than R. And typically the database will not even
be stored on your computer, but rather a more powerful machine somewhere on the
web. So R is lazy and waits to bring this data into memory until you explicitly
tell it to using the `collect` function.
Figure [2\.2](reading.html#fig:01-ref-vs-tibble) highlights the difference
between a `tibble` object in R and the output we just created. Notice in the table
on the right, the first two lines of the output indicate the source is SQL. The
last line doesn’t show how many rows there are (R is trying to avoid performing
expensive query operations), whereas the output for the `tibble` object does.
Figure 2\.2: Comparison of a reference to data in a database and a tibble in R.
We can look at the SQL commands that are sent to the database when we write
`tbl(canlang_conn, "lang")` in R with the `show_query` function from the
`dbplyr` package.
```
show_query(tbl(canlang_conn, "lang"))
```
```
## <SQL>
## SELECT *
## FROM `lang`
```
The output above shows the SQL code that is sent to the database. When we
write `tbl(canlang_conn, "lang")` in R, in the background, the function is
translating the R code into SQL, sending that SQL to the database, and then translating the
response for us. So `dbplyr` does all the hard work of translating from R to SQL and back for us;
we can just stick with R!
With our `lang_db` table reference for the 2016 Canadian Census data in hand, we
can mostly continue onward as if it were a regular data frame. For example, let’s do the same exercise
from Chapter [1](intro.html#intro): we will obtain only those rows corresponding to Aboriginal languages, and keep only
the `language` and `mother_tongue` columns.
We can use the `filter` function to obtain only certain rows. Below we filter the data to include only Aboriginal languages.
```
aboriginal_lang_db <- filter(lang_db, category == "Aboriginal languages")
aboriginal_lang_db
```
```
## # Source: SQL [?? x 6]
## # Database: sqlite 3.41.2 [/home/rstudio/introduction-to-datascience/data/can_lang.db]
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Aboriginal langu… Algonqu… 45 10 0 120
## 3 Aboriginal langu… Algonqu… 1260 370 40 2480
## 4 Aboriginal langu… Athabas… 50 10 0 85
## 5 Aboriginal langu… Atikame… 6150 5465 1100 6645
## 6 Aboriginal langu… Babine … 110 20 10 210
## 7 Aboriginal langu… Beaver 190 50 0 340
## 8 Aboriginal langu… Blackfo… 2815 1110 85 5645
## 9 Aboriginal langu… Carrier 1025 250 15 2100
## 10 Aboriginal langu… Cayuga 45 10 10 125
## # ℹ more rows
```
Above you can again see the hints that this data is not actually stored in R yet:
the source is `SQL [?? x 6]` and the output says `... more rows` at the end
(both indicating that R does not know how many rows there are in total!),
and a database type `sqlite` is listed.
We didn’t use the `collect` function because we are not ready to bring the data into R yet.
We can still use the database to do some work to obtain *only* the small amount of data we want to work with locally
in R. Let’s add the second part of our database query: selecting only the `language` and `mother_tongue` columns
using the `select` function.
```
aboriginal_lang_selected_db <- select(aboriginal_lang_db, language, mother_tongue)
aboriginal_lang_selected_db
```
```
## # Source: SQL [?? x 2]
## # Database: sqlite 3.41.2 [/home/rstudio/introduction-to-datascience/data/can_lang.db]
## language mother_tongue
## <chr> <dbl>
## 1 Aboriginal languages, n.o.s. 590
## 2 Algonquian languages, n.i.e. 45
## 3 Algonquin 1260
## 4 Athabaskan languages, n.i.e. 50
## 5 Atikamekw 6150
## 6 Babine (Wetsuwet'en) 110
## 7 Beaver 190
## 8 Blackfoot 2815
## 9 Carrier 1025
## 10 Cayuga 45
## # ℹ more rows
```
Now you can see that the database will return only the two columns we asked for with the `select` function.
In order to actually retrieve this data in R as a data frame,
we use the `collect` function.
Below you will see that after running `collect`, R knows that the retrieved
data has 67 rows, and there is no database listed any more.
```
aboriginal_lang_data <- collect(aboriginal_lang_selected_db)
aboriginal_lang_data
```
```
## # A tibble: 67 × 2
## language mother_tongue
## <chr> <dbl>
## 1 Aboriginal languages, n.o.s. 590
## 2 Algonquian languages, n.i.e. 45
## 3 Algonquin 1260
## 4 Athabaskan languages, n.i.e. 50
## 5 Atikamekw 6150
## 6 Babine (Wetsuwet'en) 110
## 7 Beaver 190
## 8 Blackfoot 2815
## 9 Carrier 1025
## 10 Cayuga 45
## # ℹ 57 more rows
```
Aside from knowing the number of rows, the data looks pretty similar in both
outputs shown above. And `dbplyr` provides many more functions (not just `filter`)
that you can use to directly feed the database reference (`lang_db`) into
downstream analysis functions (e.g., `ggplot2` for data visualization).
But `dbplyr` does not provide *every* function that we need for analysis;
we do eventually need to call `collect`.
For example, look what happens when we try to use `nrow` to count rows
in a data frame:
```
nrow(aboriginal_lang_selected_db)
```
```
## [1] NA
```
or `tail` to preview the last six rows of a data frame:
```
tail(aboriginal_lang_selected_db)
```
```
## Error: tail() is not supported by sql sources
```
Additionally, some operations will not work to extract columns or single values
from the reference given by the `tbl` function. Thus, once you have finished
your data wrangling of the `tbl` database reference object, it is advisable to
bring it into R as a data frame using `collect`.
But be very careful using `collect`: databases are often *very* big,
and reading an entire table into R might take a long time to run or even possibly
crash your machine. So make sure you use `filter` and `select` on the database table
to reduce the data to a reasonable size before using `collect` to read it into R!
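One way to gauge how much data you are about to retrieve (a sketch using the reference from above) is to let the database count the rows for you; `tally` is translated into a `COUNT` query, so only the single resulting number is brought into R:

```
# count rows on the database side; only the count itself is collected
collect(tally(aboriginal_lang_selected_db))
```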
### 2\.6\.2 Reading data from a PostgreSQL database
PostgreSQL (also called Postgres) is a very popular
and open\-source option for relational database software.
Unlike SQLite,
PostgreSQL uses a client–server database engine, as it was designed to be used
and accessed on a network. This means that you have to provide more information
to R when connecting to Postgres databases. The additional information that you
need to include when you call the `dbConnect` function is listed below:
* `dbname`: the name of the database (a single PostgreSQL instance can host more than one database)
* `host`: the URL pointing to where the database is located
* `port`: the communication endpoint between R and the PostgreSQL database (usually `5432`)
* `user`: the username for accessing the database
* `password`: the password for accessing the database
Additionally, we must use the `RPostgres` package instead of `RSQLite` in the
`dbConnect` function call. Below we demonstrate how to connect to a version of
the `can_mov_db` database, which contains information about Canadian movies.
Note that the `host` (`fakeserver.stat.ubc.ca`), `user` (`user0001`), and
`password` (`abc123`) below are *not real*; you will not actually
be able to connect to a database using this information.
```
library(RPostgres)
canmov_conn <- dbConnect(RPostgres::Postgres(), dbname = "can_mov_db",
host = "fakeserver.stat.ubc.ca", port = 5432,
user = "user0001", password = "abc123")
```
After opening the connection, everything looks and behaves almost identically
to when we were using an SQLite database in R. For example, we can again use
`dbListTables` to find out what tables are in the `can_mov_db` database:
```
dbListTables(canmov_conn)
```
```
[1] "themes" "medium" "titles" "title_aliases" "forms"
[6] "episodes" "names" "names_occupations" "occupation" "ratings"
```
We see that there are 10 tables in this database. Let’s first look at the
`"ratings"` table to find the lowest rating that exists in the `can_mov_db`
database:
```
ratings_db <- tbl(canmov_conn, "ratings")
ratings_db
```
```
# Source: table<ratings> [?? x 3]
# Database: postgres [user0001@fakeserver.stat.ubc.ca:5432/can_mov_db]
title average_rating num_votes
<chr> <dbl> <int>
1 The Grand Seduction 6.6 150
2 Rhymes for Young Ghouls 6.3 1685
3 Mommy 7.5 1060
4 Incendies 6.1 1101
5 Bon Cop, Bad Cop 7.0 894
6 Goon 5.5 1111
7 Monsieur Lazhar 5.6 610
8 What if 5.3 1401
9 The Barbarian Invations 5.8 99
10 Away from Her 6.9 2311
# … with more rows
```
To find the lowest rating that exists in the database, we first need to
extract the `average_rating` column using `select`:
```
avg_rating_db <- select(ratings_db, average_rating)
avg_rating_db
```
```
# Source: lazy query [?? x 1]
# Database: postgres [user0001@fakeserver.stat.ubc.ca:5432/can_mov_db]
average_rating
<dbl>
1 6.6
2 6.3
3 7.5
4 6.1
5 7.0
6 5.5
7 5.6
8 5.3
9 5.8
10 6.9
# … with more rows
```
Next we use `min` to find the minimum rating in that column:
```
min(avg_rating_db)
```
```
Error in min(avg_rating_db) : invalid 'type' (list) of argument
```
Instead of the minimum, we get an error! This is another example of when we
need to use the `collect` function to bring the data into R for further
computation:
```
avg_rating_data <- collect(avg_rating_db)
min(avg_rating_data)
```
```
[1] 1
```
We see the lowest rating given to a movie is 1, indicating that it must have
been a really bad movie…
### 2\.6\.3 Why should we bother with databases at all?
Opening a database
involved a lot more effort than just opening a `.csv`, `.tsv`, or any of the
other plain text or Excel formats. We had to open a connection to the database,
then use `dbplyr` to translate `tidyverse`\-like
commands (`filter`, `select` etc.) into SQL commands that the database
understands, and then finally `collect` the results. And not
all `tidyverse` commands can currently be translated to work with
databases. For example, we can compute a mean with a database
but can’t easily compute a median. So you might be wondering: why should we use
databases at all?
Databases are beneficial in a large\-scale setting:
* They enable storing large data sets across multiple computers with backups.
* They provide mechanisms for ensuring data integrity and validating input.
* They provide security and data access control.
* They allow multiple users to access data simultaneously
and remotely without conflicts and errors.
For example, there were billions of Google searches conducted daily as of 2021 ([Real Time Statistics Project 2021](#ref-googlesearches)).
Can you imagine if Google stored all of the data
from those searches in a single `.csv` file!? Chaos would ensue!
2\.7 Writing data from R to a `.csv` file
-----------------------------------------
At the middle and end of a data analysis, we often want to write a data frame
that has changed (either through filtering, selecting, mutating or summarizing)
to a file to share it with others or use it for another step in the analysis.
The most straightforward way to do this is to use the `write_csv` function
from the `tidyverse` package. The default
arguments for this function are to use a comma (`,`) as the delimiter and include
column names. Below we demonstrate creating a new version of the Canadian
languages data set without the official languages category according to the
Canadian 2016 Census, and then writing this to a `.csv` file:
```
no_official_lang_data <- filter(canlang_data, category != "Official languages")
write_csv(no_official_lang_data, "data/no_official_languages.csv")
```
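If you ever need a different delimiter when writing, `readr` also provides `write_tsv` and `write_delim` as counterparts to the reading functions we saw earlier; for example, a small sketch writing the same data frame with tabs as the delimiter:

```
# write the filtered data frame to a tab-separated file instead
write_tsv(no_official_lang_data, "data/no_official_languages.tsv")
```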
2\.8 Obtaining data from the web
--------------------------------
> **Note:** This section is not required reading for the remainder of the textbook. It
> is included for those readers interested in learning a little bit more about
> how to obtain different types of data from the web.
Data doesn’t just magically appear on your computer; you need to get it from
somewhere. Earlier in the chapter we showed you how to access data stored in a
plain text, spreadsheet\-like format (e.g., comma\- or tab\-separated) from a web
URL using one of the `read_*` functions from the `tidyverse`. But as time goes
on, it is increasingly uncommon to find data (especially large amounts of data)
in this format available for download from a URL. Instead, websites now often
offer something known as an **a**pplication **p**rogramming **i**nterface
(API), which
provides a programmatic way to ask for subsets of a data set. This allows the
website owner to control *who* has access to the data, *what portion* of the
data they have access to, and *how much* data they can access. Typically, the
website owner will give you a *token* or *key* (a secret string of characters somewhat
like a password) that you have to provide when accessing the API.
Another interesting thought: websites themselves *are* data! When you type a
URL into your browser window, your browser asks the *web server* (another
computer on the internet whose job it is to respond to requests for the
website) to give it the website’s data, and then your browser translates that
data into something you can see. If the website shows you some information that
you’re interested in, you could *create* a data set for yourself by copying and
pasting that information into a file. This process of taking information
directly from what a website displays is called
*web scraping* (or sometimes *screen scraping*). Now, of course, copying and pasting
information manually is a painstaking and error\-prone process, especially when
there is a lot of information to gather. So instead of asking your browser to
translate the information that the web server provides into something you can
see, you can collect that data programmatically—in the form of
**h**yper**t**ext **m**arkup **l**anguage
(HTML)
and **c**ascading **s**tyle **s**heet (CSS) code—and process it
to extract useful information. HTML provides the
basic structure of a site and tells the webpage how to display the content
(e.g., titles, paragraphs, bullet lists etc.), whereas CSS helps style the
content and tells the webpage how the HTML elements should
be presented (e.g., colors, layouts, fonts etc.).
This subsection will show you the basics of both web scraping
with the [`rvest` R package](https://rvest.tidyverse.org/) ([Wickham 2021a](#ref-rvest))
and accessing the NASA “Astronomy Picture of the Day” API
using the [`httr2` R package](https://httr2.r-lib.org/) ([Wickham 2023](#ref-httr2)).
### 2\.8\.1 Web scraping
#### HTML and CSS selectors
When you enter a URL into your browser, your browser connects to the
web server at that URL and asks for the *source code* for the website.
This is the data that the browser translates
into something you can see; so if we
are going to create our own data by scraping a website, we have to first understand
what that data looks like! For example, let’s say we are interested
in knowing the average rental price (per square foot) of the most recently
available one\-bedroom apartments in Vancouver
on [Craiglist](https://vancouver.craigslist.org). When we visit the Vancouver Craigslist
website and search for one\-bedroom apartments,
we should see something similar to Figure [2\.3](reading.html#fig:craigslist-human).
Figure 2\.3: Craigslist webpage of advertisements for one\-bedroom apartments.
Based on what our browser shows us, it’s pretty easy to find the size and price
for each apartment listed. But we would like to be able to obtain that information
using R, without any manual human effort or copying and pasting. We do this by
examining the *source code* that the web server actually sent our browser to
display for us. We show a snippet of it below; the
entire source
is [included with the code for this book](https://github.com/UBC-DSCI/introduction-to-datascience/blob/main/img/reading/website_source.txt):
```
<span class="result-meta">
<span class="result-price">$800</span>
<span class="housing">
1br -
</span>
<span class="result-hood"> (13768 108th Avenue)</span>
<span class="result-tags">
<span class="maptag" data-pid="6786042973">map</span>
</span>
<span class="banish icon icon-trash" role="button">
<span class="screen-reader-text">hide this posting</span>
</span>
<span class="unbanish icon icon-trash red" role="button"></span>
<a href="#" class="restore-link">
<span class="restore-narrow-text">restore</span>
<span class="restore-wide-text">restore this posting</span>
</a>
<span class="result-price">$2285</span>
</span>
```
Oof…you can tell that the source code for a web page is not really designed
for humans to understand easily. However, if you look through it closely, you
will find that the information we’re interested in is hidden among the muck.
For example, near the top of the snippet
above you can see a line that looks like
```
<span class="result-price">$800</span>
```
That snippet is definitely storing the price of a particular apartment. With some more
investigation, you should be able to find things like the date and time of the
listing, the address of the listing, and more. So this source code most likely
contains all the information we are interested in!
Let’s dig into that line above a bit more. You can see that
that bit of code has an *opening tag* (words between `<` and `>`, like
`<span>`) and a *closing tag* (the same with a slash, like `</span>`). HTML
source code generally stores its data between opening and closing tags like
these. Tags are keywords that tell the web browser how to display or format
the content. Above you can see that the information we want (`$800`) is stored
between an opening and closing tag (`<span>` and `</span>`). In the opening
tag, you can also see a very useful “class” (a special word that is sometimes
included with opening tags): `class="result-price"`. Since we want R to
programmatically sort through all of the source code for the website to find
apartment prices, maybe we can look for all the tags with the `"result-price"`
class, and grab the information between the opening and closing tag. Indeed,
take a look at another line of the source snippet above:
```
<span class="result-price">$2285</span>
```
It’s yet another price for an apartment listing, and the tags surrounding it
have the `"result-price"` class. Wonderful! Now that we know what pattern we
are looking for—a dollar amount between opening and closing tags that have the
`"result-price"` class—we should be able to use code to pull out all of the
matching patterns from the source code to obtain our data. This sort of “pattern”
is known as a *CSS selector* (where CSS stands for **c**ascading **s**tyle **s**heet).
The above was a simple example of “finding the pattern to look for”; many
websites are quite a bit larger and more complex, and so is their website
source code. Fortunately, there are tools available to make this process
easier. For example,
[SelectorGadget](https://selectorgadget.com/) is
an open\-source tool that simplifies generating and finding CSS selectors.
At the end of the chapter in the additional resources section, we include a link to
a short video on how to install and use the SelectorGadget tool to
obtain CSS selectors for use in web scraping.
After installing and enabling the tool, you can click the
website element for which you want an appropriate selector. For
example, if we click the price of an apartment listing, we
find that SelectorGadget shows us the selector `.result-price`
in its toolbar, and highlights all the other apartment
prices that would be obtained using that selector (Figure [2\.4](reading.html#fig:sg1)).
Figure 2\.4: Using the SelectorGadget on a Craigslist webpage to obtain the CSS selector useful for obtaining apartment prices.
If we then click the size of an apartment listing, SelectorGadget shows us
the `span` selector, and highlights many of the lines on the page; this indicates that the
`span` selector is not specific enough to capture only apartment sizes (Figure [2\.5](reading.html#fig:sg3)).
Figure 2\.5: Using the SelectorGadget on a Craigslist webpage to obtain a CSS selector useful for obtaining apartment sizes.
To narrow the selector, we can click one of the highlighted elements that
we *do not* want. For example, we can deselect the “pic/map” links,
resulting in only the data we want highlighted using the `.housing` selector (Figure [2\.6](reading.html#fig:sg2)).
Figure 2\.6: Using the SelectorGadget on a Craigslist webpage to refine the CSS selector to one that is most useful for obtaining apartment sizes.
So to scrape information about the square footage and rental price
of apartment listings, we need to use
the two CSS selectors `.housing` and `.result-price`, respectively.
SelectorGadget returns them to us as a comma\-separated list (here
`.housing , .result-price`), which is exactly the format we need to provide to
R if we are using more than one CSS selector.
**Caution: are you allowed to scrape that website?**
*Before* scraping data from the web, you should always check whether or not
you are *allowed* to scrape it! There are two documents that are important
for this: the `robots.txt` file and the Terms of Service
document. If we take a look at [Craigslist’s Terms of Service document](https://www.craigslist.org/about/terms.of.use),
we find the following text: *“You agree not to copy/collect CL content
via robots, spiders, scripts, scrapers, crawlers, or any automated or manual equivalent (e.g., by hand).”*
So unfortunately, without explicit permission, we are not allowed to scrape the website.
What to do now? Well, we *could* ask the owner of Craigslist for permission to scrape.
However, we are not likely to get a response, and even if we did they would not likely give us permission.
The more realistic answer is that we simply cannot scrape Craigslist. If we still want
to find data about rental prices in Vancouver, we must go elsewhere.
To continue learning how to scrape data from the web, let’s instead
scrape data on the population of Canadian cities from Wikipedia.
We have checked the [Terms of Service document](https://foundation.wikimedia.org/wiki/Terms_of_Use/en),
and it does not mention that web scraping is disallowed.
We will use the SelectorGadget tool to pick elements that we are interested in
(city names and population counts) and deselect others to indicate that we are not
interested in them (province names), as shown in Figure [2\.7](reading.html#fig:sg4).
Figure 2\.7: Using the SelectorGadget on a Wikipedia webpage.
We include a link to a short video tutorial on this process at the end of the chapter
in the additional resources section. SelectorGadget provides in its toolbar
the following list of CSS selectors to use:
```
td:nth-child(8) ,
td:nth-child(4) ,
.largestCities-cell-background+ td a
```
Now that we have the CSS selectors that describe the properties of the elements
that we want to target, we can use them to find certain elements in web pages and extract data.
#### Using `rvest`
We will use the `rvest` R package to scrape data from the Wikipedia page.
We start by loading the `rvest` package:
```
library(rvest)
```
Next, we tell R what page we want to scrape by providing the webpage’s URL in quotations to the function `read_html`:
```
page <- read_html("https://en.wikipedia.org/wiki/Canada")
```
The `read_html` function directly downloads the source code for the page at
the URL you specify, just like your browser would if you navigated to that site. But
instead of displaying the website to you, the `read_html` function just returns
the HTML source code itself, which we have
stored in the `page` variable. Next, we send the page object to the `html_nodes`
function, along with the CSS selectors we obtained from
the SelectorGadget tool. Make sure to surround the selectors with quotation marks; the `html_nodes` function expects that
argument to be a string. We store the result of the `html_nodes` function in the `population_nodes` variable.
Note that below we use the `paste` function with a comma separator (`sep=","`)
to build the selector string. The `paste` function converts its
arguments to characters and combines them into a single string. We build
the selector string over several lines to maintain code readability; this avoids
having a very long line of code.
```
selectors <- paste("td:nth-child(8)",
"td:nth-child(4)",
".largestCities-cell-background+ td a", sep = ",")
population_nodes <- html_nodes(page, selectors)
head(population_nodes)
```
```
## {xml_nodeset (6)}
## [1] <a href="/wiki/Greater_Toronto_Area" title="Greater Toronto Area">Toronto ...
## [2] <td style="text-align:right;">6,202,225</td>
## [3] <a href="/wiki/London,_Ontario" title="London, Ontario">London</a>
## [4] <td style="text-align:right;">543,551\n</td>
## [5] <a href="/wiki/Greater_Montreal" title="Greater Montreal">Montreal</a>
## [6] <td style="text-align:right;">4,291,732</td>
```
> **Note:** `head` is a function that is often useful for viewing only a short
> summary of an R object, rather than the whole thing (which may be quite a lot
> to look at). For example, here `head` shows us only the first 6 items in the
> `population_nodes` object. Note that some R objects by default print only a
> small summary. For example, `tibble` data frames only show you the first 10 rows.
> But not *all* R objects do this, and that’s where the `head` function helps
> summarize things for you.
Each of the items in the `population_nodes` list is a *node* from the HTML
document that matches the CSS selectors you specified. A *node* is an HTML tag
pair (e.g., `<td>` and `</td>` which defines the cell of a table) combined with
the content stored between the tags. For our CSS selector `td:nth-child(4)`, an
example node that would be selected would be:
```
<td style="text-align:left;background:#f0f0f0;">
<a href="/wiki/London,_Ontario" title="London, Ontario">London</a>
</td>
```
Next we extract the meaningful data—in other words, we get rid of the
HTML code syntax and tags—from the nodes using the `html_text` function.
In the case of the example node above, the `html_text` function returns `"London"`.
```
population_text <- html_text(population_nodes)
head(population_text)
```
```
## [1] "Toronto" "6,202,225" "London" "543,551\n" "Montreal" "4,291,732"
```
Fantastic! We seem to have extracted the data of interest from the
raw HTML source code. But we are not quite done; the data
is not yet in an optimal format for data analysis. Both the city names and
population are encoded as characters in a single vector, instead of being in a
data frame with one character column for city and one numeric column for
population (like a spreadsheet).
Additionally, the populations contain commas (not useful for programmatically
dealing with numbers), and some even contain a line break character at the end
(`\n`). In Chapter [3](wrangling.html#wrangling), we will learn more about how to *wrangle* data
such as this into a more useful format for data analysis using R.
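To give a sense of where that wrangling is headed, here is a minimal sketch of one way to reshape the scraped text into a data frame. It assumes that the entries of `population_text` strictly alternate between a city name and its population count (as the preview above suggests), so check that before relying on it; `parse_number` from the `tidyverse` strips out the commas and stray line breaks.
```
library(tidyverse)

# assumption: odd entries are city names, even entries are population counts
city_names <- population_text[seq(1, length(population_text), by = 2)]
populations <- population_text[seq(2, length(population_text), by = 2)]

# parse_number drops the commas and the trailing "\n" characters
canada_city_pop <- tibble(
  city = city_names,
  population = parse_number(populations)
)
canada_city_pop
```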
### 2\.8\.2 Using an API
Rather than posting a data file at a URL for you to download, many websites these days
provide an API that must be accessed through a programming language like R. The benefit of using an API
is that data owners have much more control over the data they provide to users. However, unlike
web scraping, there is no consistent way to access an API across websites. Every website typically
has its own API designed especially for its own use case. Therefore we will just provide one example
of accessing data through an API in this book, with the hope that it gives you enough of a basic
idea that you can learn how to use another API if needed. In particular, in this book we will show you the basics
of how to use the `httr2` package in R to access data from the NASA “Astronomy Picture
of the Day” API (a great source of desktop backgrounds, by the way—take a look at the stunning
picture of the Rho\-Ophiuchi cloud complex ([NASA et al. 2023](#ref-rhoophiuchi)) in Figure [2\.8](reading.html#fig:NASA-API-Rho-Ophiuchi) from July 13, 2023!).
Figure 2\.8: The James Webb Space Telescope’s NIRCam image of the Rho Ophiuchi molecular cloud complex.
First, you will need to visit the [NASA APIs page](https://api.nasa.gov/) and generate an API key (i.e., a password used to identify you when accessing the API).
Note that a valid email address is required to
associate with the key. The signup form looks something like Figure [2\.9](reading.html#fig:NASA-API-signup).
After filling out the basic information, you will receive the token via email.
Make sure to store the key in a safe place, and keep it private.
Figure 2\.9: Generating the API access token for the NASA API
**Caution: think about your API usage carefully!**
When you access an API, you are initiating a transfer of data from a web server
to your computer. Web servers are expensive to run and do not have infinite resources.
If you try to ask for *too much data* at once, you can use up a huge amount of the server’s bandwidth.
If you try to ask for data *too frequently*—e.g., if you
make many requests to the server in quick succession—you can also bog the server down and make
it unable to talk to anyone else. Most servers have mechanisms to revoke your access if you are not
careful, but you should try to prevent issues from happening in the first place by being extra careful
with how you write and run your code. You should also keep in mind that when a website owner
grants you API access, they also usually specify a limit (or *quota*) of how much data you can ask for.
Be careful not to overrun your quota! So *before* we try to use the API, we will first visit
[the NASA website](https://api.nasa.gov/) to see what limits we should abide by when using the API.
These limits are outlined in Figure [2\.10](reading.html#fig:NASA-API-limits).
Figure 2\.10: The NASA website specifies an hourly limit of 1,000 requests.
After checking the NASA website, it seems like we can send at most 1,000 requests per hour.
That should be more than enough for our purposes in this section.
#### Accessing the NASA API
The NASA API is what is known as an *HTTP API*: this is a particularly common
kind of API, where you can obtain data simply by accessing a
particular URL as if it were a regular website. To make a query to the NASA
API, we need to specify three things. First, we specify the URL *endpoint* of
the API, which is simply a URL that helps the remote server understand which
API you are trying to access. NASA offers a variety of APIs, each with its own
endpoint; in the case of the NASA “Astronomy Picture of the Day” API, the URL
endpoint is `https://api.nasa.gov/planetary/apod`. Second, we write `?`, which denotes that a
list of *query parameters* will follow. And finally, we specify a list of
query parameters of the form `parameter=value`, separated by `&` characters. The NASA
“Astronomy Picture of the Day” API accepts the parameters shown in
Figure [2\.11](reading.html#fig:NASA-API-parameters).
Figure 2\.11: The set of parameters that you can specify when querying the NASA “Astronomy Picture of the Day” API, along with syntax, default settings, and a description of each.
So for example, to obtain the image of the day
from July 13, 2023, the API query would have two parameters: `api_key=YOUR_API_KEY`
and `date=2023-07-13`. Remember to replace `YOUR_API_KEY` with the API key you
received from NASA in your email! Putting it all together, the query will look like the following:
```
https://api.nasa.gov/planetary/apod?api_key=YOUR_API_KEY&date=2023-07-13
```
If you try putting this URL into your web browser, you’ll actually find that the server
responds to your request with some text:
```
{"date":"2023-07-13","explanation":"A mere 390 light-years away, Sun-like stars
and future planetary systems are forming in the Rho Ophiuchi molecular cloud
complex, the closest star-forming region to our fair planet. The James Webb
Space Telescope's NIRCam peered into the nearby natal chaos to capture this
infrared image at an inspiring scale. The spectacular cosmic snapshot was
released to celebrate the successful first year of Webb's exploration of the
Universe. The frame spans less than a light-year across the Rho Ophiuchi region
and contains about 50 young stars. Brighter stars clearly sport Webb's
characteristic pattern of diffraction spikes. Huge jets of shocked molecular
hydrogen blasting from newborn stars are red in the image, with the large,
yellowish dusty cavity carved out by the energetic young star near its center.
Near some stars in the stunning image are shadows cast by their protoplanetary
disks.","hdurl":"https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph.png",
"media_type":"image","service_version":"v1","title":"Webb's
Rho Ophiuchi","url":"https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph1024.png"}
```
Neat! There is definitely some data there, but it’s a bit hard to
see what it all is. As it turns out, this is a common format for data called
*JSON* (JavaScript Object Notation).
We won’t encounter this kind of data much in this book,
but for now you can interpret this data as `key : value` pairs separated by
commas. For example, if you look closely, you’ll see that the first entry is
`"date":"2023-07-13"`, which indicates that we indeed successfully received
data corresponding to July 13, 2023\.
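If you would like to see how JSON text maps onto R objects, the `jsonlite` package (installed as a dependency of `httr2`, which uses it behind the scenes for JSON handling) can convert a JSON string into a named list. The snippet below is just a small illustration using a hand\-typed excerpt of the response above; it is not part of the API workflow itself.
```
library(jsonlite)

# a short, hand-typed excerpt of the JSON response shown above
json_text <- '{"date":"2023-07-13","media_type":"image"}'

parsed <- fromJSON(json_text)
parsed$date        # returns "2023-07-13"
parsed$media_type  # returns "image"
```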
So now our job is to do all of this programmatically in R. We will load
the `httr2` package, and construct the query using the `request` function, which takes a single URL argument;
you will recognize the same query URL that we pasted into the browser earlier.
We will then send the query using the `req_perform` function, and finally
obtain a JSON representation of the response using the `resp_body_json` function.
```
library(httr2)
req <- request("https://api.nasa.gov/planetary/apod?api_key=YOUR_API_KEY&date=2023-07-13")
resp <- req_perform(req)
nasa_data_single <- resp_body_json(resp)
nasa_data_single
```
```
## $date
## [1] "2023-07-13"
##
## $explanation
## [1] "A mere 390 light-years away, Sun-like stars and future planetary systems are forming in the Rho Ophiuchi molecular cloud complex, the closest star-forming region to our fair planet. The James Webb Space Telescope's NIRCam peered into the nearby natal chaos to capture this infrared image at an inspiring scale. The spectacular cosmic snapshot was released to celebrate the successful first year of Webb's exploration of the Universe. The frame spans less than a light-year across the Rho Ophiuchi region and contains about 50 young stars. Brighter stars clearly sport Webb's characteristic pattern of diffraction spikes. Huge jets of shocked molecular hydrogen blasting from newborn stars are red in the image, with the large, yellowish dusty cavity carved out by the energetic young star near its center. Near some stars in the stunning image are shadows cast by their protoplanetary disks."
##
## $hdurl
## [1] "https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph.png"
##
## $media_type
## [1] "image"
##
## $service_version
## [1] "v1"
##
## $title
## [1] "Webb's Rho Ophiuchi"
##
## $url
## [1] "https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph1024.png"
```
We can obtain more records at once by using the `start_date` and `end_date` parameters, as
shown in the table of parameters in Figure [2\.11](reading.html#fig:NASA-API-parameters).
Let’s obtain all the records between May 1, 2023, and July 13, 2023, and store the result
in an object called `nasa_data`; now the response
will take the form of an R *list* (you’ll learn more about these in Chapter [3](wrangling.html#wrangling)).
Each item in the list will correspond to a single day’s record (just like the `nasa_data_single` object),
and there will be 74 items total, one for each day between the start and end dates:
```
req <- request("https://api.nasa.gov/planetary/apod?api_key=YOUR_API_KEY&start_date=2023-05-01&end_date=2023-07-13")
resp <- req_perform(req)
nasa_data <- resp_body_json(resp)
length(nasa_data)
```
```
## [1] 74
```
For further data processing using the techniques in this book, you’ll need to turn this list of items
into a data frame. Here we will extract the `date`, `title`, `copyright`, and `url` variables
from the JSON data, and construct a data frame using the extracted information.
> **Note:** Understanding this code is not required for the remainder of the textbook. It is included for those
> readers who would like to parse JSON data into a data frame in their own data analyses.
```
nasa_df_all <- tibble(bind_rows(lapply(nasa_data, as.data.frame.list)))
nasa_df <- select(nasa_df_all, date, title, copyright, url)
nasa_df
```
```
## # A tibble: 74 × 4
## date title copyright url
## <chr> <chr> <chr> <chr>
## 1 2023-05-01 Carina Nebula North "\nCarlos Tayl… http…
## 2 2023-05-02 Flat Rock Hills on Mars "\nNASA, \nJPL… http…
## 3 2023-05-03 Centaurus A: A Peculiar Island of Stars "\nMarco Loren… http…
## 4 2023-05-04 The Galaxy, the Jet, and a Famous Black Hole <NA> http…
## 5 2023-05-05 Shackleton from ShadowCam <NA> http…
## 6 2023-05-06 Twilight in a Flower "Dario Giannob… http…
## 7 2023-05-07 The Helix Nebula from CFHT <NA> http…
## 8 2023-05-08 The Spanish Dancer Spiral Galaxy <NA> http…
## 9 2023-05-09 Shadows of Earth "\nMarcella Gi… http…
## 10 2023-05-10 Milky Way over Egyptian Desert "\nAmr Abdulwa… http…
## # ℹ 64 more rows
```
Success—we have created a small data set using the NASA
API! This data is also quite different from what we obtained from web scraping;
the extracted information is readily available in a JSON format, as opposed to raw
HTML code (although not *every* API will provide data in such a nice format).
From this point onward, the `nasa_df` data frame is stored on your
machine, and you can play with it to your heart’s content. For example, you can use
`write_csv` to save it to a file and `read_csv` to read it into R again later;
and after reading the next few chapters you will have the skills to
do even more interesting things! If you decide that you want
to ask any of the various NASA APIs for more data
(see [the list of awesome NASA APIS here](https://api.nasa.gov/)
for more examples of what is possible), just be mindful as usual about how much
data you are requesting and how frequently you are making requests.
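For instance, with the `tidyverse` loaded, a quick round trip to disk might look like the sketch below (the file name is just an example, and it assumes a `data` folder exists in your working directory).
```
# save the NASA data frame to a .csv file, then read it back in later
write_csv(nasa_df, "data/nasa_apod.csv")
nasa_df_again <- read_csv("data/nasa_apod.csv")
```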
2\.9 Exercises
--------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Reading in data locally and from the web” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
2\.10 Additional resources
--------------------------
* The [`readr` documentation](https://readr.tidyverse.org/)
provides the documentation for many of the reading functions we cover in this chapter.
It is where you should look if you want to learn more about the functions in this
chapter, the full set of arguments you can use, and other related functions.
The site also provides a very nice cheat sheet that summarizes many of the data
wrangling functions from this chapter.
* Sometimes you might run into data in such poor shape that none of the reading
functions we cover in this chapter work. In that case, you can consult the
[data import chapter](https://r4ds.had.co.nz/data-import.html) from *R for Data
Science* ([Wickham and Grolemund 2016](#ref-wickham2016r)), which goes into a lot more detail about how R parses
text from files into data frames.
* The [`here` R package](https://here.r-lib.org/) ([Müller 2020](#ref-here))
provides a way for you to construct or find your files’ paths.
* The [`readxl` documentation](https://readxl.tidyverse.org/) provides more
details on reading data from Excel, such as reading in data with multiple
sheets, or specifying the cells to read in.
* The [`rio` R package](https://github.com/leeper/rio) ([Leeper 2021](#ref-rio)) provides an alternative
set of tools for reading and writing data in R. It aims to be a “Swiss army
knife” for data reading/writing/converting, and supports a wide variety of data
types (including data formats generated by other statistical software like SPSS
and SAS).
* A [video](https://www.youtube.com/embed/ephId3mYu9o) from the Udacity
course *Linux Command Line Basics* provides a good explanation of absolute versus relative paths.
* If you read the subsection on obtaining data from the web via scraping and
APIs, we provide two companion tutorial video links for how to use the
SelectorGadget tool to obtain desired CSS selectors for:
+ [extracting the data for apartment listings on Craigslist](https://www.youtube.com/embed/YdIWI6K64zo), and
+ [extracting Canadian city names and populations from Wikipedia](https://www.youtube.com/embed/O9HKbdhqYzk).
* The [`polite` R package](https://dmi3kno.github.io/polite/) ([Perepolkin 2021](#ref-polite)) provides
a set of tools for responsibly scraping data from websites.
2\.1 Overview
-------------
In this chapter, you’ll learn to read tabular data of various formats into R
from your local device (e.g., your laptop) and the web. “Reading” (or “loading”)
is the process of
converting data (stored as plain text, a database, HTML, etc.) into an object
(e.g., a data frame) that R can easily access and manipulate. Thus reading data
is the gateway to any data analysis; you won’t be able to analyze data unless
you’ve loaded it first. And because there are many ways to store data, there
are similarly many ways to read data into R. The more time you spend upfront
matching the data reading method to the type of data you have, the less time
you will have to devote to re\-formatting, cleaning and wrangling your data (the
second step to all data analyses). It’s like making sure your shoelaces are
tied well before going for a run so that you don’t trip later on!
2\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Define the types of paths and use them to locate files:
+ absolute file path
+ relative file path
+ Uniform Resource Locator (URL)
* Read data into R from various types of path using:
+ `read_csv`
+ `read_tsv`
+ `read_csv2`
+ `read_delim`
+ `read_excel`
* Compare and contrast the `read_*` functions.
* Describe when to use the following `read_*` function arguments:
+ `skip`
+ `delim`
+ `col_names`
* Choose the appropriate `tidyverse` `read_*` function and function arguments to load a given plain text tabular data set into R.
* Use the `rename` function to rename columns in a data frame.
* Use the `read_excel` function and arguments to load a sheet from an Excel file into R.
* Work with databases using functions from `dbplyr` and `DBI`:
+ Connect to a database with `dbConnect`.
+ List tables in the database with `dbListTables`.
+ Create a reference to a database table with `tbl`.
+ Bring data from a database into R using `collect`.
* Use `write_csv` to save a data frame to a `.csv` file.
* (*Optional*) Obtain data from the web using scraping and application programming interfaces (APIs):
+ Read HTML source code from a URL using the `rvest` package.
+ Read data from the NASA “Astronomy Picture of the Day” API using the `httr2` package.
+ Compare downloading tabular data from a plain text file (e.g., `.csv`), accessing data from an API, and scraping the HTML source code from a website.
2\.3 Absolute and relative file paths
-------------------------------------
This chapter will discuss the different functions we can use to import data
into R, but before we can talk about *how* we read the data into R with these
functions, we first need to talk about *where* the data lives. When you load a
data set into R, you first need to tell R where those files live. The file
could live on your computer (*local*)
or somewhere on the internet (*remote*).
The place where the file lives on your computer is referred to as its “path”. You can
think of the path as directions to the file. There are two kinds of paths:
*relative* paths and *absolute* paths. A relative path indicates where the file is
with respect to your *working directory* (i.e., “where you are currently”) on the computer.
On the other hand, an absolute path indicates where the file is
with respect to the computer’s filesystem base (or *root*) folder, regardless of where you are working.
Suppose our computer’s filesystem looks like the picture in Figure
[2\.1](reading.html#fig:file-system-for-export-to-intro-datascience). We are working in a
file titled `project3.ipynb`, and our current working directory is `project3`;
typically, as is the case here, the working directory is the directory containing the file you are currently
working on.
Figure 2\.1: Example file system.
Let’s say we wanted to open the `happiness_report.csv` file. We have two options to indicate
where the file is: using a relative path, or using an absolute path.
The absolute path of the file always starts with a slash `/`—representing the root folder on the computer—and
proceeds by listing out the sequence of folders you would have to enter to reach the file, each separated by another slash `/`.
So in this case, `happiness_report.csv` would be reached by starting at the root, and entering the `home` folder,
then the `dsci-100` folder, then the `project3` folder, and then finally the `data` folder. So its absolute
path would be `/home/dsci-100/project3/data/happiness_report.csv`. We can load the file using its absolute path
as a string passed to the `read_csv` function.
```
happy_data <- read_csv("/home/dsci-100/project3/data/happiness_report.csv")
```
If we instead wanted to use a relative path, we would need to list out the sequence of steps needed to get from our current
working directory to the file, with slashes `/` separating each step. Since we are currently in the `project3` folder,
we just need to enter the `data` folder to reach our desired file. Hence the relative path is `data/happiness_report.csv`,
and we can load the file using its relative path as a string passed to `read_csv`.
```
happy_data <- read_csv("data/happiness_report.csv")
```
Note that there is no forward slash at the beginning of a relative path; if we accidentally typed `"/data/happiness_report.csv"`,
R would look for a folder named `data` in the root folder of the computer—but that doesn’t exist!
Aside from specifying places to go in a path using folder names (like `data` and `project3`), we can also specify two additional
special places: the *current directory* and the *previous directory*.
We indicate the current working directory with a single dot `.`, and
the previous directory with two dots `..`. So for instance, if we wanted to reach the `bike_share.csv` file from the `project3` folder, we could
use the relative path `../project2/bike_share.csv`. We can even combine these two; for example, we could reach the `bike_share.csv` file using
the (very silly) path `../project2/../project2/./bike_share.csv` with quite a few redundant directions: it says to go back a folder, then open `project2`,
then go back a folder again, then open `project2` again, then stay in the current directory, then finally get to `bike_share.csv`. Whew, what a long trip!
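For instance, still working from the `project3` folder in Figure [2\.1](reading.html#fig:file-system-for-export-to-intro-datascience), reading the bike share data with that relative path would look like the following (a sketch that assumes the folder layout shown in the figure).
```
# go up one folder, into project2, and read the file from there
bike_data <- read_csv("../project2/bike_share.csv")
```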
So which kind of path should you use: relative, or absolute? Generally speaking, you should use relative paths.
Using a relative path helps ensure that your code can be run
on a different computer (and as an added bonus, relative paths are often shorter—easier to type!).
This is because a file’s relative path is often the same across different computers, while a
file’s absolute path (the names of
all of the folders between the computer’s root, represented by `/`, and the file) isn’t usually the same
across different computers. For example, suppose Fatima and Jayden are working on a
project together on the `happiness_report.csv` data. Fatima’s file is stored at
`/home/Fatima/project3/data/happiness_report.csv`,
while Jayden’s is stored at
`/home/Jayden/project3/data/happiness_report.csv`.
Even though Fatima and Jayden stored their files in the same place on their
computers (in their home folders), the absolute paths are different due to
their different usernames. If Jayden has code that loads the
`happiness_report.csv` data using an absolute path, the code won’t work on
Fatima’s computer. But the relative path from inside the `project3` folder
(`data/happiness_report.csv`) is the same on both computers; any code that uses
relative paths will work on both! In the additional resources section,
we include a link to a short video on the
difference between absolute and relative paths. You can also check out the
`here` package, which provides methods for finding and constructing file paths
in R.
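As a brief sketch of the latter: the `here` function builds a path relative to the top of your project folder (which it infers from markers such as an `.Rproj` file), so the same call works regardless of which subfolder your code happens to run from. The file name below matches the example above; this assumes you are working inside a project that `here` can recognize.
```
library(tidyverse)
library(here)

# here() constructs the path from the project root down to the file
happy_data <- read_csv(here("data", "happiness_report.csv"))
```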
Beyond files stored on your computer (i.e., locally), we also need a way to locate resources
stored elsewhere on the internet (i.e., remotely). For this purpose we use a
*Uniform Resource Locator (URL)*, i.e., a web address that looks something
like <https://datasciencebook.ca/>. URLs indicate the location of a resource on the internet, and
start with a web domain, followed by a forward slash `/`, and then a path
to where the resource is located on the remote machine.
2\.4 Reading tabular data from a plain text file into R
-------------------------------------------------------
### 2\.4\.1 `read_csv` to read in comma\-separated values files
Now that we have learned about *where* data could be, we will learn about *how*
to import data into R using various functions. Specifically, we will learn how
to *read* tabular data from a plain text file (a document containing only text)
*into* R and *write* tabular data to a file *out of* R. The function we use to do this
depends on the file’s format. For example, in the last chapter, we learned about using
the `tidyverse` `read_csv` function when reading `.csv` (**c**omma\-**s**eparated **v**alues)
files. In that case, the separator or *delimiter* that divided our columns was a
comma (`,`). We only learned the case where the data matched the expected defaults
of the `read_csv` function
(column names are present, and commas are used as the delimiter between columns).
In this section, we will learn how to read
files that do not satisfy the default expectations of `read_csv`.
Before we jump into the cases where the data aren’t in the expected default format
for `tidyverse` and `read_csv`, let’s revisit the more straightforward
case where the defaults hold, and the only argument we need to give to the function
is the path to the file, `data/can_lang.csv`. The `can_lang` data set contains
language data from the 2016 Canadian census.
We put `data/` before the file’s
name when we are loading the data set because this data set is located in a
sub\-folder, named `data`, relative to where we are running our R code.
Here is what the text in the file `data/can_lang.csv` looks like.
```
category,language,mother_tongue,most_at_home,most_at_work,lang_known
Aboriginal languages,"Aboriginal languages, n.o.s.",590,235,30,665
Non-Official & Non-Aboriginal languages,Afrikaans,10260,4785,85,23415
Non-Official & Non-Aboriginal languages,"Afro-Asiatic languages, n.i.e.",1150,44
Non-Official & Non-Aboriginal languages,Akan (Twi),13460,5985,25,22150
Non-Official & Non-Aboriginal languages,Albanian,26895,13135,345,31930
Aboriginal languages,"Algonquian languages, n.i.e.",45,10,0,120
Aboriginal languages,Algonquin,1260,370,40,2480
Non-Official & Non-Aboriginal languages,American Sign Language,2685,3020,1145,21
Non-Official & Non-Aboriginal languages,Amharic,22465,12785,200,33670
```
And here is a review of how we can use `read_csv` to load it into R. First we
load the `tidyverse` package to gain access to useful
functions for reading the data.
```
library(tidyverse)
```
Next we use `read_csv` to load the data into R, and in that call we specify the
relative path to the file. Note that it is normal and expected that a message is
printed out after using the `read_csv` and related functions. This message lets you know the data types
of each of the columns that R inferred while reading the data into R. In the
future when we use this and related functions to load data in this book, we will
silence these messages to help with the readability of the book.
```
canlang_data <- read_csv("data/can_lang.csv")
```
```
## Rows: 214 Columns: 6
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## chr (2): category, language
## dbl (4): mother_tongue, most_at_home, most_at_work, lang_known
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
```
Finally, to view the first 10 rows of the data frame,
we print it by typing its name:
```
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
### 2\.4\.2 Skipping rows when reading in data
Oftentimes, information about how data was collected, or other relevant
information, is included at the top of the data file. This information is
usually written in sentence and paragraph form, with no delimiter because it is
not organized into columns. An example of this is shown below. This information
gives the data scientist useful context and information about the data,
however, it is not well formatted or intended to be read into a data frame cell
along with the tabular data that follows later in the file.
```
Data source: https://ttimbers.github.io/canlang/
Data originally published in: Statistics Canada Census of Population 2016.
Reproduced and distributed on an as-is basis with their permission.
category,language,mother_tongue,most_at_home,most_at_work,lang_known
Aboriginal languages,"Aboriginal languages, n.o.s.",590,235,30,665
Non-Official & Non-Aboriginal languages,Afrikaans,10260,4785,85,23415
Non-Official & Non-Aboriginal languages,"Afro-Asiatic languages, n.i.e.",1150,44
Non-Official & Non-Aboriginal languages,Akan (Twi),13460,5985,25,22150
Non-Official & Non-Aboriginal languages,Albanian,26895,13135,345,31930
Aboriginal languages,"Algonquian languages, n.i.e.",45,10,0,120
Aboriginal languages,Algonquin,1260,370,40,2480
Non-Official & Non-Aboriginal languages,American Sign Language,2685,3020,1145,21
Non-Official & Non-Aboriginal languages,Amharic,22465,12785,200,33670
```
With this extra information being present at the top of the file, using
`read_csv` as we did previously does not allow us to correctly load the data
into R. In the case of this file we end up only reading in one column of the
data set. In contrast to the normal and expected messages above, this time R
prints out a warning for us indicating that there might be a problem with how
our data is being read in.
```
canlang_data <- read_csv("data/can_lang_meta-data.csv")
```
```
## Warning: One or more parsing issues, call `problems()` on your data frame for details,
## e.g.:
## dat <- vroom(...)
## problems(dat)
```
```
canlang_data
```
```
## # A tibble: 217 × 1
## `Data source: https://ttimbers.github.io/canlang/`
## <chr>
## 1 "Data originally published in: Statistics Canada Census of Population 2016."
## 2 "Reproduced and distributed on an as-is basis with their permission."
## 3 "category,language,mother_tongue,most_at_home,most_at_work,lang_known"
## 4 "Aboriginal languages,\"Aboriginal languages, n.o.s.\",590,235,30,665"
## 5 "Non-Official & Non-Aboriginal languages,Afrikaans,10260,4785,85,23415"
## 6 "Non-Official & Non-Aboriginal languages,\"Afro-Asiatic languages, n.i.e.\",…
## 7 "Non-Official & Non-Aboriginal languages,Akan (Twi),13460,5985,25,22150"
## 8 "Non-Official & Non-Aboriginal languages,Albanian,26895,13135,345,31930"
## 9 "Aboriginal languages,\"Algonquian languages, n.i.e.\",45,10,0,120"
## 10 "Aboriginal languages,Algonquin,1260,370,40,2480"
## # ℹ 207 more rows
```
To successfully read data like this into R, the `skip`
argument can be useful to tell R
how many lines to skip before
it should start reading in the data. In the example above, we would set this
value to 3\.
```
canlang_data <- read_csv("data/can_lang_meta-data.csv",
skip = 3)
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
How did we know to skip three lines? We looked at the data! The first three lines
of the data had information we didn’t need to import:
```
Data source: https://ttimbers.github.io/canlang/
Data originally published in: Statistics Canada Census of Population 2016.
Reproduced and distributed on an as-is basis with their permission.
```
The column names began at line 4, so we skipped the first three lines.
### 2\.4\.3 `read_tsv` to read in tab\-separated values files
Another common way data is stored is with tabs as the delimiter. Notice the
data file, `can_lang.tsv`, has tabs in between the columns instead of
commas.
```
category language mother_tongue most_at_home most_at_work lang_kno
Aboriginal languages Aboriginal languages, n.o.s. 590 235 30 665
Non-Official & Non-Aboriginal languages Afrikaans 10260 4785 85 23415
Non-Official & Non-Aboriginal languages Afro-Asiatic languages, n.i.e. 1150
Non-Official & Non-Aboriginal languages Akan (Twi) 13460 5985 25 22150
Non-Official & Non-Aboriginal languages Albanian 26895 13135 345 31930
Aboriginal languages Algonquian languages, n.i.e. 45 10 0 120
Aboriginal languages Algonquin 1260 370 40 2480
Non-Official & Non-Aboriginal languages American Sign Language 2685 3020
Non-Official & Non-Aboriginal languages Amharic 22465 12785 200 33670
```
We can use the `read_tsv` function
to read in `.tsv` (**t**ab **s**eparated **v**alues) files.
```
canlang_data <- read_tsv("data/can_lang.tsv")
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
If you compare the data frame here to the data frame we obtained in Section
[2\.4\.1](reading.html#readcsv) using `read_csv`, you’ll notice that they look identical:
they have the same number of columns and rows, the same column names, and the same entries! So
even though we needed to use a different
function depending on the file format, our resulting data frame
(`canlang_data`) in both cases was the same.
### 2\.4\.4 `read_delim` as a more flexible method to get tabular data into R
The `read_csv` and `read_tsv` functions are actually just special cases of the more general
`read_delim` function. We can use
`read_delim` to import both comma and tab\-separated values files, and more; we just
have to specify the delimiter.
For example, the `can_lang_no_names.tsv` file contains a different version of
this same data set with no column names and uses tabs as the delimiter
instead of commas.
Here is how the file would look in a plain text editor:
```
Aboriginal languages Aboriginal languages, n.o.s. 590 235 30 665
Non-Official & Non-Aboriginal languages Afrikaans 10260 4785 85 23415
Non-Official & Non-Aboriginal languages Afro-Asiatic languages, n.i.e. 1150
Non-Official & Non-Aboriginal languages Akan (Twi) 13460 5985 25 22150
Non-Official & Non-Aboriginal languages Albanian 26895 13135 345 31930
Aboriginal languages Algonquian languages, n.i.e. 45 10 0 120
Aboriginal languages Algonquin 1260 370 40 2480
Non-Official & Non-Aboriginal languages American Sign Language 2685 3020
Non-Official & Non-Aboriginal languages Amharic 22465 12785 200 33670
Non-Official & Non-Aboriginal languages Arabic 419890 223535 5585 629055
```
To read this into R using the `read_delim` function, we specify the path
to the file as the first argument, provide
the tab character `"\t"` as the `delim` argument,
and set the `col_names` argument to `FALSE` to denote that there are no column names
provided in the data. Note that the `read_csv`, `read_tsv`, and `read_delim` functions
all have a `col_names` argument with
the default value `TRUE`.
> **Note:** `\t` is an example of an *escaped character*,
> which always starts with a backslash (`\`).
> Escaped characters are used to represent non\-printing characters
> (like the tab) or those with special meanings (such as quotation marks).
```
canlang_data <- read_delim("data/can_lang_no_names.tsv",
delim = "\t",
col_names = FALSE)
canlang_data
```
```
## # A tibble: 214 × 6
## X1 X2 X3 X4 X5 X6
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal languages Aborigina… 590 235 30 665
## 2 Non-Official & Non-Aboriginal languages Afrikaans 10260 4785 85 23415
## 3 Non-Official & Non-Aboriginal languages Afro-Asia… 1150 445 10 2775
## 4 Non-Official & Non-Aboriginal languages Akan (Twi) 13460 5985 25 22150
## 5 Non-Official & Non-Aboriginal languages Albanian 26895 13135 345 31930
## 6 Aboriginal languages Algonquia… 45 10 0 120
## 7 Aboriginal languages Algonquin 1260 370 40 2480
## 8 Non-Official & Non-Aboriginal languages American … 2685 3020 1145 21930
## 9 Non-Official & Non-Aboriginal languages Amharic 22465 12785 200 33670
## 10 Non-Official & Non-Aboriginal languages Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
Data frames in R need to have column names. Thus if you read in data
without column names, R will assign names automatically. In this example,
R assigns the column names `X1, X2, X3, X4, X5, X6`.
It is best to rename your columns manually in this scenario. The current
column names (`X1, X2`, etc.) are not very descriptive and will make your analysis confusing.
To rename your columns, you can use the `rename` function
from [the `dplyr` R package](https://dplyr.tidyverse.org/) ([Wickham, François, et al. 2021](#ref-dplyr))
(one of the packages
loaded with `tidyverse`, so we don’t need to load it separately). The first
argument is the data set, and in the subsequent arguments you
write `new_name = old_name` for the selected variables to
rename. We rename the `X1, X2, ..., X6`
columns in the `canlang_data` data frame to more descriptive names below.
```
canlang_data <- rename(canlang_data,
category = X1,
language = X2,
mother_tongue = X3,
most_at_home = X4,
most_at_work = X5,
lang_known = X6)
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
### 2\.4\.5 Reading tabular data directly from a URL
We can also use `read_csv`, `read_tsv`, or `read_delim` (and related functions)
to read in data directly from a **U**niform **R**esource **L**ocator (URL) that
contains tabular data. Here, we provide the URL of a remote file to
`read_*`, instead of a path to a local file on our
computer. We need to surround the URL with quotes similar to when we specify a
path on our local computer. All other arguments that we use are the same as
when using these functions with a local file on our computer.
```
url <- "https://raw.githubusercontent.com/UBC-DSCI/data/main/can_lang.csv"
canlang_data <- read_csv(url)
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
### 2\.4\.6 Downloading data from a URL
Occasionally the data available at a URL is not formatted nicely enough to use
`read_csv`, `read_tsv`, `read_delim`, or other related functions to read the data
directly into R. In situations where it is necessary to download a file
to our local computer prior to working with it in R, we can use the `download.file`
function. The first argument is the URL, and the second is a path where we would
like to store the downloaded file.
```
download.file(url, "data/can_lang.csv")
canlang_data <- read_csv("data/can_lang.csv")
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
### 2\.4\.7 Previewing a data file before reading it into R
In many of the examples above, we gave you previews of the data file before we read
it into R. Previewing data is essential to see whether or not there are column
names, what the delimiters are, and if there are lines you need to skip.
You should do this yourself when trying to read in data files: open the file in
whichever text editor you prefer to inspect its contents prior to reading it into R.
### 2\.4\.1 `read_csv` to read in comma\-separated values files
Now that we have learned about *where* data could be, we will learn about *how*
to import data into R using various functions. Specifically, we will learn how
to *read* tabular data from a plain text file (a document containing only text)
*into* R and *write* tabular data to a file *out of* R. The function we use to do this
depends on the file’s format. For example, in the last chapter, we learned about using
the `tidyverse` `read_csv` function when reading `.csv` (**c**omma\-**s**eparated **v**alues)
files. In that case, the separator or *delimiter* that divided our columns was a
comma (`,`). We only learned the case where the data matched the expected defaults
of the `read_csv` function
(column names are present, and commas are used as the delimiter between columns).
In this section, we will learn how to read
files that do not satisfy the default expectations of `read_csv`.
Before we jump into the cases where the data aren’t in the expected default format
for `tidyverse` and `read_csv`, let’s revisit the more straightforward
case where the defaults hold, and the only argument we need to give to the function
is the path to the file, `data/can_lang.csv`. The `can_lang` data set contains
language data from the 2016 Canadian census.
We put `data/` before the file’s
name when we are loading the data set because this data set is located in a
sub\-folder, named `data`, relative to where we are running our R code.
Here is what the text in the file `data/can_lang.csv` looks like.
```
category,language,mother_tongue,most_at_home,most_at_work,lang_known
Aboriginal languages,"Aboriginal languages, n.o.s.",590,235,30,665
Non-Official & Non-Aboriginal languages,Afrikaans,10260,4785,85,23415
Non-Official & Non-Aboriginal languages,"Afro-Asiatic languages, n.i.e.",1150,44
Non-Official & Non-Aboriginal languages,Akan (Twi),13460,5985,25,22150
Non-Official & Non-Aboriginal languages,Albanian,26895,13135,345,31930
Aboriginal languages,"Algonquian languages, n.i.e.",45,10,0,120
Aboriginal languages,Algonquin,1260,370,40,2480
Non-Official & Non-Aboriginal languages,American Sign Language,2685,3020,1145,21
Non-Official & Non-Aboriginal languages,Amharic,22465,12785,200,33670
```
And here is a review of how we can use `read_csv` to load it into R. First we
load the `tidyverse` package to gain access to useful
functions for reading the data.
```
library(tidyverse)
```
Next we use `read_csv` to load the data into R, and in that call we specify the
relative path to the file. Note that it is normal and expected that a message is
printed out after using the `read_csv` and related functions. This message lets you know the data types
of each of the columns that R inferred while reading the data into R. In the
future when we use this and related functions to load data in this book, we will
silence these messages to help with the readability of the book.
```
canlang_data <- read_csv("data/can_lang.csv")
```
```
## Rows: 214 Columns: 6
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## chr (2): category, language
## dbl (4): mother_tongue, most_at_home, most_at_work, lang_known
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
```
Finally, to view the first 10 rows of the data frame,
we must call it:
```
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
### 2\.4\.2 Skipping rows when reading in data
Oftentimes, information about how data was collected, or other relevant
information, is included at the top of the data file. This information is
usually written in sentence and paragraph form, with no delimiter because it is
not organized into columns. An example of this is shown below. This information
gives the data scientist useful context and information about the data,
however, it is not well formatted or intended to be read into a data frame cell
along with the tabular data that follows later in the file.
```
Data source: https://ttimbers.github.io/canlang/
Data originally published in: Statistics Canada Census of Population 2016.
Reproduced and distributed on an as-is basis with their permission.
category,language,mother_tongue,most_at_home,most_at_work,lang_known
Aboriginal languages,"Aboriginal languages, n.o.s.",590,235,30,665
Non-Official & Non-Aboriginal languages,Afrikaans,10260,4785,85,23415
Non-Official & Non-Aboriginal languages,"Afro-Asiatic languages, n.i.e.",1150,44
Non-Official & Non-Aboriginal languages,Akan (Twi),13460,5985,25,22150
Non-Official & Non-Aboriginal languages,Albanian,26895,13135,345,31930
Aboriginal languages,"Algonquian languages, n.i.e.",45,10,0,120
Aboriginal languages,Algonquin,1260,370,40,2480
Non-Official & Non-Aboriginal languages,American Sign Language,2685,3020,1145,21
Non-Official & Non-Aboriginal languages,Amharic,22465,12785,200,33670
```
With this extra information being present at the top of the file, using
`read_csv` as we did previously does not allow us to correctly load the data
into R. In the case of this file we end up only reading in one column of the
data set. In contrast to the normal and expected messages above, this time R
prints out a warning for us indicating that there might be a problem with how
our data is being read in.
```
canlang_data <- read_csv("data/can_lang_meta-data.csv")
```
```
## Warning: One or more parsing issues, call `problems()` on your data frame for details,
## e.g.:
## dat <- vroom(...)
## problems(dat)
```
```
canlang_data
```
```
## # A tibble: 217 × 1
## `Data source: https://ttimbers.github.io/canlang/`
## <chr>
## 1 "Data originally published in: Statistics Canada Census of Population 2016."
## 2 "Reproduced and distributed on an as-is basis with their permission."
## 3 "category,language,mother_tongue,most_at_home,most_at_work,lang_known"
## 4 "Aboriginal languages,\"Aboriginal languages, n.o.s.\",590,235,30,665"
## 5 "Non-Official & Non-Aboriginal languages,Afrikaans,10260,4785,85,23415"
## 6 "Non-Official & Non-Aboriginal languages,\"Afro-Asiatic languages, n.i.e.\",…
## 7 "Non-Official & Non-Aboriginal languages,Akan (Twi),13460,5985,25,22150"
## 8 "Non-Official & Non-Aboriginal languages,Albanian,26895,13135,345,31930"
## 9 "Aboriginal languages,\"Algonquian languages, n.i.e.\",45,10,0,120"
## 10 "Aboriginal languages,Algonquin,1260,370,40,2480"
## # ℹ 207 more rows
```
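As the warning suggests, we can inspect exactly which rows and columns caused trouble by calling the `problems` function on the data frame returned by `read_csv`. A quick optional check might look like the sketch below; the exact contents of the output will depend on your file and your version of the `readr` package.
```
problems(canlang_data)
```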
To successfully read data like this into R, the `skip`
argument can be useful to tell R
how many lines to skip before
it should start reading in the data. In the example above, we would set this
value to 3\.
```
canlang_data <- read_csv("data/can_lang_meta-data.csv",
skip = 3)
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
How did we know to skip three lines? We looked at the data! The first three lines
of the data had information we didn’t need to import:
```
Data source: https://ttimbers.github.io/canlang/
Data originally published in: Statistics Canada Census of Population 2016.
Reproduced and distributed on an as-is basis with their permission.
```
The column names began at line 4, so we skipped the first three lines.
### 2\.4\.3 `read_tsv` to read in tab\-separated values files
Another common way data is stored is with tabs as the delimiter. Notice the
data file, `can_lang.tsv`, has tabs in between the columns instead of
commas.
```
category language mother_tongue most_at_home most_at_work lang_kno
Aboriginal languages Aboriginal languages, n.o.s. 590 235 30 665
Non-Official & Non-Aboriginal languages Afrikaans 10260 4785 85 23415
Non-Official & Non-Aboriginal languages Afro-Asiatic languages, n.i.e. 1150
Non-Official & Non-Aboriginal languages Akan (Twi) 13460 5985 25 22150
Non-Official & Non-Aboriginal languages Albanian 26895 13135 345 31930
Aboriginal languages Algonquian languages, n.i.e. 45 10 0 120
Aboriginal languages Algonquin 1260 370 40 2480
Non-Official & Non-Aboriginal languages American Sign Language 2685 3020
Non-Official & Non-Aboriginal languages Amharic 22465 12785 200 33670
```
We can use the `read_tsv` function
to read in `.tsv` (**t**ab **s**eparated **v**alues) files.
```
canlang_data <- read_tsv("data/can_lang.tsv")
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
If you compare the data frame here to the data frame we obtained in Section
[2\.4\.1](reading.html#readcsv) using `read_csv`, you’ll notice that they look identical:
they have the same number of columns and rows, the same column names, and the same entries! So
even though we needed to use a different
function depending on the file format, our resulting data frame
(`canlang_data`) in both cases was the same.
### 2\.4\.4 `read_delim` as a more flexible method to get tabular data into R
The `read_csv` and `read_tsv` functions are actually just special cases of the more general
`read_delim` function. We can use
`read_delim` to import both comma and tab\-separated values files, and more; we just
have to specify the delimiter.
For example, the `can_lang_no_names.tsv` file contains a different version of
this same data set with no column names and uses tabs as the delimiter
instead of commas.
Here is how the file would look in a plain text editor:
```
Aboriginal languages Aboriginal languages, n.o.s. 590 235 30 665
Non-Official & Non-Aboriginal languages Afrikaans 10260 4785 85 23415
Non-Official & Non-Aboriginal languages Afro-Asiatic languages, n.i.e. 1150
Non-Official & Non-Aboriginal languages Akan (Twi) 13460 5985 25 22150
Non-Official & Non-Aboriginal languages Albanian 26895 13135 345 31930
Aboriginal languages Algonquian languages, n.i.e. 45 10 0 120
Aboriginal languages Algonquin 1260 370 40 2480
Non-Official & Non-Aboriginal languages American Sign Language 2685 3020
Non-Official & Non-Aboriginal languages Amharic 22465 12785 200 33670
Non-Official & Non-Aboriginal languages Arabic 419890 223535 5585 629055
```
To read this into R using the `read_delim` function, we specify the path
to the file as the first argument, provide
the tab character `"\t"` as the `delim` argument,
and set the `col_names` argument to `FALSE` to denote that there are no column names
provided in the data. Note that the `read_csv`, `read_tsv`, and `read_delim` functions
all have a `col_names` argument with
the default value `TRUE`.
> **Note:** `\t` is an example of an *escaped character*,
> which always starts with a backslash (`\`).
> Escaped characters are used to represent non\-printing characters
> (like the tab) or those with special meanings (such as quotation marks).
```
canlang_data <- read_delim("data/can_lang_no_names.tsv",
delim = "\t",
col_names = FALSE)
canlang_data
```
```
## # A tibble: 214 × 6
## X1 X2 X3 X4 X5 X6
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal languages Aborigina… 590 235 30 665
## 2 Non-Official & Non-Aboriginal languages Afrikaans 10260 4785 85 23415
## 3 Non-Official & Non-Aboriginal languages Afro-Asia… 1150 445 10 2775
## 4 Non-Official & Non-Aboriginal languages Akan (Twi) 13460 5985 25 22150
## 5 Non-Official & Non-Aboriginal languages Albanian 26895 13135 345 31930
## 6 Aboriginal languages Algonquia… 45 10 0 120
## 7 Aboriginal languages Algonquin 1260 370 40 2480
## 8 Non-Official & Non-Aboriginal languages American … 2685 3020 1145 21930
## 9 Non-Official & Non-Aboriginal languages Amharic 22465 12785 200 33670
## 10 Non-Official & Non-Aboriginal languages Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
Data frames in R need to have column names. Thus if you read in data
without column names, R will assign names automatically. In this example,
R assigns the column names `X1, X2, X3, X4, X5, X6`.
It is best to rename your columns manually in this scenario. The current
column names (`X1, X2`, etc.) are not very descriptive and will make your analysis confusing.
To rename your columns, you can use the `rename` function
from [the `dplyr` R package](https://dplyr.tidyverse.org/) ([Wickham, François, et al. 2021](#ref-dplyr))
(one of the packages
loaded with `tidyverse`, so we don’t need to load it separately). The first
argument is the data set, and in the subsequent arguments you
write `new_name = old_name` for the selected variables to
rename. We rename the `X1, X2, ..., X6`
columns in the `canlang_data` data frame to more descriptive names below.
```
canlang_data <- rename(canlang_data,
category = X1,
language = X2,
mother_tongue = X3,
most_at_home = X4,
most_at_work = X5,
lang_known = X6)
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
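As an aside, instead of renaming the columns after reading the file in, the `col_names` argument of the `read_*` functions can also be given a character vector of names to use. A minimal sketch of this alternative, which should produce the same data frame as above, is:
```
canlang_data <- read_delim("data/can_lang_no_names.tsv",
                           delim = "\t",
                           col_names = c("category", "language", "mother_tongue",
                                         "most_at_home", "most_at_work", "lang_known"))
```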
### 2\.4\.5 Reading tabular data directly from a URL
We can also use `read_csv`, `read_tsv`, or `read_delim` (and related functions)
to read in data directly from a **U**niform **R**esource **L**ocator (URL) that
contains tabular data. Here, we provide the URL of a remote file to
`read_*`, instead of a path to a local file on our
computer. We need to surround the URL with quotes similar to when we specify a
path on our local computer. All other arguments that we use are the same as
when using these functions with a local file on our computer.
```
url <- "https://raw.githubusercontent.com/UBC-DSCI/data/main/can_lang.csv"
canlang_data <- read_csv(url)
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
### 2\.4\.6 Downloading data from a URL
Occasionally the data available at a URL is not formatted nicely enough to use
`read_csv`, `read_tsv`, `read_delim`, or other related functions to read the data
directly into R. In situations where it is necessary to download a file
to our local computer prior to working with it in R, we can use the `download.file`
function. The first argument is the URL, and the second is a path where we would
like to store the downloaded file.
```
download.file(url, "data/can_lang.csv")
canlang_data <- read_csv("data/can_lang.csv")
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
### 2\.4\.7 Previewing a data file before reading it into R
In many of the examples above, we gave you previews of the data file before we read
it into R. Previewing data is essential to see whether or not there are column
names, what the delimiters are, and if there are lines you need to skip.
You should do this yourself when trying to read in data files: open the file in
whichever text editor you prefer to inspect its contents prior to reading it into R.
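If you would rather stay within R to do this, one option is the built\-in `readLines` function, which reads a file as plain text lines rather than as a data frame. Here is a minimal sketch that previews the first few lines of the metadata file from earlier:
```
readLines("data/can_lang_meta-data.csv", n = 5)
```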
2\.5 Reading tabular data from a Microsoft Excel file
-----------------------------------------------------
There are many other ways to store tabular data sets beyond plain text files,
and similarly, many ways to load those data sets into R. For example, it is
very common to encounter, and need to load into R, data stored as a Microsoft
Excel spreadsheet (with the file name
extension `.xlsx`). To be able to do this, a key thing to know is that even
though `.csv` and `.xlsx` files look almost identical when loaded into Excel,
the data themselves are stored completely differently. While `.csv` files are
plain text files, where the characters you see when you open the file in a text
editor are exactly the data they represent, this is not the case for `.xlsx`
files. Take a look at a snippet of what a `.xlsx` file would look like in a text editor:
```
,?'O
_rels/.rels???J1??>E?{7?
<?V????w8?'J???'QrJ???Tf?d??d?o?wZ'???@>?4'?|??hlIo??F
t 8f??3wn
????t??u"/
%~Ed2??<?w??
?Pd(??J-?E???7?'t(?-GZ?????y???c~N?g[^_r?4
yG?O
?K??G?
]TUEe??O??c[???????6q??s??d?m???\???H?^????3} ?rZY? ?:L60?^?????XTP+?|?
X?a??4VT?,D?Jq
```
This type of file representation allows Excel files to store additional things
that you cannot store in a `.csv` file, such as fonts, text formatting,
graphics, multiple sheets and more. And despite looking odd in a plain text
editor, we can read Excel spreadsheets into R using the `readxl` package
developed specifically for this
purpose.
```
library(readxl)
canlang_data <- read_excel("data/can_lang.xlsx")
canlang_data
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
If the `.xlsx` file has multiple sheets, you have to use the `sheet` argument
to specify the sheet number or name. You can also specify cell ranges using the
`range` argument. This functionality is useful when a single sheet contains
multiple tables (a sad thing that happens to many Excel spreadsheets since this
makes reading in data more difficult).
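For example, if the spreadsheet had a sheet named `"languages"` with the table stored in cells A1 through F215, a call along the lines of the sketch below could be used. Note that this sheet name and cell range are hypothetical; you would replace them with the ones in your own file.
```
canlang_data <- read_excel("data/can_lang.xlsx",
                           sheet = "languages",
                           range = "A1:F215")
```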
As with plain text files, you should always explore the data file before
importing it into R. Exploring the data beforehand helps you decide which
arguments you need to load the data into R successfully. If you do not have
the Excel program on your computer, you can use other programs to preview the
file. Examples include Google Sheets and LibreOffice.
In Table [2\.1](reading.html#tab:read-table) we summarize the `read_*` functions we covered
in this chapter. We also include the `read_csv2` function for data separated by
semicolons `;`, which you may run into with data sets where the decimal is
represented by a comma instead of a period (as with some data sets from
European countries).
Table 2\.1: Summary of `read_*` functions
| Data File Type | R Function | R Package |
| --- | --- | --- |
| Comma (`,`) separated files | `read_csv` | `readr` |
| Tab (`\t`) separated files | `read_tsv` | `readr` |
| Semicolon (`;`) separated files | `read_csv2` | `readr` |
| Various formats (`.csv`, `.tsv`) | `read_delim` | `readr` |
| Excel files (`.xlsx`) | `read_excel` | `readxl` |
> **Note:** `readr` is a part of the `tidyverse` package so we did not need to load
> this package separately since we loaded `tidyverse`.
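For instance, to use `read_csv2` from Table [2\.1](reading.html#tab:read-table) on a hypothetical semicolon\-separated version of the Canadian languages data (say, a file named `data/can_lang_eu.csv`), a sketch of the call would simply be:
```
canlang_data <- read_csv2("data/can_lang_eu.csv")
```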
2\.6 Reading data from a database
---------------------------------
Another very common form of data storage is the relational database. Databases
are great when you have large data sets or multiple users
working on a project. There are many relational database management systems,
such as SQLite, MySQL, PostgreSQL, Oracle,
and many more. These
different relational database management systems each have their own advantages
and limitations. Almost all employ SQL (*structured query language*) to obtain
data from the database. But you don’t need to know SQL to analyze data from
a database; several packages have been written that allow you to connect to
relational databases and use the R programming language
to obtain data. In this book, we will give examples of how to do this
using R with SQLite and PostgreSQL databases.
### 2\.6\.1 Reading data from a SQLite database
SQLite is probably the simplest relational database system
that one can use in combination with R. SQLite databases are self\-contained, and are
usually stored and accessed locally on one computer from
a file with a `.db` extension (or sometimes an `.sqlite` extension).
Similar to Excel files, these are not plain text
files and cannot be read in a plain text editor.
The first thing you need to do to read data into R from a database is to
connect to the database. We do that using the `dbConnect` function from the
`DBI` (database interface) package. This does not read
in the data, but simply tells R where the database is and opens up a
communication channel that R can use to send SQL commands to the database.
```
library(DBI)
canlang_conn <- dbConnect(RSQLite::SQLite(), "data/can_lang.db")
```
Often relational databases have many tables; thus, in order to retrieve
data from a database, you need to know the name of the table
in which the data is stored. You can get the names of
all the tables in the database using the `dbListTables`
function:
```
tables <- dbListTables(canlang_conn)
tables
```
```
## [1] "lang"
```
The `dbListTables` function returned only one name, which tells us
that there is only one table in this database. To reference a table in the
database (so that we can perform operations like selecting columns and filtering rows), we
use the `tbl` function from the `dbplyr` package. The object returned
by the `tbl` function allows us to work with data
stored in databases as if they were just regular data frames; but secretly, behind
the scenes, `dbplyr` is turning your function calls (e.g., `select` and `filter`)
into SQL queries!
```
library(dbplyr)
lang_db <- tbl(canlang_conn, "lang")
lang_db
```
```
## # Source: table<lang> [?? x 6]
## # Database: sqlite 3.41.2 [/home/rstudio/introduction-to-datascience/data/can_lang.db]
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ more rows
```
Although it looks like we just got a data frame from the database, we didn’t!
It’s a *reference*; the data is still stored only in the SQLite database. The
`dbplyr` package works this way because databases are often more efficient at selecting, filtering
and joining large data sets than R. And typically the database will not even
be stored on your computer, but rather a more powerful machine somewhere on the
web. So R is lazy and waits to bring this data into memory until you explicitly
tell it to using the `collect` function.
Figure [2\.2](reading.html#fig:01-ref-vs-tibble) highlights the difference
between a `tibble` object in R and the output we just created. Notice in the table
on the right, the first two lines of the output indicate the source is SQL. The
last line doesn’t show how many rows there are (R is trying to avoid performing
expensive query operations), whereas the output for the `tibble` object does.
Figure 2\.2: Comparison of a reference to data in a database and a tibble in R.
We can look at the SQL commands that are sent to the database when we write
`tbl(canlang_conn, "lang")` in R with the `show_query` function from the
`dbplyr` package.
```
show_query(tbl(canlang_conn, "lang"))
```
```
## <SQL>
## SELECT *
## FROM `lang`
```
The output above shows the SQL code that is sent to the database. When we
write `tbl(canlang_conn, "lang")` in R, in the background, the function is
translating the R code into SQL, sending that SQL to the database, and then translating the
response for us. So `dbplyr` does all the hard work of translating from R to SQL and back for us;
we can just stick with R!
With our `lang_db` table reference for the 2016 Canadian Census data in hand, we
can mostly continue onward as if it were a regular data frame. For example, let’s do the same exercise
from Chapter [1](intro.html#intro): we will obtain only those rows corresponding to Aboriginal languages, and keep only
the `language` and `mother_tongue` columns.
We can use the `filter` function to obtain only certain rows. Below we filter the data to include only Aboriginal languages.
```
aboriginal_lang_db <- filter(lang_db, category == "Aboriginal languages")
aboriginal_lang_db
```
```
## # Source: SQL [?? x 6]
## # Database: sqlite 3.41.2 [/home/rstudio/introduction-to-datascience/data/can_lang.db]
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Aboriginal langu… Algonqu… 45 10 0 120
## 3 Aboriginal langu… Algonqu… 1260 370 40 2480
## 4 Aboriginal langu… Athabas… 50 10 0 85
## 5 Aboriginal langu… Atikame… 6150 5465 1100 6645
## 6 Aboriginal langu… Babine … 110 20 10 210
## 7 Aboriginal langu… Beaver 190 50 0 340
## 8 Aboriginal langu… Blackfo… 2815 1110 85 5645
## 9 Aboriginal langu… Carrier 1025 250 15 2100
## 10 Aboriginal langu… Cayuga 45 10 10 125
## # ℹ more rows
```
Above you can again see the hints that this data is not actually stored in R yet:
the source is `SQL [?? x 6]` and the output says `... more rows` at the end
(both indicating that R does not know how many rows there are in total!),
and a database type `sqlite` is listed.
We didn’t use the `collect` function because we are not ready to bring the data into R yet.
We can still use the database to do some work to obtain *only* the small amount of data we want to work with locally
in R. Let’s add the second part of our database query: selecting only the `language` and `mother_tongue` columns
using the `select` function.
```
aboriginal_lang_selected_db <- select(aboriginal_lang_db, language, mother_tongue)
aboriginal_lang_selected_db
```
```
## # Source: SQL [?? x 2]
## # Database: sqlite 3.41.2 [/home/rstudio/introduction-to-datascience/data/can_lang.db]
## language mother_tongue
## <chr> <dbl>
## 1 Aboriginal languages, n.o.s. 590
## 2 Algonquian languages, n.i.e. 45
## 3 Algonquin 1260
## 4 Athabaskan languages, n.i.e. 50
## 5 Atikamekw 6150
## 6 Babine (Wetsuwet'en) 110
## 7 Beaver 190
## 8 Blackfoot 2815
## 9 Carrier 1025
## 10 Cayuga 45
## # ℹ more rows
```
Now you can see that the database will return only the two columns we asked for with the `select` function.
In order to actually retrieve this data in R as a data frame,
we use the `collect` function.
Below you will see that after running `collect`, R knows that the retrieved
data has 67 rows, and there is no database listed any more.
```
aboriginal_lang_data <- collect(aboriginal_lang_selected_db)
aboriginal_lang_data
```
```
## # A tibble: 67 × 2
## language mother_tongue
## <chr> <dbl>
## 1 Aboriginal languages, n.o.s. 590
## 2 Algonquian languages, n.i.e. 45
## 3 Algonquin 1260
## 4 Athabaskan languages, n.i.e. 50
## 5 Atikamekw 6150
## 6 Babine (Wetsuwet'en) 110
## 7 Beaver 190
## 8 Blackfoot 2815
## 9 Carrier 1025
## 10 Cayuga 45
## # ℹ 57 more rows
```
Aside from knowing the number of rows, the data looks pretty similar in both
outputs shown above. And `dbplyr` provides many more functions (not just `filter`)
that you can use to directly feed the database reference (`lang_db`) into
downstream analysis functions (e.g., `ggplot2` for data visualization).
But `dbplyr` does not provide *every* function that we need for analysis;
we do eventually need to call `collect`.
For example, look what happens when we try to use `nrow` to count rows
in a data frame:
```
nrow(aboriginal_lang_selected_db)
```
```
## [1] NA
```
or `tail` to preview the last six rows of a data frame:
```
tail(aboriginal_lang_selected_db)
```
```
## Error: tail() is not supported by sql sources
```
Additionally, some operations will not work to extract columns or single values
from the reference given by the `tbl` function. Thus, once you have finished
your data wrangling of the `tbl` database reference object, it is advisable to
bring it into R as a data frame using `collect`.
But be very careful using `collect`: databases are often *very* big,
and reading an entire table into R might take a long time to run or even possibly
crash your machine. So make sure you use `filter` and `select` on the database table
to reduce the data to a reasonable size before using `collect` to read it into R!
### 2\.6\.2 Reading data from a PostgreSQL database
PostgreSQL (also called Postgres) is a very popular
and open\-source option for relational database software.
Unlike SQLite,
PostgreSQL uses a client–server database engine, as it was designed to be used
and accessed on a network. This means that you have to provide more information
to R when connecting to Postgres databases. The additional information that you
need to include when you call the `dbConnect` function is listed below:
* `dbname`: the name of the database (a single PostgreSQL instance can host more than one database)
* `host`: the URL pointing to where the database is located
* `port`: the communication endpoint between R and the PostgreSQL database (usually `5432`)
* `user`: the username for accessing the database
* `password`: the password for accessing the database
Additionally, we must use the `RPostgres` package instead of `RSQLite` in the
`dbConnect` function call. Below we demonstrate how to connect to a version of
the `can_mov_db` database, which contains information about Canadian movies.
Note that the `host` (`fakeserver.stat.ubc.ca`), `user` (`user0001`), and
`password` (`abc123`) below are *not real*; you will not actually
be able to connect to a database using this information.
```
library(RPostgres)
canmov_conn <- dbConnect(RPostgres::Postgres(), dbname = "can_mov_db",
host = "fakeserver.stat.ubc.ca", port = 5432,
user = "user0001", password = "abc123")
```
After opening the connection, everything looks and behaves almost identically
to when we were using an SQLite database in R. For example, we can again use
`dbListTables` to find out what tables are in the `can_mov_db` database:
```
dbListTables(canmov_conn)
```
```
[1] "themes" "medium" "titles" "title_aliases" "forms"
[6] "episodes" "names" "names_occupations" "occupation" "ratings"
```
We see that there are 10 tables in this database. Let’s first look at the
`"ratings"` table to find the lowest rating that exists in the `can_mov_db`
database:
```
ratings_db <- tbl(canmov_conn, "ratings")
ratings_db
```
```
# Source: table<ratings> [?? x 3]
# Database: postgres [user0001@fakeserver.stat.ubc.ca:5432/can_mov_db]
title average_rating num_votes
<chr> <dbl> <int>
1 The Grand Seduction 6.6 150
2 Rhymes for Young Ghouls 6.3 1685
3 Mommy 7.5 1060
4 Incendies 6.1 1101
5 Bon Cop, Bad Cop 7.0 894
6 Goon 5.5 1111
7 Monsieur Lazhar 5.6 610
8 What if 5.3 1401
9 The Barbarian Invations 5.8 99
10 Away from Her 6.9 2311
# … with more rows
```
To find the lowest rating that exists in the database, we first need to
extract the `average_rating` column using `select`:
```
avg_rating_db <- select(ratings_db, average_rating)
avg_rating_db
```
```
# Source: lazy query [?? x 1]
# Database: postgres [user0001@fakeserver.stat.ubc.ca:5432/can_mov_db]
average_rating
<dbl>
1 6.6
2 6.3
3 7.5
4 6.1
5 7.0
6 5.5
7 5.6
8 5.3
9 5.8
10 6.9
# … with more rows
```
Next we use `min` to find the minimum rating in that column:
```
min(avg_rating_db)
```
```
Error in min(avg_rating_db) : invalid 'type' (list) of argument
```
Instead of the minimum, we get an error! This is another example of when we
need to use the `collect` function to bring the data into R for further
computation:
```
avg_rating_data <- collect(avg_rating_db)
min(avg_rating_data)
```
```
[1] 1
```
We see the lowest rating given to a movie is 1, indicating that it must have
been a really bad movie…
### 2\.6\.3 Why should we bother with databases at all?
Opening a database
involved a lot more effort than just opening a `.csv`, `.tsv`, or any of the
other plain text or Excel formats. We had to open a connection to the database,
then use `dbplyr` to translate `tidyverse`\-like
commands (`filter`, `select` etc.) into SQL commands that the database
understands, and then finally `collect` the results. And not
all `tidyverse` commands can currently be translated to work with
databases. For example, we can compute a mean with a database
but can’t easily compute a median. So you might be wondering: why should we use
databases at all?
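Before answering that, here is a minimal sketch of the mean/median limitation just mentioned, assuming the `lang_db` reference from earlier is still available. `mean` translates to SQL's `AVG`, so the first query runs on the database; `median` has no direct SQLite translation, so a call like the commented one would typically produce an error when the query is executed.
```
# runs on the database: `mean` is translated to SQL's AVG
summarize(lang_db, mean_mother_tongue = mean(mother_tongue, na.rm = TRUE))

# typically errors on SQLite: no direct SQL translation for `median`
# summarize(lang_db, median_mother_tongue = median(mother_tongue, na.rm = TRUE))
```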
Databases are beneficial in a large\-scale setting:
* They enable storing large data sets across multiple computers with backups.
* They provide mechanisms for ensuring data integrity and validating input.
* They provide security and data access control.
* They allow multiple users to access data simultaneously
and remotely without conflicts and errors.
For example, there are billions of Google searches conducted daily in 2021 ([Real Time Statistics Project 2021](#ref-googlesearches)).
Can you imagine if Google stored all of the data
from those searches in a single `.csv` file!? Chaos would ensue!
2\.7 Writing data from R to a `.csv` file
-----------------------------------------
At the middle and end of a data analysis, we often want to write a data frame
that has changed (either through filtering, selecting, mutating or summarizing)
to a file to share it with others or use it for another step in the analysis.
The most straightforward way to do this is to use the `write_csv` function
from the `tidyverse` package. The default
arguments for this function are to use a comma (`,`) as the delimiter and include
column names. Below we demonstrate creating a new version of the Canadian
languages data set without the official languages category according to the
Canadian 2016 Census, and then writing this to a `.csv` file:
```
no_official_lang_data <- filter(canlang_data, category != "Official languages")
write_csv(no_official_lang_data, "data/no_official_languages.csv")
```
2\.8 Obtaining data from the web
--------------------------------
> **Note:** This section is not required reading for the remainder of the textbook. It
> is included for those readers interested in learning a little bit more about
> how to obtain different types of data from the web.
Data doesn’t just magically appear on your computer; you need to get it from
somewhere. Earlier in the chapter we showed you how to access data stored in a
plain text, spreadsheet\-like format (e.g., comma\- or tab\-separated) from a web
URL using one of the `read_*` functions from the `tidyverse`. But as time goes
on, it is increasingly uncommon to find data (especially large amounts of data)
in this format available for download from a URL. Instead, websites now often
offer something known as an **a**pplication **p**rogramming **i**nterface
(API), which
provides a programmatic way to ask for subsets of a data set. This allows the
website owner to control *who* has access to the data, *what portion* of the
data they have access to, and *how much* data they can access. Typically, the
website owner will give you a *token* or *key* (a secret string of characters somewhat
like a password) that you have to provide when accessing the API.
Another interesting thought: websites themselves *are* data! When you type a
URL into your browser window, your browser asks the *web server* (another
computer on the internet whose job it is to respond to requests for the
website) to give it the website’s data, and then your browser translates that
data into something you can see. If the website shows you some information that
you’re interested in, you could *create* a data set for yourself by copying and
pasting that information into a file. This process of taking information
directly from what a website displays is called
*web scraping* (or sometimes *screen scraping*). Now, of course, copying and pasting
information manually is a painstaking and error\-prone process, especially when
there is a lot of information to gather. So instead of asking your browser to
translate the information that the web server provides into something you can
see, you can collect that data programmatically—in the form of
**h**yper**t**ext **m**arkup **l**anguage
(HTML)
and **c**ascading **s**tyle **s**heet (CSS) code—and process it
to extract useful information. HTML provides the
basic structure of a site and tells the webpage how to display the content
(e.g., titles, paragraphs, bullet lists etc.), whereas CSS helps style the
content and tells the webpage how the HTML elements should
be presented (e.g., colors, layouts, fonts etc.).
This subsection will show you the basics of both web scraping
with the [`rvest` R package](https://rvest.tidyverse.org/) ([Wickham 2021a](#ref-rvest))
and accessing the NASA “Astronomy Picture of the Day” API
using the [`httr2` R package](https://httr2.r-lib.org/) ([Wickham 2023](#ref-httr2)).
### 2\.8\.1 Web scraping
#### HTML and CSS selectors
When you enter a URL into your browser, your browser connects to the
web server at that URL and asks for the *source code* for the website.
This is the data that the browser translates
into something you can see; so if we
are going to create our own data by scraping a website, we have to first understand
what that data looks like! For example, let’s say we are interested
in knowing the average rental price (per square foot) of the most recently
available one\-bedroom apartments in Vancouver
on [Craigslist](https://vancouver.craigslist.org). When we visit the Vancouver Craigslist
website and search for one\-bedroom apartments,
we should see something similar to Figure [2\.3](reading.html#fig:craigslist-human).
Figure 2\.3: Craigslist webpage of advertisements for one\-bedroom apartments.
Based on what our browser shows us, it’s pretty easy to find the size and price
for each apartment listed. But we would like to be able to obtain that information
using R, without any manual human effort or copying and pasting. We do this by
examining the *source code* that the web server actually sent our browser to
display for us. We show a snippet of it below; the
entire source
is [included with the code for this book](https://github.com/UBC-DSCI/introduction-to-datascience/blob/main/img/reading/website_source.txt):
```
<span class="result-meta">
<span class="result-price">$800</span>
<span class="housing">
1br -
</span>
<span class="result-hood"> (13768 108th Avenue)</span>
<span class="result-tags">
<span class="maptag" data-pid="6786042973">map</span>
</span>
<span class="banish icon icon-trash" role="button">
<span class="screen-reader-text">hide this posting</span>
</span>
<span class="unbanish icon icon-trash red" role="button"></span>
<a href="#" class="restore-link">
<span class="restore-narrow-text">restore</span>
<span class="restore-wide-text">restore this posting</span>
</a>
<span class="result-price">$2285</span>
</span>
```
Oof…you can tell that the source code for a web page is not really designed
for humans to understand easily. However, if you look through it closely, you
will find that the information we’re interested in is hidden among the muck.
For example, near the top of the snippet
above you can see a line that looks like
```
<span class="result-price">$800</span>
```
That snippet is definitely storing the price of a particular apartment. With some more
investigation, you should be able to find things like the date and time of the
listing, the address of the listing, and more. So this source code most likely
contains all the information we are interested in!
Let’s dig into that line above a bit more. You can see that
that bit of code has an *opening tag* (words between `<` and `>`, like
`<span>`) and a *closing tag* (the same with a slash, like `</span>`). HTML
source code generally stores its data between opening and closing tags like
these. Tags are keywords that tell the web browser how to display or format
the content. Above you can see that the information we want (`$800`) is stored
between an opening and closing tag (`<span>` and `</span>`). In the opening
tag, you can also see a very useful “class” (a special word that is sometimes
included with opening tags): `class="result-price"`. Since we want R to
programmatically sort through all of the source code for the website to find
apartment prices, maybe we can look for all the tags with the `"result-price"`
class, and grab the information between the opening and closing tag. Indeed,
take a look at another line of the source snippet above:
```
<span class="result-price">$2285</span>
```
It’s yet another price for an apartment listing, and the tags surrounding it
have the `"result-price"` class. Wonderful! Now that we know what pattern we
are looking for—a dollar amount between opening and closing tags that have the
`"result-price"` class—we should be able to use code to pull out all of the
matching patterns from the source code to obtain our data. This sort of “pattern”
is known as a *CSS selector* (where CSS stands for **c**ascading **s**tyle **s**heet).
The above was a simple example of “finding the pattern to look for”; many
websites are quite a bit larger and more complex, and so is their website
source code. Fortunately, there are tools available to make this process
easier. For example,
[SelectorGadget](https://selectorgadget.com/) is
an open\-source tool that simplifies generating and finding CSS selectors.
At the end of the chapter in the additional resources section, we include a link to
a short video on how to install and use the SelectorGadget tool to
obtain CSS selectors for use in web scraping.
After installing and enabling the tool, you can click the
website element for which you want an appropriate selector. For
example, if we click the price of an apartment listing, we
find that SelectorGadget shows us the selector `.result-price`
in its toolbar, and highlights all the other apartment
prices that would be obtained using that selector (Figure [2\.4](reading.html#fig:sg1)).
Figure 2\.4: Using the SelectorGadget on a Craigslist webpage to obtain the CSS selector useful for obtaining apartment prices.
If we then click the size of an apartment listing, SelectorGadget shows us
the `span` selector, and highlights many of the lines on the page; this indicates that the
`span` selector is not specific enough to capture only apartment sizes (Figure [2\.5](reading.html#fig:sg3)).
Figure 2\.5: Using the SelectorGadget on a Craigslist webpage to obtain a CSS selector useful for obtaining apartment sizes.
To narrow the selector, we can click one of the highlighted elements that
we *do not* want. For example, we can deselect the “pic/map” links,
resulting in only the data we want highlighted using the `.housing` selector (Figure [2\.6](reading.html#fig:sg2)).
Figure 2\.6: Using the SelectorGadget on a Craigslist webpage to refine the CSS selector to one that is most useful for obtaining apartment sizes.
So to scrape information about the square footage and rental price
of apartment listings, we need to use
the two CSS selectors `.housing` and `.result-price`, respectively.
SelectorGadget returns them to us as a comma\-separated list (here
`.housing , .result-price`), which is exactly the format we need to provide to
R if we are using more than one CSS selector.
**Caution: are you allowed to scrape that website?**
*Before* scraping data from the web, you should always check whether or not
you are *allowed* to scrape it! There are two documents that are important
for this: the `robots.txt` file and the Terms of Service
document. If we take a look at [Craigslist’s Terms of Service document](https://www.craigslist.org/about/terms.of.use),
we find the following text: *“You agree not to copy/collect CL content
via robots, spiders, scripts, scrapers, crawlers, or any automated or manual equivalent (e.g., by hand).”*
So unfortunately, without explicit permission, we are not allowed to scrape the website.
What to do now? Well, we *could* ask the owner of Craigslist for permission to scrape.
However, we are not likely to get a response, and even if we did they would not likely give us permission.
The more realistic answer is that we simply cannot scrape Craigslist. If we still want
to find data about rental prices in Vancouver, we must go elsewhere.
To continue learning how to scrape data from the web, let’s instead
scrape data on the population of Canadian cities from Wikipedia.
We have checked the [Terms of Service document](https://foundation.wikimedia.org/wiki/Terms_of_Use/en),
and it does not mention that web scraping is disallowed.
We will use the SelectorGadget tool to pick elements that we are interested in
(city names and population counts) and deselect others to indicate that we are not
interested in them (province names), as shown in Figure [2\.7](reading.html#fig:sg4).
Figure 2\.7: Using the SelectorGadget on a Wikipedia webpage.
We include a link to a short video tutorial on this process at the end of the chapter
in the additional resources section. SelectorGadget provides in its toolbar
the following list of CSS selectors to use:
```
td:nth-child(8) ,
td:nth-child(4) ,
.largestCities-cell-background+ td a
```
Now that we have the CSS selectors that describe the properties of the elements
that we want to target, we can use them to find certain elements in web pages and extract data.
#### Using `rvest`
We will use the `rvest` R package to scrape data from the Wikipedia page.
We start by loading the `rvest` package:
```
library(rvest)
```
Next, we tell R what page we want to scrape by providing the webpage’s URL in quotations to the function `read_html`:
```
page <- read_html("https://en.wikipedia.org/wiki/Canada")
```
The `read_html` function directly downloads the source code for the page at
the URL you specify, just like your browser would if you navigated to that site. But
instead of displaying the website to you, the `read_html` function just returns
the HTML source code itself, which we have
stored in the `page` variable. Next, we send the page object to the `html_nodes`
function, along with the CSS selectors we obtained from
the SelectorGadget tool. Make sure to surround the selectors with quotation marks; the `html_nodes` function expects that
argument to be a string. We store the result of the `html_nodes` function in the `population_nodes` variable.
Note that below we use the `paste` function with a comma separator (`sep=","`)
to build the full selector string. The `paste` function converts
its arguments to characters and combines them into a single string. We use it here
purely for readability; it avoids
having one very long line of code.
```
selectors <- paste("td:nth-child(8)",
"td:nth-child(4)",
".largestCities-cell-background+ td a", sep = ",")
population_nodes <- html_nodes(page, selectors)
head(population_nodes)
```
```
## {xml_nodeset (6)}
## [1] <a href="/wiki/Greater_Toronto_Area" title="Greater Toronto Area">Toronto ...
## [2] <td style="text-align:right;">6,202,225</td>
## [3] <a href="/wiki/London,_Ontario" title="London, Ontario">London</a>
## [4] <td style="text-align:right;">543,551\n</td>
## [5] <a href="/wiki/Greater_Montreal" title="Greater Montreal">Montreal</a>
## [6] <td style="text-align:right;">4,291,732</td>
```
> **Note:** `head` is a function that is often useful for viewing only a short
> summary of an R object, rather than the whole thing (which may be quite a lot
> to look at). For example, here `head` shows us only the first 6 items in the
> `population_nodes` object. Note that some R objects by default print only a
> small summary. For example, `tibble` data frames only show you the first 10 rows.
> But not *all* R objects do this, and that’s where the `head` function helps
> summarize things for you.
Each of the items in the `population_nodes` list is a *node* from the HTML
document that matches the CSS selectors you specified. A *node* is an HTML tag
pair (e.g., `<td>` and `</td>` which defines the cell of a table) combined with
the content stored between the tags. For our CSS selector `td:nth-child(4)`, an
example node that would be selected would be:
```
<td style="text-align:left;background:#f0f0f0;">
<a href="/wiki/London,_Ontario" title="London, Ontario">London</a>
</td>
```
Next we extract the meaningful data—in other words, we get rid of the
HTML code syntax and tags—from the nodes using the `html_text` function.
In the case of the example node above, the `html_text` function returns `"London"`.
```
population_text <- html_text(population_nodes)
head(population_text)
```
```
## [1] "Toronto" "6,202,225" "London" "543,551\n" "Montreal" "4,291,732"
```
Fantastic! We seem to have extracted the data of interest from the
raw HTML source code. But we are not quite done; the data
is not yet in an optimal format for data analysis. Both the city names and
population are encoded as characters in a single vector, instead of being in a
data frame with one character column for city and one numeric column for
population (like a spreadsheet).
Additionally, the populations contain commas (not useful for programmatically
dealing with numbers), and some even contain a line break character at the end
(`\n`). In Chapter [3](wrangling.html#wrangling), we will learn more about how to *wrangle* data
such as this into a more useful format for data analysis using R.
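As a small preview of that chapter, here is a minimal sketch of what such wrangling *might* look like. It assumes that `population_text` strictly alternates city names and population counts (as in the first six entries shown above), and it uses the tidyverse packages `tibble`, `dplyr`, and `stringr`; treat it as an illustration rather than the approach developed later in the book.
```
# A sketch only: assumes population_text strictly alternates
# city name, population, city name, population, ...
library(tidyverse)

city_pop <- tibble(
  city = population_text[seq(1, length(population_text), by = 2)],
  population = population_text[seq(2, length(population_text), by = 2)]
) |>
  mutate(
    # strip commas and line breaks, then convert to numbers
    population = as.numeric(str_remove_all(population, "[,\n]"))
  )
```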
### 2\.8\.2 Using an API
Rather than posting a data file at a URL for you to download, many websites these days
provide an API that must be accessed through a programming language like R. The benefit of using an API
is that data owners have much more control over the data they provide to users. However, unlike
web scraping, there is no consistent way to access an API across websites. Every website typically
has its own API designed especially for its own use case. Therefore we will just provide one example
of accessing data through an API in this book, with the hope that it gives you enough of a basic
idea that you can learn how to use another API if needed. In particular, in this book we will show you the basics
of how to use the `httr2` package in R to access data from the NASA “Astronomy Picture
of the Day” API (a great source of desktop backgrounds, by the way—take a look at the stunning
picture of the Rho\-Ophiuchi cloud complex ([NASA et al. 2023](#ref-rhoophiuchi)) in Figure [2\.8](reading.html#fig:NASA-API-Rho-Ophiuchi) from July 13, 2023!).
Figure 2\.8: The James Webb Space Telescope’s NIRCam image of the Rho Ophiuchi molecular cloud complex.
First, you will need to visit the [NASA APIs page](https://api.nasa.gov/) and generate an API key (i.e., a password used to identify you when accessing the API).
Note that a valid email address is required to
associate with the key. The signup form looks something like Figure [2\.9](reading.html#fig:NASA-API-signup).
After filling out the basic information, you will receive the token via email.
Make sure to store the key in a safe place, and keep it private.
Figure 2\.9: Generating the API access token for the NASA API
**Caution: think about your API usage carefully!**
When you access an API, you are initiating a transfer of data from a web server
to your computer. Web servers are expensive to run and do not have infinite resources.
If you try to ask for *too much data* at once, you can use up a huge amount of the server’s bandwidth.
If you try to ask for data *too frequently*—e.g., if you
make many requests to the server in quick succession—you can also bog the server down and make
it unable to talk to anyone else. Most servers have mechanisms to revoke your access if you are not
careful, but you should try to prevent issues from happening in the first place by being extra careful
with how you write and run your code. You should also keep in mind that when a website owner
grants you API access, they also usually specify a limit (or *quota*) of how much data you can ask for.
Be careful not to overrun your quota! So *before* we try to use the API, we will first visit
[the NASA website](https://api.nasa.gov/) to see what limits we should abide by when using the API.
These limits are outlined in Figure [2\.10](reading.html#fig:NASA-API-limits).
Figure 2\.10: The NASA website specifies an hourly limit of 1,000 requests.
After checking the NASA website, it seems like we can send at most 1,000 requests per hour.
That should be more than enough for our purposes in this section.
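If you ever write a script that makes many requests in a row, one simple way to stay well within a quota is to pause between requests. The loop below is purely illustrative (it does not contact any server), and the pause length is our own choice rather than a NASA requirement; it just shows how `Sys.sleep` can space out iterations.
```
# Illustrative only: pausing 5 seconds between iterations caps a script at
# 720 requests per hour, comfortably below a 1,000-requests-per-hour quota.
for (day in 1:3) {
  # ... a single API request would go here ...
  Sys.sleep(5)
}
```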
#### Accessing the NASA API
The NASA API is what is known as an *HTTP API*: this is a particularly common
kind of API, where you can obtain data simply by accessing a
particular URL as if it were a regular website. To make a query to the NASA
API, we need to specify three things. First, we specify the URL *endpoint* of
the API, which is simply a URL that helps the remote server understand which
API you are trying to access. NASA offers a variety of APIs, each with its own
endpoint; in the case of the NASA “Astronomy Picture of the Day” API, the URL
endpoint is `https://api.nasa.gov/planetary/apod`. Second, we write `?`, which denotes that a
list of *query parameters* will follow. And finally, we specify a list of
query parameters of the form `parameter=value`, separated by `&` characters. The NASA
“Astronomy Picture of the Day” API accepts the parameters shown in
Figure [2\.11](reading.html#fig:NASA-API-parameters).
Figure 2\.11: The set of parameters that you can specify when querying the NASA “Astronomy Picture of the Day” API, along with syntax, default settings, and a description of each.
So for example, to obtain the image of the day
from July 13, 2023, the API query would have two parameters: `api_key=YOUR_API_KEY`
and `date=2023-07-13`. Remember to replace `YOUR_API_KEY` with the API key you
received from NASA in your email! Putting it all together, the query will look like the following:
```
https://api.nasa.gov/planetary/apod?api_key=YOUR_API_KEY&date=2023-07-13
```
If you try putting this URL into your web browser, you’ll actually find that the server
responds to your request with some text:
```
{"date":"2023-07-13","explanation":"A mere 390 light-years away, Sun-like stars
and future planetary systems are forming in the Rho Ophiuchi molecular cloud
complex, the closest star-forming region to our fair planet. The James Webb
Space Telescope's NIRCam peered into the nearby natal chaos to capture this
infrared image at an inspiring scale. The spectacular cosmic snapshot was
released to celebrate the successful first year of Webb's exploration of the
Universe. The frame spans less than a light-year across the Rho Ophiuchi region
and contains about 50 young stars. Brighter stars clearly sport Webb's
characteristic pattern of diffraction spikes. Huge jets of shocked molecular
hydrogen blasting from newborn stars are red in the image, with the large,
yellowish dusty cavity carved out by the energetic young star near its center.
Near some stars in the stunning image are shadows cast by their protoplanetary
disks.","hdurl":"https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph.png",
"media_type":"image","service_version":"v1","title":"Webb's
Rho Ophiuchi","url":"https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph1024.png"}
```
Neat! There is definitely some data there, but it’s a bit hard to
see what it all is. As it turns out, this is a common format for data called
*JSON* (JavaScript Object Notation).
We won’t encounter this kind of data much in this book,
but for now you can interpret this data as `key : value` pairs separated by
commas. For example, if you look closely, you’ll see that the first entry is
`"date":"2023-07-13"`, which indicates that we indeed successfully received
data corresponding to July 13, 2023\.
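If you are curious how such text maps onto R objects, the `jsonlite` package can parse a JSON string directly into a named list. The snippet below is just an illustration using a made\-up fragment of the response; it assumes the `jsonlite` package is installed.
```
library(jsonlite)
# Parse a small, made-up fragment of JSON into a named R list
fromJSON('{"date":"2023-07-13","media_type":"image"}')
```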
So now our job is to do all of this programmatically in R. We will load
the `httr2` package, and construct the query using the `request` function, which takes a single URL argument;
you will recognize the same query URL that we pasted into the browser earlier.
We will then send the query using the `req_perform` function, and finally
obtain a JSON representation of the response using the `resp_body_json` function.
```
library(httr2)
req <- request("https://api.nasa.gov/planetary/apod?api_key=YOUR_API_KEY&date=2023-07-13")
resp <- req_perform(req)
nasa_data_single <- resp_body_json(resp)
nasa_data_single
```
```
## $date
## [1] "2023-07-13"
##
## $explanation
## [1] "A mere 390 light-years away, Sun-like stars and future planetary systems are forming in the Rho Ophiuchi molecular cloud complex, the closest star-forming region to our fair planet. The James Webb Space Telescope's NIRCam peered into the nearby natal chaos to capture this infrared image at an inspiring scale. The spectacular cosmic snapshot was released to celebrate the successful first year of Webb's exploration of the Universe. The frame spans less than a light-year across the Rho Ophiuchi region and contains about 50 young stars. Brighter stars clearly sport Webb's characteristic pattern of diffraction spikes. Huge jets of shocked molecular hydrogen blasting from newborn stars are red in the image, with the large, yellowish dusty cavity carved out by the energetic young star near its center. Near some stars in the stunning image are shadows cast by their protoplanetary disks."
##
## $hdurl
## [1] "https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph.png"
##
## $media_type
## [1] "image"
##
## $service_version
## [1] "v1"
##
## $title
## [1] "Webb's Rho Ophiuchi"
##
## $url
## [1] "https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph1024.png"
```
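As an aside, instead of pasting the API key and date into one long URL, `httr2` also lets you supply the endpoint and the query parameters separately with its `req_url_query` function. The sketch below builds an equivalent request to the one above:
```
# Equivalent to the request above, with the query parameters
# supplied separately rather than pasted into the URL by hand.
req <- request("https://api.nasa.gov/planetary/apod") |>
  req_url_query(api_key = "YOUR_API_KEY", date = "2023-07-13")
resp <- req_perform(req)
nasa_data_single <- resp_body_json(resp)
```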
We can obtain more records at once by using the `start_date` and `end_date` parameters, as
shown in the table of parameters in Figure [2\.11](reading.html#fig:NASA-API-parameters).
Let’s obtain all the records between May 1, 2023, and July 13, 2023, and store the result
in an object called `nasa_data`; now the response
will take the form of an R *list* (you’ll learn more about these in Chapter [3](wrangling.html#wrangling)).
Each item in the list will correspond to a single day’s record (just like the `nasa_data_single` object),
and there will be 74 items total, one for each day between the start and end dates:
```
req <- request("https://api.nasa.gov/planetary/apod?api_key=YOUR_API_KEY&start_date=2023-05-01&end_date=2023-07-13")
resp <- req_perform(req)
nasa_data <- resp_body_json(resp)
length(nasa_data)
```
```
## [1] 74
```
For further data processing using the techniques in this book, you’ll need to turn this list of items
into a data frame. Here we will extract the `date`, `title`, `copyright`, and `url` variables
from the JSON data, and construct a data frame using the extracted information.
> **Note:** Understanding this code is not required for the remainder of the textbook. It is included for those
> readers who would like to parse JSON data into a data frame in their own data analyses.
```
# convert each day's record to a one-row data frame, stack them, and keep
# only the columns of interest (tibble, bind_rows, and select are from the tidyverse)
nasa_df_all <- tibble(bind_rows(lapply(nasa_data, as.data.frame.list)))
nasa_df <- select(nasa_df_all, date, title, copyright, url)
nasa_df
```
```
## # A tibble: 74 × 4
## date title copyright url
## <chr> <chr> <chr> <chr>
## 1 2023-05-01 Carina Nebula North "\nCarlos Tayl… http…
## 2 2023-05-02 Flat Rock Hills on Mars "\nNASA, \nJPL… http…
## 3 2023-05-03 Centaurus A: A Peculiar Island of Stars "\nMarco Loren… http…
## 4 2023-05-04 The Galaxy, the Jet, and a Famous Black Hole <NA> http…
## 5 2023-05-05 Shackleton from ShadowCam <NA> http…
## 6 2023-05-06 Twilight in a Flower "Dario Giannob… http…
## 7 2023-05-07 The Helix Nebula from CFHT <NA> http…
## 8 2023-05-08 The Spanish Dancer Spiral Galaxy <NA> http…
## 9 2023-05-09 Shadows of Earth "\nMarcella Gi… http…
## 10 2023-05-10 Milky Way over Egyptian Desert "\nAmr Abdulwa… http…
## # ℹ 64 more rows
```
Success—we have created a small data set using the NASA
API! This data is also quite different from what we obtained from web scraping;
the extracted information is readily available in a JSON format, as opposed to raw
HTML code (although not *every* API will provide data in such a nice format).
From this point onward, the `nasa_df` data frame is stored on your
machine, and you can play with it to your heart’s content. For example, you can use
`write_csv` to save it to a file and `read_csv` to read it into R again later;
and after reading the next few chapters you will have the skills to
do even more interesting things! If you decide that you want
to ask any of the various NASA APIs for more data
(see [the list of awesome NASA APIs here](https://api.nasa.gov/)
for more examples of what is possible), just be mindful as usual about how much
data you are requesting and how frequently you are making requests.
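As a concrete example of the saving and re\-loading mentioned above, here is a minimal sketch using `write_csv` and `read_csv` from the `readr` package (part of the tidyverse); the file name is just a placeholder of our choosing.
```
# Save the data frame to disk, then read it back in later.
write_csv(nasa_df, "nasa_apod.csv")
nasa_df <- read_csv("nasa_apod.csv")
```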
### 2\.8\.1 Web scraping
#### HTML and CSS selectors
When you enter a URL into your browser, your browser connects to the
web server at that URL and asks for the *source code* for the website.
This is the data that the browser translates
into something you can see; so if we
are going to create our own data by scraping a website, we have to first understand
what that data looks like! For example, let’s say we are interested
in knowing the average rental price (per square foot) of the most recently
available one\-bedroom apartments in Vancouver
on [Craiglist](https://vancouver.craigslist.org). When we visit the Vancouver Craigslist
website and search for one\-bedroom apartments,
we should see something similar to Figure [2\.3](reading.html#fig:craigslist-human).
Figure 2\.3: Craigslist webpage of advertisements for one\-bedroom apartments.
Based on what our browser shows us, it’s pretty easy to find the size and price
for each apartment listed. But we would like to be able to obtain that information
using R, without any manual human effort or copying and pasting. We do this by
examining the *source code* that the web server actually sent our browser to
display for us. We show a snippet of it below; the
entire source
is [included with the code for this book](https://github.com/UBC-DSCI/introduction-to-datascience/blob/main/img/reading/website_source.txt):
```
<span class="result-meta">
<span class="result-price">$800</span>
<span class="housing">
1br -
</span>
<span class="result-hood"> (13768 108th Avenue)</span>
<span class="result-tags">
<span class="maptag" data-pid="6786042973">map</span>
</span>
<span class="banish icon icon-trash" role="button">
<span class="screen-reader-text">hide this posting</span>
</span>
<span class="unbanish icon icon-trash red" role="button"></span>
<a href="#" class="restore-link">
<span class="restore-narrow-text">restore</span>
<span class="restore-wide-text">restore this posting</span>
</a>
<span class="result-price">$2285</span>
</span>
```
Oof…you can tell that the source code for a web page is not really designed
for humans to understand easily. However, if you look through it closely, you
will find that the information we’re interested in is hidden among the muck.
For example, near the top of the snippet
above you can see a line that looks like
```
<span class="result-price">$800</span>
```
That snippet is definitely storing the price of a particular apartment. With some more
investigation, you should be able to find things like the date and time of the
listing, the address of the listing, and more. So this source code most likely
contains all the information we are interested in!
Let’s dig into that line above a bit more. You can see that
that bit of code has an *opening tag* (words between `<` and `>`, like
`<span>`) and a *closing tag* (the same with a slash, like `</span>`). HTML
source code generally stores its data between opening and closing tags like
these. Tags are keywords that tell the web browser how to display or format
the content. Above you can see that the information we want (`$800`) is stored
between an opening and closing tag (`<span>` and `</span>`). In the opening
tag, you can also see a very useful “class” (a special word that is sometimes
included with opening tags): `class="result-price"`. Since we want R to
programmatically sort through all of the source code for the website to find
apartment prices, maybe we can look for all the tags with the `"result-price"`
class, and grab the information between the opening and closing tag. Indeed,
take a look at another line of the source snippet above:
```
<span class="result-price">$2285</span>
```
It’s yet another price for an apartment listing, and the tags surrounding it
have the `"result-price"` class. Wonderful! Now that we know what pattern we
are looking for—a dollar amount between opening and closing tags that have the
`"result-price"` class—we should be able to use code to pull out all of the
matching patterns from the source code to obtain our data. This sort of “pattern”
is known as a *CSS selector* (where CSS stands for **c**ascading **s**tyle **s**heet).
The above was a simple example of “finding the pattern to look for”; many
websites are quite a bit larger and more complex, and so is their website
source code. Fortunately, there are tools available to make this process
easier. For example,
[SelectorGadget](https://selectorgadget.com/) is
an open\-source tool that simplifies identifying the generating
and finding of CSS selectors.
At the end of the chapter in the additional resources section, we include a link to
a short video on how to install and use the SelectorGadget tool to
obtain CSS selectors for use in web scraping.
After installing and enabling the tool, you can click the
website element for which you want an appropriate selector. For
example, if we click the price of an apartment listing, we
find that SelectorGadget shows us the selector `.result-price`
in its toolbar, and highlights all the other apartment
prices that would be obtained using that selector (Figure [2\.4](reading.html#fig:sg1)).
Figure 2\.4: Using the SelectorGadget on a Craigslist webpage to obtain the CCS selector useful for obtaining apartment prices.
If we then click the size of an apartment listing, SelectorGadget shows us
the `span` selector, and highlights many of the lines on the page; this indicates that the
`span` selector is not specific enough to capture only apartment sizes (Figure [2\.5](reading.html#fig:sg3)).
Figure 2\.5: Using the SelectorGadget on a Craigslist webpage to obtain a CCS selector useful for obtaining apartment sizes.
To narrow the selector, we can click one of the highlighted elements that
we *do not* want. For example, we can deselect the “pic/map” links,
resulting in only the data we want highlighted using the `.housing` selector (Figure [2\.6](reading.html#fig:sg2)).
Figure 2\.6: Using the SelectorGadget on a Craigslist webpage to refine the CCS selector to one that is most useful for obtaining apartment sizes.
So to scrape information about the square footage and rental price
of apartment listings, we need to use
the two CSS selectors `.housing` and `.result-price`, respectively.
The selector gadget returns them to us as a comma\-separated list (here
`.housing , .result-price`), which is exactly the format we need to provide to
R if we are using more than one CSS selector.
**Caution: are you allowed to scrape that website?**
*Before* scraping data from the web, you should always check whether or not
you are *allowed* to scrape it! There are two documents that are important
for this: the `robots.txt` file and the Terms of Service
document. If we take a look at [Craigslist’s Terms of Service document](https://www.craigslist.org/about/terms.of.use),
we find the following text: *“You agree not to copy/collect CL content
via robots, spiders, scripts, scrapers, crawlers, or any automated or manual equivalent (e.g., by hand).”*
So unfortunately, without explicit permission, we are not allowed to scrape the website.
What to do now? Well, we *could* ask the owner of Craigslist for permission to scrape.
However, we are not likely to get a response, and even if we did they would not likely give us permission.
The more realistic answer is that we simply cannot scrape Craigslist. If we still want
to find data about rental prices in Vancouver, we must go elsewhere.
To continue learning how to scrape data from the web, let’s instead
scrape data on the population of Canadian cities from Wikipedia.
We have checked the [Terms of Service document](https://foundation.wikimedia.org/wiki/Terms_of_Use/en),
and it does not mention that web scraping is disallowed.
We will use the SelectorGadget tool to pick elements that we are interested in
(city names and population counts) and deselect others to indicate that we are not
interested in them (province names), as shown in Figure [2\.7](reading.html#fig:sg4).
Figure 2\.7: Using the SelectorGadget on a Wikipedia webpage.
We include a link to a short video tutorial on this process at the end of the chapter
in the additional resources section. SelectorGadget provides in its toolbar
the following list of CSS selectors to use:
```
td:nth-child(8) ,
td:nth-child(4) ,
.largestCities-cell-background+ td a
```
Now that we have the CSS selectors that describe the properties of the elements
that we want to target, we can use them to find certain elements in web pages and extract data.
#### Using `rvest`
We will use the `rvest` R package to scrape data from the Wikipedia page.
We start by loading the `rvest` package:
```
library(rvest)
```
Next, we tell R what page we want to scrape by providing the webpage’s URL in quotations to the function `read_html`:
```
page <- read_html("https://en.wikipedia.org/wiki/Canada")
```
The `read_html` function directly downloads the source code for the page at
the URL you specify, just like your browser would if you navigated to that site. But
instead of displaying the website to you, the `read_html` function just returns
the HTML source code itself, which we have
stored in the `page` variable. Next, we send the page object to the `html_nodes`
function, along with the CSS selectors we obtained from
the SelectorGadget tool. Make sure to surround the selectors with quotation marks; the function, `html_nodes`, expects that
argument is a string. We store the result of the `html_nodes` function in the `population_nodes` variable.
Note that below we use the `paste` function with a comma separator (`sep=","`)
to build the list of selectors. The `paste` function converts
elements to characters and combines the values into a list. We use this function to
build the list of selectors to maintain code readability; this avoids
having a very long line of code.
```
selectors <- paste("td:nth-child(8)",
"td:nth-child(4)",
".largestCities-cell-background+ td a", sep = ",")
population_nodes <- html_nodes(page, selectors)
head(population_nodes)
```
```
## {xml_nodeset (6)}
## [1] <a href="/wiki/Greater_Toronto_Area" title="Greater Toronto Area">Toronto ...
## [2] <td style="text-align:right;">6,202,225</td>
## [3] <a href="/wiki/London,_Ontario" title="London, Ontario">London</a>
## [4] <td style="text-align:right;">543,551\n</td>
## [5] <a href="/wiki/Greater_Montreal" title="Greater Montreal">Montreal</a>
## [6] <td style="text-align:right;">4,291,732</td>
```
> **Note:** `head` is a function that is often useful for viewing only a short
> summary of an R object, rather than the whole thing (which may be quite a lot
> to look at). For example, here `head` shows us only the first 6 items in the
> `population_nodes` object. Note that some R objects by default print only a
> small summary. For example, `tibble` data frames only show you the first 10 rows.
> But not *all* R objects do this, and that’s where the `head` function helps
> summarize things for you.
Each of the items in the `population_nodes` list is a *node* from the HTML
document that matches the CSS selectors you specified. A *node* is an HTML tag
pair (e.g., `<td>` and `</td>` which defines the cell of a table) combined with
the content stored between the tags. For our CSS selector `td:nth-child(4)`, an
example node that would be selected would be:
```
<td style="text-align:left;background:#f0f0f0;">
<a href="/wiki/London,_Ontario" title="London, Ontario">London</a>
</td>
```
Next we extract the meaningful data—in other words, we get rid of the
HTML code syntax and tags—from the nodes using the `html_text` function.
In the case of the example node above, `html_text` function returns `"London"`.
```
population_text <- html_text(population_nodes)
head(population_text)
```
```
## [1] "Toronto" "6,202,225" "London" "543,551\n" "Montreal" "4,291,732"
```
Fantastic! We seem to have extracted the data of interest from the
raw HTML source code. But we are not quite done; the data
is not yet in an optimal format for data analysis. Both the city names and
population are encoded as characters in a single vector, instead of being in a
data frame with one character column for city and one numeric column for
population (like a spreadsheet).
Additionally, the populations contain commas (not useful for programmatically
dealing with numbers), and some even contain a line break character at the end
(`\n`). In Chapter [3](wrangling.html#wrangling), we will learn more about how to *wrangle* data
such as this into a more useful format for data analysis using R.
#### HTML and CSS selectors
When you enter a URL into your browser, your browser connects to the
web server at that URL and asks for the *source code* for the website.
This is the data that the browser translates
into something you can see; so if we
are going to create our own data by scraping a website, we have to first understand
what that data looks like! For example, let’s say we are interested
in knowing the average rental price (per square foot) of the most recently
available one\-bedroom apartments in Vancouver
on [Craiglist](https://vancouver.craigslist.org). When we visit the Vancouver Craigslist
website and search for one\-bedroom apartments,
we should see something similar to Figure [2\.3](reading.html#fig:craigslist-human).
Figure 2\.3: Craigslist webpage of advertisements for one\-bedroom apartments.
Based on what our browser shows us, it’s pretty easy to find the size and price
for each apartment listed. But we would like to be able to obtain that information
using R, without any manual human effort or copying and pasting. We do this by
examining the *source code* that the web server actually sent our browser to
display for us. We show a snippet of it below; the
entire source
is [included with the code for this book](https://github.com/UBC-DSCI/introduction-to-datascience/blob/main/img/reading/website_source.txt):
```
<span class="result-meta">
<span class="result-price">$800</span>
<span class="housing">
1br -
</span>
<span class="result-hood"> (13768 108th Avenue)</span>
<span class="result-tags">
<span class="maptag" data-pid="6786042973">map</span>
</span>
<span class="banish icon icon-trash" role="button">
<span class="screen-reader-text">hide this posting</span>
</span>
<span class="unbanish icon icon-trash red" role="button"></span>
<a href="#" class="restore-link">
<span class="restore-narrow-text">restore</span>
<span class="restore-wide-text">restore this posting</span>
</a>
<span class="result-price">$2285</span>
</span>
```
Oof…you can tell that the source code for a web page is not really designed
for humans to understand easily. However, if you look through it closely, you
will find that the information we’re interested in is hidden among the muck.
For example, near the top of the snippet
above you can see a line that looks like
```
<span class="result-price">$800</span>
```
That snippet is definitely storing the price of a particular apartment. With some more
investigation, you should be able to find things like the date and time of the
listing, the address of the listing, and more. So this source code most likely
contains all the information we are interested in!
Let’s dig into that line above a bit more. You can see that
that bit of code has an *opening tag* (words between `<` and `>`, like
`<span>`) and a *closing tag* (the same with a slash, like `</span>`). HTML
source code generally stores its data between opening and closing tags like
these. Tags are keywords that tell the web browser how to display or format
the content. Above you can see that the information we want (`$800`) is stored
between an opening and closing tag (`<span>` and `</span>`). In the opening
tag, you can also see a very useful “class” (a special word that is sometimes
included with opening tags): `class="result-price"`. Since we want R to
programmatically sort through all of the source code for the website to find
apartment prices, maybe we can look for all the tags with the `"result-price"`
class, and grab the information between the opening and closing tag. Indeed,
take a look at another line of the source snippet above:
```
<span class="result-price">$2285</span>
```
It’s yet another price for an apartment listing, and the tags surrounding it
have the `"result-price"` class. Wonderful! Now that we know what pattern we
are looking for—a dollar amount between opening and closing tags that have the
`"result-price"` class—we should be able to use code to pull out all of the
matching patterns from the source code to obtain our data. This sort of “pattern”
is known as a *CSS selector* (where CSS stands for **c**ascading **s**tyle **s**heet).
The above was a simple example of “finding the pattern to look for”; many
websites are quite a bit larger and more complex, and so is their website
source code. Fortunately, there are tools available to make this process
easier. For example,
[SelectorGadget](https://selectorgadget.com/) is
an open\-source tool that simplifies identifying the generating
and finding of CSS selectors.
At the end of the chapter in the additional resources section, we include a link to
a short video on how to install and use the SelectorGadget tool to
obtain CSS selectors for use in web scraping.
After installing and enabling the tool, you can click the
website element for which you want an appropriate selector. For
example, if we click the price of an apartment listing, we
find that SelectorGadget shows us the selector `.result-price`
in its toolbar, and highlights all the other apartment
prices that would be obtained using that selector (Figure [2\.4](reading.html#fig:sg1)).
Figure 2\.4: Using the SelectorGadget on a Craigslist webpage to obtain the CCS selector useful for obtaining apartment prices.
If we then click the size of an apartment listing, SelectorGadget shows us
the `span` selector, and highlights many of the lines on the page; this indicates that the
`span` selector is not specific enough to capture only apartment sizes (Figure [2\.5](reading.html#fig:sg3)).
Figure 2\.5: Using the SelectorGadget on a Craigslist webpage to obtain a CCS selector useful for obtaining apartment sizes.
To narrow the selector, we can click one of the highlighted elements that
we *do not* want. For example, we can deselect the “pic/map” links,
resulting in only the data we want highlighted using the `.housing` selector (Figure [2\.6](reading.html#fig:sg2)).
Figure 2\.6: Using the SelectorGadget on a Craigslist webpage to refine the CCS selector to one that is most useful for obtaining apartment sizes.
So to scrape information about the square footage and rental price
of apartment listings, we need to use
the two CSS selectors `.housing` and `.result-price`, respectively.
The selector gadget returns them to us as a comma\-separated list (here
`.housing , .result-price`), which is exactly the format we need to provide to
R if we are using more than one CSS selector.
**Caution: are you allowed to scrape that website?**
*Before* scraping data from the web, you should always check whether or not
you are *allowed* to scrape it! There are two documents that are important
for this: the `robots.txt` file and the Terms of Service
document. If we take a look at [Craigslist’s Terms of Service document](https://www.craigslist.org/about/terms.of.use),
we find the following text: *“You agree not to copy/collect CL content
via robots, spiders, scripts, scrapers, crawlers, or any automated or manual equivalent (e.g., by hand).”*
So unfortunately, without explicit permission, we are not allowed to scrape the website.
What to do now? Well, we *could* ask the owner of Craigslist for permission to scrape.
However, we are not likely to get a response, and even if we did they would not likely give us permission.
The more realistic answer is that we simply cannot scrape Craigslist. If we still want
to find data about rental prices in Vancouver, we must go elsewhere.
To continue learning how to scrape data from the web, let’s instead
scrape data on the population of Canadian cities from Wikipedia.
We have checked the [Terms of Service document](https://foundation.wikimedia.org/wiki/Terms_of_Use/en),
and it does not mention that web scraping is disallowed.
We will use the SelectorGadget tool to pick elements that we are interested in
(city names and population counts) and deselect others to indicate that we are not
interested in them (province names), as shown in Figure [2\.7](reading.html#fig:sg4).
Figure 2\.7: Using the SelectorGadget on a Wikipedia webpage.
We include a link to a short video tutorial on this process at the end of the chapter
in the additional resources section. SelectorGadget provides in its toolbar
the following list of CSS selectors to use:
```
td:nth-child(8) ,
td:nth-child(4) ,
.largestCities-cell-background+ td a
```
Now that we have the CSS selectors that describe the properties of the elements
that we want to target, we can use them to find certain elements in web pages and extract data.
#### Using `rvest`
We will use the `rvest` R package to scrape data from the Wikipedia page.
We start by loading the `rvest` package:
```
library(rvest)
```
Next, we tell R what page we want to scrape by providing the webpage’s URL in quotations to the function `read_html`:
```
page <- read_html("https://en.wikipedia.org/wiki/Canada")
```
The `read_html` function directly downloads the source code for the page at
the URL you specify, just like your browser would if you navigated to that site. But
instead of displaying the website to you, the `read_html` function just returns
the HTML source code itself, which we have
stored in the `page` variable. Next, we send the page object to the `html_nodes`
function, along with the CSS selectors we obtained from
the SelectorGadget tool. Make sure to surround the selectors with quotation marks; the function, `html_nodes`, expects that
argument is a string. We store the result of the `html_nodes` function in the `population_nodes` variable.
Note that below we use the `paste` function with a comma separator (`sep=","`)
to build the list of selectors. The `paste` function converts
elements to characters and combines the values into a list. We use this function to
build the list of selectors to maintain code readability; this avoids
having a very long line of code.
```
selectors <- paste("td:nth-child(8)",
"td:nth-child(4)",
".largestCities-cell-background+ td a", sep = ",")
population_nodes <- html_nodes(page, selectors)
head(population_nodes)
```
```
## {xml_nodeset (6)}
## [1] <a href="/wiki/Greater_Toronto_Area" title="Greater Toronto Area">Toronto ...
## [2] <td style="text-align:right;">6,202,225</td>
## [3] <a href="/wiki/London,_Ontario" title="London, Ontario">London</a>
## [4] <td style="text-align:right;">543,551\n</td>
## [5] <a href="/wiki/Greater_Montreal" title="Greater Montreal">Montreal</a>
## [6] <td style="text-align:right;">4,291,732</td>
```
> **Note:** `head` is a function that is often useful for viewing only a short
> summary of an R object, rather than the whole thing (which may be quite a lot
> to look at). For example, here `head` shows us only the first 6 items in the
> `population_nodes` object. Note that some R objects by default print only a
> small summary. For example, `tibble` data frames only show you the first 10 rows.
> But not *all* R objects do this, and that’s where the `head` function helps
> summarize things for you.
Each of the items in the `population_nodes` list is a *node* from the HTML
document that matches the CSS selectors you specified. A *node* is an HTML tag
pair (e.g., `<td>` and `</td>` which defines the cell of a table) combined with
the content stored between the tags. For our CSS selector `td:nth-child(4)`, an
example node that would be selected would be:
```
<td style="text-align:left;background:#f0f0f0;">
<a href="/wiki/London,_Ontario" title="London, Ontario">London</a>
</td>
```
Next we extract the meaningful data—in other words, we get rid of the
HTML code syntax and tags—from the nodes using the `html_text` function.
In the case of the example node above, `html_text` function returns `"London"`.
```
population_text <- html_text(population_nodes)
head(population_text)
```
```
## [1] "Toronto" "6,202,225" "London" "543,551\n" "Montreal" "4,291,732"
```
Fantastic! We seem to have extracted the data of interest from the
raw HTML source code. But we are not quite done; the data
is not yet in an optimal format for data analysis. Both the city names and
population are encoded as characters in a single vector, instead of being in a
data frame with one character column for city and one numeric column for
population (like a spreadsheet).
Additionally, the populations contain commas (not useful for programmatically
dealing with numbers), and some even contain a line break character at the end
(`\n`). In Chapter [3](wrangling.html#wrangling), we will learn more about how to *wrangle* data
such as this into a more useful format for data analysis using R.
### 2\.8\.2 Using an API
Rather than posting a data file at a URL for you to download, many websites these days
provide an API that must be accessed through a programming language like R. The benefit of using an API
is that data owners have much more control over the data they provide to users. However, unlike
web scraping, there is no consistent way to access an API across websites. Every website typically
has its own API designed especially for its own use case. Therefore we will just provide one example
of accessing data through an API in this book, with the hope that it gives you enough of a basic
idea that you can learn how to use another API if needed. In particular, in this book we will show you the basics
of how to use the `httr2` package in R to access data from the NASA “Astronomy Picture
of the Day” API (a great source of desktop backgrounds, by the way—take a look at the stunning
picture of the Rho\-Ophiuchi cloud complex ([NASA et al. 2023](#ref-rhoophiuchi)) in Figure [2\.8](reading.html#fig:NASA-API-Rho-Ophiuchi) from July 13, 2023!).
Figure 2\.8: The James Webb Space Telescope’s NIRCam image of the Rho Ophiuchi molecular cloud complex.
First, you will need to visit the [NASA APIs page](https://api.nasa.gov/) and generate an API key (i.e., a password used to identify you when accessing the API).
Note that a valid email address is required to
associate with the key. The signup form looks something like Figure [2\.9](reading.html#fig:NASA-API-signup).
After filling out the basic information, you will receive the token via email.
Make sure to store the key in a safe place, and keep it private.
Figure 2\.9: Generating the API access token for the NASA API
**Caution: think about your API usage carefully!**
When you access an API, you are initiating a transfer of data from a web server
to your computer. Web servers are expensive to run and do not have infinite resources.
If you try to ask for *too much data* at once, you can use up a huge amount of the server’s bandwidth.
If you try to ask for data *too frequently*—e.g., if you
make many requests to the server in quick succession—you can also bog the server down and make
it unable to talk to anyone else. Most servers have mechanisms to revoke your access if you are not
careful, but you should try to prevent issues from happening in the first place by being extra careful
with how you write and run your code. You should also keep in mind that when a website owner
grants you API access, they also usually specify a limit (or *quota*) of how much data you can ask for.
Be careful not to overrun your quota! So *before* we try to use the API, we will first visit
[the NASA website](https://api.nasa.gov/) to see what limits we should abide by when using the API.
These limits are outlined in Figure [2\.10](reading.html#fig:NASA-API-limits).
Figure 2\.10: The NASA website specifies an hourly limit of 1,000 requests.
After checking the NASA website, it seems like we can send at most 1,000 requests per hour.
That should be more than enough for our purposes in this section.
#### Accessing the NASA API
The NASA API is what is known as an *HTTP API*: this is a particularly common
kind of API, where you can obtain data simply by accessing a
particular URL as if it were a regular website. To make a query to the NASA
API, we need to specify three things. First, we specify the URL *endpoint* of
the API, which is simply a URL that helps the remote server understand which
API you are trying to access. NASA offers a variety of APIs, each with its own
endpoint; in the case of the NASA “Astronomy Picture of the Day” API, the URL
endpoint is `https://api.nasa.gov/planetary/apod`. Second, we write `?`, which denotes that a
list of *query parameters* will follow. And finally, we specify a list of
query parameters of the form `parameter=value`, separated by `&` characters. The NASA
“Astronomy Picture of the Day” API accepts the parameters shown in
Figure [2\.11](reading.html#fig:NASA-API-parameters).
Figure 2\.11: The set of parameters that you can specify when querying the NASA “Astronomy Picture of the Day” API, along with syntax, default settings, and a description of each.
So for example, to obtain the image of the day
from July 13, 2023, the API query would have two parameters: `api_key=YOUR_API_KEY`
and `date=2023-07-13`. Remember to replace `YOUR_API_KEY` with the API key you
received from NASA in your email! Putting it all together, the query will look like the following:
```
https://api.nasa.gov/planetary/apod?api_key=YOUR_API_KEY&date=2023-07-13
```
If you try putting this URL into your web browser, you’ll actually find that the server
responds to your request with some text:
```
{"date":"2023-07-13","explanation":"A mere 390 light-years away, Sun-like stars
and future planetary systems are forming in the Rho Ophiuchi molecular cloud
complex, the closest star-forming region to our fair planet. The James Webb
Space Telescope's NIRCam peered into the nearby natal chaos to capture this
infrared image at an inspiring scale. The spectacular cosmic snapshot was
released to celebrate the successful first year of Webb's exploration of the
Universe. The frame spans less than a light-year across the Rho Ophiuchi region
and contains about 50 young stars. Brighter stars clearly sport Webb's
characteristic pattern of diffraction spikes. Huge jets of shocked molecular
hydrogen blasting from newborn stars are red in the image, with the large,
yellowish dusty cavity carved out by the energetic young star near its center.
Near some stars in the stunning image are shadows cast by their protoplanetary
disks.","hdurl":"https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph.png",
"media_type":"image","service_version":"v1","title":"Webb's
Rho Ophiuchi","url":"https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph1024.png"}
```
Neat! There is definitely some data there, but it’s a bit hard to
see what it all is. As it turns out, this is a common format for data called
*JSON* (JavaScript Object Notation).
We won’t encounter this kind of data much in this book,
but for now you can interpret this data as `key : value` pairs separated by
commas. For example, if you look closely, you’ll see that the first entry is
`"date":"2023-07-13"`, which indicates that we indeed successfully received
data corresponding to July 13, 2023\.
So now our job is to do all of this programmatically in R. We will load
the `httr2` package, and construct the query using the `request` function, which takes a single URL argument;
you will recognize the same query URL that we pasted into the browser earlier.
We will then send the query using the `req_perform` function, and finally
obtain a JSON representation of the response using the `resp_body_json` function.
```
library(httr2)
req <- request("https://api.nasa.gov/planetary/apod?api_key=YOUR_API_KEY&date=2023-07-13")
resp <- req_perform(req)
nasa_data_single <- resp_body_json(resp)
nasa_data_single
```
```
## $date
## [1] "2023-07-13"
##
## $explanation
## [1] "A mere 390 light-years away, Sun-like stars and future planetary systems are forming in the Rho Ophiuchi molecular cloud complex, the closest star-forming region to our fair planet. The James Webb Space Telescope's NIRCam peered into the nearby natal chaos to capture this infrared image at an inspiring scale. The spectacular cosmic snapshot was released to celebrate the successful first year of Webb's exploration of the Universe. The frame spans less than a light-year across the Rho Ophiuchi region and contains about 50 young stars. Brighter stars clearly sport Webb's characteristic pattern of diffraction spikes. Huge jets of shocked molecular hydrogen blasting from newborn stars are red in the image, with the large, yellowish dusty cavity carved out by the energetic young star near its center. Near some stars in the stunning image are shadows cast by their protoplanetary disks."
##
## $hdurl
## [1] "https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph.png"
##
## $media_type
## [1] "image"
##
## $service_version
## [1] "v1"
##
## $title
## [1] "Webb's Rho Ophiuchi"
##
## $url
## [1] "https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph1024.png"
```
We can obtain more records at once by using the `start_date` and `end_date` parameters, as
shown in the table of parameters in [2\.11](reading.html#fig:NASA-API-parameters).
Let’s obtain all the records between May 1, 2023, and July 13, 2023, and store the result
in an object called `nasa_data`; now the response
will take the form of an R *list* (you’ll learn more about these in Chapter [3](wrangling.html#wrangling)).
Each item in the list will correspond to a single day’s record (just like the `nasa_data_single` object),
and there will be 74 items total, one for each day between the start and end dates:
```
req <- request("https://api.nasa.gov/planetary/apod?api_key=YOUR_API_KEY&start_date=2023-05-01&end_date=2023-07-13")
resp <- req_perform(req)
nasa_data <- resp_body_json(response)
length(nasa_data)
```
```
## [1] 74
```
For further data processing using the techniques in this book, you’ll need to turn this list of items
into a data frame. Here we will extract the `date`, `title`, `copyright`, and `url` variables
from the JSON data, and construct a data frame using the extracted information.
> **Note:** Understanding this code is not required for the remainder of the textbook. It is included for those
> readers who would like to parse JSON data into a data frame in their own data analyses.
```
nasa_df_all <- tibble(bind_rows(lapply(nasa_data, as.data.frame.list)))
nasa_df <- select(nasa_df_all, date, title, copyright, url)
nasa_df
```
```
## # A tibble: 74 × 4
## date title copyright url
## <chr> <chr> <chr> <chr>
## 1 2023-05-01 Carina Nebula North "\nCarlos Tayl… http…
## 2 2023-05-02 Flat Rock Hills on Mars "\nNASA, \nJPL… http…
## 3 2023-05-03 Centaurus A: A Peculiar Island of Stars "\nMarco Loren… http…
## 4 2023-05-04 The Galaxy, the Jet, and a Famous Black Hole <NA> http…
## 5 2023-05-05 Shackleton from ShadowCam <NA> http…
## 6 2023-05-06 Twilight in a Flower "Dario Giannob… http…
## 7 2023-05-07 The Helix Nebula from CFHT <NA> http…
## 8 2023-05-08 The Spanish Dancer Spiral Galaxy <NA> http…
## 9 2023-05-09 Shadows of Earth "\nMarcella Gi… http…
## 10 2023-05-10 Milky Way over Egyptian Desert "\nAmr Abdulwa… http…
## # ℹ 64 more rows
```
Success—we have created a small data set using the NASA
API! This data is also quite different from what we obtained from web scraping;
the extracted information is readily available in a JSON format, as opposed to raw
HTML code (although not *every* API will provide data in such a nice format).
From this point onward, the `nasa_df` data frame is stored on your
machine, and you can play with it to your heart’s content. For example, you can use
`write_csv` to save it to a file and `read_csv` to read it into R again later;
and after reading the next few chapters you will have the skills to
do even more interesting things! If you decide that you want
to ask any of the various NASA APIs for more data
(see [the list of awesome NASA APIS here](https://api.nasa.gov/)
for more examples of what is possible), just be mindful as usual about how much
data you are requesting and how frequently you are making requests.
#### Accessing the NASA API
The NASA API is what is known as an *HTTP API*: this is a particularly common
kind of API, where you can obtain data simply by accessing a
particular URL as if it were a regular website. To make a query to the NASA
API, we need to specify three things. First, we specify the URL *endpoint* of
the API, which is simply a URL that helps the remote server understand which
API you are trying to access. NASA offers a variety of APIs, each with its own
endpoint; in the case of the NASA “Astronomy Picture of the Day” API, the URL
endpoint is `https://api.nasa.gov/planetary/apod`. Second, we write `?`, which denotes that a
list of *query parameters* will follow. And finally, we specify a list of
query parameters of the form `parameter=value`, separated by `&` characters. The NASA
“Astronomy Picture of the Day” API accepts the parameters shown in
Figure [2\.11](reading.html#fig:NASA-API-parameters).
Figure 2\.11: The set of parameters that you can specify when querying the NASA “Astronomy Picture of the Day” API, along with syntax, default settings, and a description of each.
So for example, to obtain the image of the day
from July 13, 2023, the API query would have two parameters: `api_key=YOUR_API_KEY`
and `date=2023-07-13`. Remember to replace `YOUR_API_KEY` with the API key you
received from NASA in your email! Putting it all together, the query will look like the following:
```
https://api.nasa.gov/planetary/apod?api_key=YOUR_API_KEY&date=2023-07-13
```
If you try putting this URL into your web browser, you’ll actually find that the server
responds to your request with some text:
```
{"date":"2023-07-13","explanation":"A mere 390 light-years away, Sun-like stars
and future planetary systems are forming in the Rho Ophiuchi molecular cloud
complex, the closest star-forming region to our fair planet. The James Webb
Space Telescope's NIRCam peered into the nearby natal chaos to capture this
infrared image at an inspiring scale. The spectacular cosmic snapshot was
released to celebrate the successful first year of Webb's exploration of the
Universe. The frame spans less than a light-year across the Rho Ophiuchi region
and contains about 50 young stars. Brighter stars clearly sport Webb's
characteristic pattern of diffraction spikes. Huge jets of shocked molecular
hydrogen blasting from newborn stars are red in the image, with the large,
yellowish dusty cavity carved out by the energetic young star near its center.
Near some stars in the stunning image are shadows cast by their protoplanetary
disks.","hdurl":"https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph.png",
"media_type":"image","service_version":"v1","title":"Webb's
Rho Ophiuchi","url":"https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph1024.png"}
```
Neat! There is definitely some data there, but it’s a bit hard to
see what it all is. As it turns out, this is a common format for data called
*JSON* (JavaScript Object Notation).
We won’t encounter this kind of data much in this book,
but for now you can interpret this data as `key : value` pairs separated by
commas. For example, if you look closely, you’ll see that the first entry is
`"date":"2023-07-13"`, which indicates that we indeed successfully received
data corresponding to July 13, 2023\.
So now our job is to do all of this programmatically in R. We will load
the `httr2` package, and construct the query using the `request` function, which takes a single URL argument;
you will recognize the same query URL that we pasted into the browser earlier.
We will then send the query using the `req_perform` function, and finally
obtain a JSON representation of the response using the `resp_body_json` function.
```
library(httr2)
req <- request("https://api.nasa.gov/planetary/apod?api_key=YOUR_API_KEY&date=2023-07-13")
resp <- req_perform(req)
nasa_data_single <- resp_body_json(resp)
nasa_data_single
```
```
## $date
## [1] "2023-07-13"
##
## $explanation
## [1] "A mere 390 light-years away, Sun-like stars and future planetary systems are forming in the Rho Ophiuchi molecular cloud complex, the closest star-forming region to our fair planet. The James Webb Space Telescope's NIRCam peered into the nearby natal chaos to capture this infrared image at an inspiring scale. The spectacular cosmic snapshot was released to celebrate the successful first year of Webb's exploration of the Universe. The frame spans less than a light-year across the Rho Ophiuchi region and contains about 50 young stars. Brighter stars clearly sport Webb's characteristic pattern of diffraction spikes. Huge jets of shocked molecular hydrogen blasting from newborn stars are red in the image, with the large, yellowish dusty cavity carved out by the energetic young star near its center. Near some stars in the stunning image are shadows cast by their protoplanetary disks."
##
## $hdurl
## [1] "https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph.png"
##
## $media_type
## [1] "image"
##
## $service_version
## [1] "v1"
##
## $title
## [1] "Webb's Rho Ophiuchi"
##
## $url
## [1] "https://apod.nasa.gov/apod/image/2307/STScI-01_RhoOph1024.png"
```
We can obtain more records at once by using the `start_date` and `end_date` parameters, as
shown in the table of parameters in [2\.11](reading.html#fig:NASA-API-parameters).
Let’s obtain all the records between May 1, 2023, and July 13, 2023, and store the result
in an object called `nasa_data`; now the response
will take the form of an R *list* (you’ll learn more about these in Chapter [3](wrangling.html#wrangling)).
Each item in the list will correspond to a single day’s record (just like the `nasa_data_single` object),
and there will be 74 items total, one for each day between the start and end dates:
```
req <- request("https://api.nasa.gov/planetary/apod?api_key=YOUR_API_KEY&start_date=2023-05-01&end_date=2023-07-13")
resp <- req_perform(req)
nasa_data <- resp_body_json(resp)
length(nasa_data)
```
```
## [1] 74
```
For further data processing using the techniques in this book, you’ll need to turn this list of items
into a data frame. Here we will extract the `date`, `title`, `copyright`, and `url` variables
from the JSON data, and construct a data frame using the extracted information.
> **Note:** Understanding this code is not required for the remainder of the textbook. It is included for those
> readers who would like to parse JSON data into a data frame in their own data analyses.
```
nasa_df_all <- tibble(bind_rows(lapply(nasa_data, as.data.frame.list)))
nasa_df <- select(nasa_df_all, date, title, copyright, url)
nasa_df
```
```
## # A tibble: 74 × 4
## date title copyright url
## <chr> <chr> <chr> <chr>
## 1 2023-05-01 Carina Nebula North "\nCarlos Tayl… http…
## 2 2023-05-02 Flat Rock Hills on Mars "\nNASA, \nJPL… http…
## 3 2023-05-03 Centaurus A: A Peculiar Island of Stars "\nMarco Loren… http…
## 4 2023-05-04 The Galaxy, the Jet, and a Famous Black Hole <NA> http…
## 5 2023-05-05 Shackleton from ShadowCam <NA> http…
## 6 2023-05-06 Twilight in a Flower "Dario Giannob… http…
## 7 2023-05-07 The Helix Nebula from CFHT <NA> http…
## 8 2023-05-08 The Spanish Dancer Spiral Galaxy <NA> http…
## 9 2023-05-09 Shadows of Earth "\nMarcella Gi… http…
## 10 2023-05-10 Milky Way over Egyptian Desert "\nAmr Abdulwa… http…
## # ℹ 64 more rows
```
Success—we have created a small data set using the NASA
API! This data is also quite different from what we obtained from web scraping;
the extracted information is readily available in a JSON format, as opposed to raw
HTML code (although not *every* API will provide data in such a nice format).
From this point onward, the `nasa_df` data frame is stored on your
machine, and you can play with it to your heart’s content. For example, you can use
`write_csv` to save it to a file and `read_csv` to read it into R again later;
and after reading the next few chapters you will have the skills to
do even more interesting things! If you decide that you want
to ask any of the various NASA APIs for more data
(see [the list of awesome NASA APIs here](https://api.nasa.gov/)
for more examples of what is possible), just be mindful as usual about how much
data you are requesting and how frequently you are making requests.
2\.9 Exercises
--------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Reading in data locally and from the web” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
2\.10 Additional resources
--------------------------
* The [`readr` documentation](https://readr.tidyverse.org/)
provides the documentation for many of the reading functions we cover in this chapter.
It is where you should look if you want to learn more about the functions in this
chapter, the full set of arguments you can use, and other related functions.
The site also provides a very nice cheat sheet that summarizes many of the data
wrangling functions from this chapter.
* Sometimes you might run into data in such poor shape that none of the reading
functions we cover in this chapter work. In that case, you can consult the
[data import chapter](https://r4ds.had.co.nz/data-import.html) from *R for Data
Science* ([Wickham and Grolemund 2016](#ref-wickham2016r)), which goes into a lot more detail about how R parses
text from files into data frames.
* The [`here` R package](https://here.r-lib.org/) ([Müller 2020](#ref-here))
provides a way for you to construct or find your files’ paths.
* The [`readxl` documentation](https://readxl.tidyverse.org/) provides more
details on reading data from Excel, such as reading in data with multiple
sheets, or specifying the cells to read in.
* The [`rio` R package](https://github.com/leeper/rio) ([Leeper 2021](#ref-rio)) provides an alternative
set of tools for reading and writing data in R. It aims to be a “Swiss army
knife” for data reading/writing/converting, and supports a wide variety of data
types (including data formats generated by other statistical software like SPSS
and SAS).
* A [video](https://www.youtube.com/embed/ephId3mYu9o) from the Udacity
course *Linux Command Line Basics* provides a good explanation of absolute versus relative paths.
* If you read the subsection on obtaining data from the web via scraping and
APIs, we provide two companion tutorial video links for how to use the
SelectorGadget tool to obtain desired CSS selectors for:
+ [extracting the data for apartment listings on Craigslist](https://www.youtube.com/embed/YdIWI6K64zo), and
+ [extracting Canadian city names and populations from Wikipedia](https://www.youtube.com/embed/O9HKbdhqYzk).
* The [`polite` R package](https://dmi3kno.github.io/polite/) ([Perepolkin 2021](#ref-polite)) provides
a set of tools for responsibly scraping data from websites.
Chapter 3 Cleaning and wrangling data
=====================================
3\.1 Overview
-------------
This chapter is centered around defining tidy data—a data format that is
suitable for analysis—and the tools needed to transform raw data into this
format. This will be presented in the context of a real\-world data science
application, providing more practice working through a whole case study.
3\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Define the term “tidy data”.
* Discuss the advantages of storing data in a tidy data format.
* Define what vectors, lists, and data frames are in R, and describe how they relate to
each other.
* Describe the common types of data in R and their uses.
* Use the following functions for their intended data wrangling tasks:
+ `c`
+ `pivot_longer`
+ `pivot_wider`
+ `separate`
+ `select`
+ `filter`
+ `mutate`
+ `summarize`
+ `map`
+ `group_by`
+ `across`
+ `rowwise`
* Use the following operators for their intended data wrangling tasks:
+ `==`, `!=`, `<`, `<=`, `>`, and `>=`
+ `%in%`
+ `!`, `&`, and `|`
+ `|>` and `%>%`
3\.3 Data frames, vectors, and lists
------------------------------------
In Chapters [1](intro.html#intro) and [2](reading.html#reading), *data frames* were the focus:
we learned how to import data into R as a data frame, and perform basic operations on data frames in R.
In the remainder of this book, this pattern continues. The vast majority of tools we use will require
that data are represented as a data frame in R. Therefore, in this section,
we will dig more deeply into what data frames are and how they are represented in R.
This knowledge will be helpful in effectively utilizing these objects in our data analyses.
### 3\.3\.1 What is a data frame?
A data frame is a table\-like structure for storing data in R. Data frames are
important to learn about because most data that you will encounter in practice
can be naturally stored as a table. In order to define data frames precisely,
we need to introduce a few technical terms:
* **variable:** a characteristic, number, or quantity that can be measured.
* **observation:** all of the measurements for a given entity.
* **value:** a single measurement of a single variable for a given entity.
Given these definitions, a **data frame** is a tabular data structure in R
that is designed to store observations, variables, and their values.
Most commonly, each column in a data frame corresponds to a variable,
and each row corresponds to an observation. For example, Figure
[3\.1](wrangling.html#fig:02-obs) displays a data set of city populations. Here, the variables
are “region, year, population”; each of these are properties that can be
collected or measured. The first observation is “Toronto, 2016, 2235145”;
these are the values that the three variables take for the first entity in the
data set. There are 13 entities in the data set in total, corresponding to the
13 rows in Figure [3\.1](wrangling.html#fig:02-obs).
Figure 3\.1: A data frame storing data regarding the population of various regions in Canada. In this example data frame, the row that corresponds to the observation for the city of Vancouver is colored yellow, and the column that corresponds to the population variable is colored blue.
R stores the columns of a data frame as either
*lists* or *vectors*. For example, the data frame in Figure
[3\.2](wrangling.html#fig:02-vectors) has three vectors whose names are `region`, `year` and
`population`. The next two sections will explain what lists and vectors are.
Figure 3\.2: Data frame with three vectors.
### 3\.3\.2 What is a vector?
In R, **vectors** are objects that can contain one or more elements. The vector
elements are ordered, and they must all be of the same **data type**;
R has several different basic data types, as shown in Table [3\.1](wrangling.html#tab:datatype-table).
Figure [3\.3](wrangling.html#fig:02-vector) provides an example of a vector where all of the elements are
of character type.
You can create vectors in R using the `c` function (`c` stands for “concatenate”). For
example, to create the vector `region` as shown in Figure
[3\.3](wrangling.html#fig:02-vector), you would write:
```
region <- c("Toronto", "Montreal", "Vancouver", "Calgary", "Ottawa")
region
```
```
## [1] "Toronto" "Montreal" "Vancouver" "Calgary" "Ottawa"
```
> **Note:** Technically, these objects are called “atomic vectors.” In this book
> we have chosen to call them “vectors,” which is how they are most commonly
> referred to in the R community. To be totally precise, “vector” is an umbrella term that
> encompasses both atomic vector and list objects in R. But this creates a
> confusing situation where the term “vector” could
> mean “atomic vector” *or* “the umbrella term for atomic vector and list,”
> depending on context. Very confusing indeed! So to keep things simple, in
> this book we *always* use the term “vector” to refer to “atomic vector.”
> We encourage readers who are enthusiastic to learn more to read the
> Vectors chapter of *Advanced R* ([Wickham 2019](#ref-wickham2019advanced)).
Figure 3\.3: Example of a vector whose type is character.
Table 3\.1: Basic data types in R
| Data type | Abbreviation | Description | Example |
| --- | --- | --- | --- |
| character | chr | letters or numbers surrounded by quotes | “1” , “Hello world!” |
| double | dbl | numbers with decimal values | 1\.2333 |
| integer | int | numbers that do not contain decimals | 1L, 20L (where “L” tells R to store as an integer) |
| logical | lgl | either true or false | `TRUE`, `FALSE` |
| factor | fct | used to represent data with a limited number of values (usually categories) | a `color` variable with levels `red`, `green` and `orange` |
It is important in R to make sure you represent your data with the correct type.
Many of the `tidyverse` functions we use in this book treat
the various data types differently. You should use integers and double types
(which both fall under the “numeric” umbrella type) to represent numbers and perform
arithmetic. Doubles are more common than integers in R, though; for instance, a double data type is the
default when you create a vector of numbers using `c()`, and when you read in
whole numbers via `read_csv`. Characters are used to represent data that should
be thought of as “text”, such as words, names, paths, URLs, and more. Factors help us
encode variables that represent *categories*; a factor variable takes one of a discrete
set of values known as *levels* (one for each category). The levels can be ordered or unordered. Even though
factors can sometimes *look* like characters, they are not used to represent
text, words, names, and paths in the way that characters are; in fact, R
internally stores factors using integers! There are other basic data types in R, such as *raw*
and *complex*, but we do not use these in this textbook.
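To see these types in action, here is a short sketch (the values are purely illustrative) that checks a few vectors with the base R `typeof` and `class` functions:
```
typeof(c(1.5, 2.3)) # "double" (the default for numbers)
typeof(c(1L, 20L)) # "integer" (the "L" suffix forces an integer)
typeof(c("Hello", "world")) # "character"
typeof(c(TRUE, FALSE)) # "logical"
class(factor(c("red", "green", "orange"))) # "factor" (stored internally as integers)
```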
### 3\.3\.3 What is a list?
Lists are also objects in R that have multiple, ordered elements.
Vectors and lists differ by the requirement of element type
consistency. All elements within a single vector must be of the same type (e.g.,
all elements are characters), whereas elements within a single list can be of
different types (e.g., characters, integers, logicals, and even other lists). See Figure [3\.4](wrangling.html#fig:02-vec-vs-list).
Figure 3\.4: A vector versus a list.
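For example, here is a quick sketch (with illustrative values) of creating a list with the `list` function; note how it freely mixes element types in a way a single vector cannot:
```
# A list can hold a character, an integer, a logical, and even another vector.
mixed_list <- list("Toronto", 2016L, TRUE, c(1.5, 2.7))
mixed_list
```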
### 3\.3\.4 What does this have to do with data frames?
A data frame is really a special kind of list that follows two rules:
1. Each element itself must either be a vector or a list.
2. Each element (vector or list) must have the same length.
Not all columns in a data frame need to be of the same type.
Figure [3\.5](wrangling.html#fig:02-dataframe) shows a data frame where
the columns are vectors of different types.
But remember: because the columns in this example are *vectors*,
the elements must be the same data type *within each column.*
On the other hand, if our data frame had *list* columns, there would be no such requirement.
It is generally much more common to use *vector* columns, though,
as the values for a single variable are usually all of the same type.
Figure 3\.5: Data frame and vector types.
Data frames are actually included in R itself, without the need for any additional packages. However, the
`tidyverse` functions that we use
throughout this book all work with a special kind of data frame called a *tibble*. Tibbles have some additional
features and benefits over built\-in data frames in R. These include the
ability to add useful attributes (such as grouping, which we will discuss later)
and more predictable type preservation when subsetting.
Because a tibble is just a data frame with some added features,
we will collectively refer to both built\-in R data frames and
tibbles as *data frames* in this book.
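As a concrete illustration of these ideas, the sketch below builds a small tibble whose columns are vectors of different types but equal length. The Toronto population value matches the first observation described above; the other two numbers are made up for illustration only.
```
library(tidyverse)

small_df <- tibble(
  region = c("Toronto", "Montréal", "Vancouver"), # character vector
  year = c(2016L, 2016L, 2016L), # integer vector
  population = c(2235145, 1700000, 630000) # double vector (last two values made up)
)
small_df
```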
> **Note:** You can use the function `class` on a data object to assess whether a data
> frame is a built\-in R data frame or a tibble. If the data object is a data
> frame, `class` will return `"data.frame"`. If the data object is a
> tibble it will return `"tbl_df" "tbl" "data.frame"`. You can easily convert
> built\-in R data frames to tibbles using the `tidyverse` `as_tibble` function
> (see the short sketch after this note).
> For example, we can check the class of the Canadian languages data set,
> `can_lang`, that we worked with in the previous chapters, and we see that it is a tibble.
>
>
>
> ```
> class(can_lang)
> ```
>
>
> ```
> ## [1] "spec_tbl_df" "tbl_df" "tbl" "data.frame"
> ```
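As a small illustration of the `as_tibble` conversion mentioned in the note above, here is a sketch using `cars`, a tiny data frame that ships with base R:
```
library(tidyverse)

class(cars) # a built-in R data frame: "data.frame"
cars_tbl <- as_tibble(cars)
class(cars_tbl) # now "tbl_df" "tbl" "data.frame"
```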
Vectors, data frames and lists are basic types of *data structure* in R, which
are core to most data analyses. We summarize them in Table
[3\.2](wrangling.html#tab:datastructure-table). There are several other data structures in the R programming
language (*e.g.,* matrices), but these are beyond the scope of this book.
Table 3\.2: Basic data structures in R
| Data Structure | Description |
| --- | --- |
| vector | An ordered collection of one, or more, values of the *same data type*. |
| list | An ordered collection of one, or more, values of *possibly different data types*. |
| data frame | A list of either vectors or lists of the *same length*, with column names. We typically use a data frame to represent a data set. |
3\.4 Tidy data
--------------
There are many ways a tabular data set can be organized. This chapter will focus
on introducing the **tidy data** format of organization and how to make your raw
(and likely messy) data tidy. A tidy data frame satisfies
the following three criteria ([Wickham 2014](#ref-wickham2014tidy)):
* each row is a single observation,
* each column is a single variable, and
* each value is a single cell (i.e., its entry in the data
frame is not shared with another value).
Figure [3\.6](wrangling.html#fig:02-tidy-image) demonstrates a tidy data set that satisfies these
three criteria.
Figure 3\.6: Tidy data satisfies three criteria.
There are many good reasons for making sure your data are tidy as a first step in your analysis.
The most important is that it is a single, consistent format that nearly every function
in the `tidyverse` recognizes. No matter what the variables and observations
in your data represent, as long as the data frame
is tidy, you can manipulate it, plot it, and analyze it using the same tools.
If your data is *not* tidy, you will have to write special bespoke code
in your analysis that will not only be error\-prone, but hard for others to understand.
Beyond making your analysis more accessible to others and less error\-prone, tidy data
is also typically easy for humans to interpret. Given these benefits,
it is well worth spending the time to get your data into a tidy format
upfront. Fortunately, there are many well\-designed `tidyverse` data
cleaning/wrangling tools to help you easily tidy your data. Let’s explore them
below!
> **Note:** Is there only one shape for tidy data for a given data set? Not
> necessarily! It depends on the statistical question you are asking and what
> the variables are for that question. For tidy data, each variable should be
> its own column. So, just as it’s essential to match your statistical question
> with the appropriate data analysis tool, it’s important to match your
> statistical question with the appropriate variables and ensure they are
> represented as individual columns to make the data tidy.
### 3\.4\.1 Tidying up: going from wide to long using `pivot_longer`
One task that is commonly performed to get data into a tidy format
is to combine values that are stored in separate columns,
but are really part of the same variable, into one.
Data is often stored this way
because this format is sometimes more intuitive for human readability
and understanding, and humans create data sets.
In Figure [3\.7](wrangling.html#fig:02-wide-to-long),
the table on the left is in an untidy, “wide” format because the year values
(2006, 2011, 2016\) are stored as column names.
And as a consequence,
the values for population for the various cities
over these years are also split across several columns.
For humans, this table is easy to read, which is why you will often find data
stored in this wide format. However, this format is difficult to work with
when performing data visualization or statistical analysis using R. For
example, if we wanted to find the latest year it would be challenging because
the year values are stored as column names instead of as values in a single
column. So before we could apply a function to find the latest year (for
example, by using `max`), we would have to first extract the column names
to get them as a vector and then apply a function to extract the latest year.
The problem only gets worse if you would like to find the value for the
population for a given region for the latest year. Both of these tasks are
greatly simplified once the data is tidied.
Another problem with data in this format is that we don’t know what the
numbers under each year actually represent. Do those numbers represent
population size? Land area? It’s not clear.
To solve both of these problems,
we can reshape this data set to a tidy data format
by creating a column called “year” and a column called
“population.” This transformation—which makes the data
“longer”—is shown as the right table in
Figure [3\.7](wrangling.html#fig:02-wide-to-long).
Figure 3\.7: Pivoting data from a wide to long data format.
We can achieve this effect in R using the `pivot_longer` function from the `tidyverse` package.
The `pivot_longer` function combines columns,
and is usually used during tidying data
when we need to make the data frame longer and narrower.
To learn how to use `pivot_longer`, we will work through an example with the
`region_lang_top5_cities_wide.csv` data set. This data set contains the
counts of how many Canadians cited each language as their mother tongue for five
major Canadian cities (Toronto, Montréal, Vancouver, Calgary, and Edmonton) from
the 2016 Canadian census.
To get started,
we will load the `tidyverse` package and use `read_csv` to load the (untidy) data.
```
library(tidyverse)
lang_wide <- read_csv("data/region_lang_top5_cities_wide.csv")
lang_wide
```
```
## # A tibble: 214 × 7
## category language Toronto Montréal Vancouver Calgary Edmonton
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal languages Aborigi… 80 30 70 20 25
## 2 Non-Official & Non-Abor… Afrikaa… 985 90 1435 960 575
## 3 Non-Official & Non-Abor… Afro-As… 360 240 45 45 65
## 4 Non-Official & Non-Abor… Akan (T… 8485 1015 400 705 885
## 5 Non-Official & Non-Abor… Albanian 13260 2450 1090 1365 770
## 6 Aboriginal languages Algonqu… 5 5 0 0 0
## 7 Aboriginal languages Algonqu… 5 30 5 5 0
## 8 Non-Official & Non-Abor… America… 470 50 265 100 180
## 9 Non-Official & Non-Abor… Amharic 7460 665 1140 4075 2515
## 10 Non-Official & Non-Abor… Arabic 85175 151955 14320 18965 17525
## # ℹ 204 more rows
```
What is wrong with the untidy format above?
The table on the left in Figure [3\.8](wrangling.html#fig:img-pivot-longer-with-table)
represents the data in the “wide” (messy) format.
From a data analysis perspective, this format is not ideal because the values of
the variable *region* (Toronto, Montréal, Vancouver, Calgary, and Edmonton)
are stored as column names. Thus they
are not easily accessible to the data analysis functions we will apply
to our data set. Additionally, the *mother tongue* variable values are
spread across multiple columns, which will prevent us from doing any desired
visualization or statistical tasks until we combine them into one column. For
instance, suppose we want to know the languages with the highest number of
Canadians reporting it as their mother tongue among all five regions. This
question would be tough to answer with the data in its current format.
We *could* find the answer with the data in this format,
though it would be much easier to answer if we tidy our
data first. If mother tongue were instead stored as one column,
as shown in the tidy data on the right in
Figure [3\.8](wrangling.html#fig:img-pivot-longer-with-table),
we could simply use the `max` function in one line of code
to get the maximum value.
Figure 3\.8: Going from wide to long with the `pivot_longer` function.
Figure [3\.9](wrangling.html#fig:img-pivot-longer) details the arguments that we need to specify
in the `pivot_longer` function to accomplish this data transformation.
Figure 3\.9: Syntax for the `pivot_longer` function.
We use `pivot_longer` to combine the Toronto, Montréal,
Vancouver, Calgary, and Edmonton columns into a single column called `region`,
and create a column called `mother_tongue` that contains the count of how many
Canadians report each language as their mother tongue for each metropolitan
area. We use a colon `:` between Toronto and Edmonton to tell R to select all
the columns between Toronto and Edmonton:
```
lang_mother_tidy <- pivot_longer(lang_wide,
cols = Toronto:Edmonton,
names_to = "region",
values_to = "mother_tongue"
)
lang_mother_tidy
```
```
## # A tibble: 1,070 × 4
## category language region mother_tongue
## <chr> <chr> <chr> <dbl>
## 1 Aboriginal languages Aboriginal lang… Toron… 80
## 2 Aboriginal languages Aboriginal lang… Montr… 30
## 3 Aboriginal languages Aboriginal lang… Vanco… 70
## 4 Aboriginal languages Aboriginal lang… Calga… 20
## 5 Aboriginal languages Aboriginal lang… Edmon… 25
## 6 Non-Official & Non-Aboriginal languages Afrikaans Toron… 985
## 7 Non-Official & Non-Aboriginal languages Afrikaans Montr… 90
## 8 Non-Official & Non-Aboriginal languages Afrikaans Vanco… 1435
## 9 Non-Official & Non-Aboriginal languages Afrikaans Calga… 960
## 10 Non-Official & Non-Aboriginal languages Afrikaans Edmon… 575
## # ℹ 1,060 more rows
```
> **Note**: In the code above, the call to the
> `pivot_longer` function is split across several lines. This is allowed in
> certain cases; for example, when calling a function as above, as long as the
> line ends with a comma `,` R knows to keep reading on the next line.
> Splitting long lines like this across multiple lines is encouraged
> as it helps significantly with code readability. Generally speaking, you should
> limit each line of code to about 80 characters.
The data above is now tidy because all three criteria for tidy data have now
been met:
1. All the variables (`category`, `language`, `region` and `mother_tongue`) are
now their own columns in the data frame.
2. Each observation, (i.e., each language in a region) is in a single row.
3. Each value is a single cell, i.e., its row, column position in the data
frame is not shared with another value.
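With the data in this tidy shape, the one-line `max` computation alluded to earlier is straightforward; a minimal sketch using base R's `$` to extract the column as a vector:
```
# Largest mother tongue count across all languages and regions.
max(lang_mother_tidy$mother_tongue)
```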
### 3\.4\.2 Tidying up: going from long to wide using `pivot_wider`
Suppose we have observations spread across multiple rows rather than in a single
row. For example, in Figure [3\.10](wrangling.html#fig:long-to-wide), the table on the left is in an
untidy, long format because the `count` column contains three variables
(population, commuter count, and year the city was incorporated)
and information about each observation
(here, population, commuter, and incorporated values for a region) is split across three rows.
Remember: one of the criteria for tidy data
is that each observation must be in a single row.
Using data in this format—where two or more variables are mixed together
in a single column—makes it harder to apply many usual `tidyverse` functions.
For example, finding the maximum number of commuters
would require an additional step of filtering for the commuter values
before the maximum can be computed.
In comparison, if the data were tidy,
all we would have to do is compute the maximum value for the commuter column.
To reshape this untidy data set to a tidy (and in this case, wider) format,
we need to create columns called “population”, “commuters”, and “incorporated.”
This is illustrated in the right table of Figure [3\.10](wrangling.html#fig:long-to-wide).
Figure 3\.10: Going from long to wide data.
To tidy this type of data in R, we can use the `pivot_wider` function.
The `pivot_wider` function generally increases the number of columns (widens)
and decreases the number of rows in a data set.
To learn how to use `pivot_wider`,
we will work through an example
with the `region_lang_top5_cities_long.csv` data set.
This data set contains the number of Canadians reporting
the primary language at home and work for five
major cities (Toronto, Montréal, Vancouver, Calgary, and Edmonton).
```
lang_long <- read_csv("data/region_lang_top5_cities_long.csv")
lang_long
```
```
## # A tibble: 2,140 × 5
## region category language type count
## <chr> <chr> <chr> <chr> <dbl>
## 1 Montréal Aboriginal languages Aboriginal languages, n.o.s. most_at_home 15
## 2 Montréal Aboriginal languages Aboriginal languages, n.o.s. most_at_work 0
## 3 Toronto Aboriginal languages Aboriginal languages, n.o.s. most_at_home 50
## 4 Toronto Aboriginal languages Aboriginal languages, n.o.s. most_at_work 0
## 5 Calgary Aboriginal languages Aboriginal languages, n.o.s. most_at_home 5
## 6 Calgary Aboriginal languages Aboriginal languages, n.o.s. most_at_work 0
## 7 Edmonton Aboriginal languages Aboriginal languages, n.o.s. most_at_home 10
## 8 Edmonton Aboriginal languages Aboriginal languages, n.o.s. most_at_work 0
## 9 Vancouver Aboriginal languages Aboriginal languages, n.o.s. most_at_home 15
## 10 Vancouver Aboriginal languages Aboriginal languages, n.o.s. most_at_work 0
## # ℹ 2,130 more rows
```
What makes the data set shown above untidy?
In this example, each observation is a language in a region.
However, each observation is split across multiple rows:
one where the count for `most_at_home` is recorded,
and the other where the count for `most_at_work` is recorded.
Suppose the goal with this data was to
visualize the relationship between the number of
Canadians reporting their primary language at home and work.
Doing that would be difficult with this data in its current form,
since these two variables are stored in the same column.
Figure [3\.11](wrangling.html#fig:img-pivot-wider-table) shows how this data
will be tidied using the `pivot_wider` function.
Figure 3\.11: Going from long to wide with the `pivot_wider` function.
Figure [3\.12](wrangling.html#fig:img-pivot-wider) details the arguments that we need to specify
in the `pivot_wider` function.
Figure 3\.12: Syntax for the `pivot_wider` function.
We will apply the function as detailed in Figure [3\.12](wrangling.html#fig:img-pivot-wider).
```
lang_home_tidy <- pivot_wider(lang_long,
names_from = type,
values_from = count
)
lang_home_tidy
```
```
## # A tibble: 1,070 × 5
## region category language most_at_home most_at_work
## <chr> <chr> <chr> <dbl> <dbl>
## 1 Montréal Aboriginal languages Aborigi… 15 0
## 2 Toronto Aboriginal languages Aborigi… 50 0
## 3 Calgary Aboriginal languages Aborigi… 5 0
## 4 Edmonton Aboriginal languages Aborigi… 10 0
## 5 Vancouver Aboriginal languages Aborigi… 15 0
## 6 Montréal Non-Official & Non-Aboriginal l… Afrikaa… 10 0
## 7 Toronto Non-Official & Non-Aboriginal l… Afrikaa… 265 0
## 8 Calgary Non-Official & Non-Aboriginal l… Afrikaa… 505 15
## 9 Edmonton Non-Official & Non-Aboriginal l… Afrikaa… 300 0
## 10 Vancouver Non-Official & Non-Aboriginal l… Afrikaa… 520 10
## # ℹ 1,060 more rows
```
The data above is now tidy! We can go through the three criteria again to check
that this data is a tidy data set.
1. All the statistical variables are their own columns in the data frame (i.e.,
`most_at_home`, and `most_at_work` have been separated into their own
columns in the data frame).
2. Each observation, (i.e., each language in a region) is in a single row.
3. Each value is a single cell (i.e., its row, column position in the data
frame is not shared with another value).
You might notice that we have the same number of columns in the tidy data set as
we did in the messy one. Therefore `pivot_wider` didn’t really “widen” the data,
as the name suggests. This is just because the original `type` column only had
two categories in it. If it had more than two, `pivot_wider` would have created
more columns, and we would see the data set “widen.”
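To see the widening effect, here is a toy sketch with entirely made-up data where the `type` column has three categories; `pivot_wider` then produces three new columns:
```
toy_long <- tibble(
  region = c("A", "A", "A", "B", "B", "B"),
  type = rep(c("most_at_home", "most_at_work", "most_at_school"), 2),
  count = c(10, 5, 2, 20, 8, 3)
)
# The result has one row per region and three count columns.
pivot_wider(toy_long, names_from = type, values_from = count)
```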
### 3\.4\.3 Tidying up: using `separate` to deal with multiple delimiters
Data are also not considered tidy when multiple values are stored in the same
cell. The data set we show below is even messier than the ones we dealt with
above: the `Toronto`, `Montréal`, `Vancouver`, `Calgary`, and `Edmonton` columns
contain the number of Canadians reporting their primary language at home and
work in one column separated by the delimiter (`/`). The column names are the
values of a variable, *and* each value does not have its own cell! To turn this
messy data into tidy data, we’ll have to fix these issues.
```
lang_messy <- read_csv("data/region_lang_top5_cities_messy.csv")
lang_messy
```
```
## # A tibble: 214 × 7
## category language Toronto Montréal Vancouver Calgary Edmonton
## <chr> <chr> <chr> <chr> <chr> <chr> <chr>
## 1 Aboriginal languages Aborigi… 50/0 15/0 15/0 5/0 10/0
## 2 Non-Official & Non-Abor… Afrikaa… 265/0 10/0 520/10 505/15 300/0
## 3 Non-Official & Non-Abor… Afro-As… 185/10 65/0 10/0 15/0 20/0
## 4 Non-Official & Non-Abor… Akan (T… 4045/20 440/0 125/10 330/0 445/0
## 5 Non-Official & Non-Abor… Albanian 6380/2… 1445/20 530/10 620/25 370/10
## 6 Aboriginal languages Algonqu… 5/0 0/0 0/0 0/0 0/0
## 7 Aboriginal languages Algonqu… 0/0 10/0 0/0 0/0 0/0
## 8 Non-Official & Non-Abor… America… 720/245 70/0 300/140 85/25 190/85
## 9 Non-Official & Non-Abor… Amharic 3820/55 315/0 540/10 2730/50 1695/35
## 10 Non-Official & Non-Abor… Arabic 45025/… 72980/1… 8680/275 11010/… 10590/3…
## # ℹ 204 more rows
```
First we’ll use `pivot_longer` to create two columns, `region` and `value`,
similar to what we did previously.
The new `region` column will contain the region names,
and the new column `value` will be a temporary holding place for the
data that we need to further separate, i.e., the
number of Canadians reporting their primary language at home and work.
```
lang_messy_longer <- pivot_longer(lang_messy,
cols = Toronto:Edmonton,
names_to = "region",
values_to = "value"
)
lang_messy_longer
```
```
## # A tibble: 1,070 × 4
## category language region value
## <chr> <chr> <chr> <chr>
## 1 Aboriginal languages Aboriginal languages, n… Toron… 50/0
## 2 Aboriginal languages Aboriginal languages, n… Montr… 15/0
## 3 Aboriginal languages Aboriginal languages, n… Vanco… 15/0
## 4 Aboriginal languages Aboriginal languages, n… Calga… 5/0
## 5 Aboriginal languages Aboriginal languages, n… Edmon… 10/0
## 6 Non-Official & Non-Aboriginal languages Afrikaans Toron… 265/0
## 7 Non-Official & Non-Aboriginal languages Afrikaans Montr… 10/0
## 8 Non-Official & Non-Aboriginal languages Afrikaans Vanco… 520/…
## 9 Non-Official & Non-Aboriginal languages Afrikaans Calga… 505/…
## 10 Non-Official & Non-Aboriginal languages Afrikaans Edmon… 300/0
## # ℹ 1,060 more rows
```
Next we’ll use `separate` to split the `value` column into two columns.
One column will contain only the counts of Canadians
that speak each language most at home,
and the other will contain the counts of Canadians
that speak each language most at work for each region.
Figure [3\.13](wrangling.html#fig:img-separate)
outlines what we need to specify to use `separate`.
Figure 3\.13: Syntax for the `separate` function.
```
tidy_lang <- separate(lang_messy_longer,
col = value,
into = c("most_at_home", "most_at_work"),
sep = "/"
)
tidy_lang
```
```
## # A tibble: 1,070 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <chr> <chr>
## 1 Aboriginal languages Aborigi… Toron… 50 0
## 2 Aboriginal languages Aborigi… Montr… 15 0
## 3 Aboriginal languages Aborigi… Vanco… 15 0
## 4 Aboriginal languages Aborigi… Calga… 5 0
## 5 Aboriginal languages Aborigi… Edmon… 10 0
## 6 Non-Official & Non-Aboriginal lang… Afrikaa… Toron… 265 0
## 7 Non-Official & Non-Aboriginal lang… Afrikaa… Montr… 10 0
## 8 Non-Official & Non-Aboriginal lang… Afrikaa… Vanco… 520 10
## 9 Non-Official & Non-Aboriginal lang… Afrikaa… Calga… 505 15
## 10 Non-Official & Non-Aboriginal lang… Afrikaa… Edmon… 300 0
## # ℹ 1,060 more rows
```
Is this data set now tidy? If we recall the three criteria for tidy data:
* each row is a single observation,
* each column is a single variable, and
* each value is a single cell.
We can see that this data now satisfies all three criteria, making it easier to
analyze. But we aren’t done yet! Notice in the table above that the word
`<chr>` appears beneath each of the column names. The word under the column name
indicates the data type of each column. Here all of the variables are
“character” data types. Recall that character data types are letter(s) or digit(s)
surrounded by quotes. In the previous example in Section [3\.4\.2](wrangling.html#pivot-wider), the
`most_at_home` and `most_at_work` variables were `<dbl>` (double)—you can
verify this by looking at the tables in the previous sections—which is a type
of numeric data. This change is due to the delimiter (`/`) when we read in this
messy data set. R read these columns in as character types, and by default,
`separate` will return columns as character data types.
It makes sense for `region`, `category`, and `language` to be stored as a
character (or perhaps factor) type. However, suppose we want to apply any functions that treat the
`most_at_home` and `most_at_work` columns as a number (e.g., finding rows
above a numeric threshold of a column).
In that case,
it won’t be possible to do if the variable is stored as a `character`.
Fortunately, the `separate` function provides a natural way to fix problems
like this: we can set `convert = TRUE` to convert the `most_at_home`
and `most_at_work` columns to the correct data type.
```
tidy_lang <- separate(lang_messy_longer,
col = value,
into = c("most_at_home", "most_at_work"),
sep = "/",
convert = TRUE
)
tidy_lang
```
```
## # A tibble: 1,070 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Aboriginal languages Aborigi… Toron… 50 0
## 2 Aboriginal languages Aborigi… Montr… 15 0
## 3 Aboriginal languages Aborigi… Vanco… 15 0
## 4 Aboriginal languages Aborigi… Calga… 5 0
## 5 Aboriginal languages Aborigi… Edmon… 10 0
## 6 Non-Official & Non-Aboriginal lang… Afrikaa… Toron… 265 0
## 7 Non-Official & Non-Aboriginal lang… Afrikaa… Montr… 10 0
## 8 Non-Official & Non-Aboriginal lang… Afrikaa… Vanco… 520 10
## 9 Non-Official & Non-Aboriginal lang… Afrikaa… Calga… 505 15
## 10 Non-Official & Non-Aboriginal lang… Afrikaa… Edmon… 300 0
## # ℹ 1,060 more rows
```
Now we see `<int>` appears under the `most_at_home` and `most_at_work` columns,
indicating they are integer data types (i.e., numbers)!
3\.5 Using `select` to extract a range of columns
-------------------------------------------------
Now that the `tidy_lang` data is indeed *tidy*, we can start manipulating it
using the powerful suite of functions from the `tidyverse`.
For the first example, recall the `select` function from Chapter [1](intro.html#intro),
which lets us create a subset of columns from a data frame.
Suppose we wanted to select only the columns `language`, `region`,
`most_at_home` and `most_at_work` from the `tidy_lang` data set. Using what we
learned in Chapter [1](intro.html#intro), we would pass the `tidy_lang` data frame as
well as all of these column names into the `select` function:
```
selected_columns <- select(tidy_lang,
language,
region,
most_at_home,
most_at_work)
selected_columns
```
```
## # A tibble: 1,070 × 4
## language region most_at_home most_at_work
## <chr> <chr> <int> <int>
## 1 Aboriginal languages, n.o.s. Toronto 50 0
## 2 Aboriginal languages, n.o.s. Montréal 15 0
## 3 Aboriginal languages, n.o.s. Vancouver 15 0
## 4 Aboriginal languages, n.o.s. Calgary 5 0
## 5 Aboriginal languages, n.o.s. Edmonton 10 0
## 6 Afrikaans Toronto 265 0
## 7 Afrikaans Montréal 10 0
## 8 Afrikaans Vancouver 520 10
## 9 Afrikaans Calgary 505 15
## 10 Afrikaans Edmonton 300 0
## # ℹ 1,060 more rows
```
Here we wrote out the names of each of the columns. However, this method is
time\-consuming, especially if you have a lot of columns! Another approach is to
use a “select helper”. Select helpers are operators that make it easier for
us to select columns. For instance, we can use a select helper to choose a
range of columns rather than typing each column name out. To do this, we use the
colon (`:`) operator to denote the range. For example, to get all the columns in
the `tidy_lang` data frame from `language` to `most_at_work` we pass
`language:most_at_work` as the second argument to the `select` function.
```
column_range <- select(tidy_lang, language:most_at_work)
column_range
```
```
## # A tibble: 1,070 × 4
## language region most_at_home most_at_work
## <chr> <chr> <int> <int>
## 1 Aboriginal languages, n.o.s. Toronto 50 0
## 2 Aboriginal languages, n.o.s. Montréal 15 0
## 3 Aboriginal languages, n.o.s. Vancouver 15 0
## 4 Aboriginal languages, n.o.s. Calgary 5 0
## 5 Aboriginal languages, n.o.s. Edmonton 10 0
## 6 Afrikaans Toronto 265 0
## 7 Afrikaans Montréal 10 0
## 8 Afrikaans Vancouver 520 10
## 9 Afrikaans Calgary 505 15
## 10 Afrikaans Edmonton 300 0
## # ℹ 1,060 more rows
```
Notice that we get the same output as we did above,
but with less (and clearer!) code. This type of operator
is especially handy for large data sets.
Suppose instead we wanted to extract columns that followed a particular pattern
rather than just selecting a range. For example, let’s say we wanted only to select the
columns `most_at_home` and `most_at_work`. There are other helpers that allow
us to select variables based on their names. In particular, we can use the `select` helper
`starts_with` to choose only the columns that start with the word “most”:
```
select(tidy_lang, starts_with("most"))
```
```
## # A tibble: 1,070 × 2
## most_at_home most_at_work
## <int> <int>
## 1 50 0
## 2 15 0
## 3 15 0
## 4 5 0
## 5 10 0
## 6 265 0
## 7 10 0
## 8 520 10
## 9 505 15
## 10 300 0
## # ℹ 1,060 more rows
```
We could also have chosen the columns containing an underscore `_` by adding
`contains("_")` as the second argument in the `select` function, since we notice
the columns we want contain underscores and the others don’t.
```
select(tidy_lang, contains("_"))
```
```
## # A tibble: 1,070 × 2
## most_at_home most_at_work
## <int> <int>
## 1 50 0
## 2 15 0
## 3 15 0
## 4 5 0
## 5 10 0
## 6 265 0
## 7 10 0
## 8 520 10
## 9 505 15
## 10 300 0
## # ℹ 1,060 more rows
```
There are many different `select` helpers that select
variables based on certain criteria.
The additional resources section at the end of this chapter
provides a comprehensive resource on `select` helpers.
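For instance, `ends_with` mirrors `starts_with`; the quick sketch below keeps only the column whose name ends in “work”:
```
# Returns just the most_at_work column of tidy_lang.
select(tidy_lang, ends_with("work"))
```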
3\.6 Using `filter` to extract rows
-----------------------------------
Next, we revisit the `filter` function from Chapter [1](intro.html#intro),
which lets us create a subset of rows from a data frame.
Recall the two main arguments to the `filter` function:
the first is the name of the data frame object, and
the second is a *logical statement* to use when filtering the rows.
`filter` works by returning the rows where the logical statement evaluates to `TRUE`.
This section will highlight more advanced usage of the `filter` function.
In particular, this section provides an in\-depth treatment of the variety of logical statements
one can use in the `filter` function to select subsets of rows.
### 3\.6\.1 Extracting rows that have a certain value with `==`
Suppose we are only interested in the subset of rows in `tidy_lang` corresponding to the
official languages of Canada (English and French).
We can `filter` for these rows by using the *equivalency operator* (`==`)
to compare the values of the `category` column
with the value `"Official languages"`.
With these arguments, `filter` returns a data frame with all the columns
of the input data frame
but only the rows we asked for in the logical statement, i.e.,
those where the `category` column holds the value `"Official languages"`.
We name this data frame `official_langs`.
```
official_langs <- filter(tidy_lang, category == "Official languages")
official_langs
```
```
## # A tibble: 10 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages English Toronto 3836770 3218725
## 2 Official languages English Montréal 620510 412120
## 3 Official languages English Vancouver 1622735 1330555
## 4 Official languages English Calgary 1065070 844740
## 5 Official languages English Edmonton 1050410 792700
## 6 Official languages French Toronto 29800 11940
## 7 Official languages French Montréal 2669195 1607550
## 8 Official languages French Vancouver 8630 3245
## 9 Official languages French Calgary 8630 2140
## 10 Official languages French Edmonton 10950 2520
```
### 3\.6\.2 Extracting rows that do not have a certain value with `!=`
What if we want all the other language categories in the data set *except* for
those in the `"Official languages"` category? We can accomplish this with the `!=`
operator, which means “not equal to”. So if we want to find all the rows
where the `category` does *not* equal `"Official languages"` we write the code
below.
```
filter(tidy_lang, category != "Official languages")
```
```
## # A tibble: 1,060 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Aboriginal languages Aborigi… Toron… 50 0
## 2 Aboriginal languages Aborigi… Montr… 15 0
## 3 Aboriginal languages Aborigi… Vanco… 15 0
## 4 Aboriginal languages Aborigi… Calga… 5 0
## 5 Aboriginal languages Aborigi… Edmon… 10 0
## 6 Non-Official & Non-Aboriginal lang… Afrikaa… Toron… 265 0
## 7 Non-Official & Non-Aboriginal lang… Afrikaa… Montr… 10 0
## 8 Non-Official & Non-Aboriginal lang… Afrikaa… Vanco… 520 10
## 9 Non-Official & Non-Aboriginal lang… Afrikaa… Calga… 505 15
## 10 Non-Official & Non-Aboriginal lang… Afrikaa… Edmon… 300 0
## # ℹ 1,050 more rows
```
### 3\.6\.3 Extracting rows satisfying multiple conditions using `,` or `&`
Suppose now we want to look at only the rows
for the French language in Montréal.
To do this, we need to filter the data set
to find rows that satisfy multiple conditions simultaneously.
We can do this with the comma symbol (`,`), which in the case of `filter`
is interpreted by R as “and”.
We write the code as shown below to filter the `official_langs` data frame
to subset the rows where `region == "Montréal"`
*and* the `language == "French"`.
```
filter(official_langs, region == "Montréal", language == "French")
```
```
## # A tibble: 1 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages French Montréal 2669195 1607550
```
We can also use the ampersand (`&`) logical operator, which gives
us cases where *both* one condition *and* another condition
are satisfied. You can use either comma (`,`) or ampersand (`&`) in the `filter`
function interchangeably.
```
filter(official_langs, region == "Montréal" & language == "French")
```
```
## # A tibble: 1 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages French Montréal 2669195 1607550
```
### 3\.6\.4 Extracting rows satisfying at least one condition using `|`
Suppose we were interested in only those rows corresponding to cities in Alberta
in the `official_langs` data set (Edmonton and Calgary).
We can’t use `,` as we did above because `region`
cannot be both Edmonton *and* Calgary simultaneously.
Instead, we can use the vertical pipe (`|`) logical operator,
which gives us the cases where one condition *or*
another condition *or* both are satisfied.
In the code below, we ask R to return the rows
where the `region` columns are equal to “Calgary” *or* “Edmonton”.
```
filter(official_langs, region == "Calgary" | region == "Edmonton")
```
```
## # A tibble: 4 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages English Calgary 1065070 844740
## 2 Official languages English Edmonton 1050410 792700
## 3 Official languages French Calgary 8630 2140
## 4 Official languages French Edmonton 10950 2520
```
### 3\.6\.5 Extracting rows with values in a vector using `%in%`
Next, suppose we want to see the populations of our five cities.
Let’s read in the `region_data.csv` file
that comes from the 2016 Canadian census,
as it contains statistics for number of households, land area, population
and number of dwellings for different regions.
```
region_data <- read_csv("data/region_data.csv")
region_data
```
```
## # A tibble: 35 × 5
## region households area population dwellings
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Belleville 43002 1355. 103472 45050
## 2 Lethbridge 45696 3047. 117394 48317
## 3 Thunder Bay 52545 2618. 121621 57146
## 4 Peterborough 50533 1637. 121721 55662
## 5 Saint John 52872 3793. 126202 58398
## 6 Brantford 52530 1086. 134203 54419
## 7 Moncton 61769 2625. 144810 66699
## 8 Guelph 59280 604. 151984 63324
## 9 Trois-Rivières 72502 1053. 156042 77734
## 10 Saguenay 72479 3079. 160980 77968
## # ℹ 25 more rows
```
To get the population of the five cities
we can filter the data set using the `%in%` operator.
The `%in%` operator is used to see if an element belongs to a vector.
Here we are filtering for rows where the value in the `region` column
matches any of the five cities we are interested in: Toronto, Montréal,
Vancouver, Calgary, and Edmonton.
```
city_names <- c("Toronto", "Montréal", "Vancouver", "Calgary", "Edmonton")
five_cities <- filter(region_data,
region %in% city_names)
five_cities
```
```
## # A tibble: 5 × 5
## region households area population dwellings
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Edmonton 502143 9858. 1321426 537634
## 2 Calgary 519693 5242. 1392609 544870
## 3 Vancouver 960894 3040. 2463431 1027613
## 4 Montréal 1727310 4638. 4098927 1823281
## 5 Toronto 2135909 6270. 5928040 2235145
```
> **Note:** What’s the difference between `==` and `%in%`? Suppose we have two
> vectors, `vectorA` and `vectorB`. If you type `vectorA == vectorB` into R it
> will compare the vectors element by element. R checks if the first element of
> `vectorA` equals the first element of `vectorB`, the second element of
> `vectorA` equals the second element of `vectorB`, and so on. On the other hand,
> `vectorA %in% vectorB` compares the first element of `vectorA` to all the
> elements in `vectorB`. Then the second element of `vectorA` is compared
> to all the elements in `vectorB`, and so on. Notice the difference between `==` and
> `%in%` in the example below.
>
>
>
> ```
> c("Vancouver", "Toronto") == c("Toronto", "Vancouver")
> ```
>
>
> ```
> ## [1] FALSE FALSE
> ```
>
>
> ```
> c("Vancouver", "Toronto") %in% c("Toronto", "Vancouver")
> ```
>
>
> ```
> ## [1] TRUE TRUE
> ```
### 3\.6\.6 Extracting rows above or below a threshold using `>` and `<`
We saw in Section [3\.6\.3](wrangling.html#filter-and) that
2,669,195 people reported
speaking French in Montréal as their primary language at home.
If we are interested in finding the official languages in regions
with higher numbers of people who speak it as their primary language at home
compared to French in Montréal, then we can use `filter` to obtain rows
where the value of `most_at_home` is greater than
2,669,195\.
We use the `>` symbol to look for values *above* a threshold, and the `<` symbol
to look for values *below* a threshold. The `>=` and `<=` symbols similarly look
for *equal to or above* a threshold and *equal to or below* a
threshold.
```
filter(official_langs, most_at_home > 2669195)
```
```
## # A tibble: 1 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages English Toronto 3836770 3218725
```
`filter` returns a data frame with only one row, indicating that when
considering the official languages,
only English in Toronto is reported by more people
as their primary language at home
than French in Montréal according to the 2016 Canadian census.
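The `>=` operator works the same way but also keeps rows exactly equal to the threshold; a quick sketch based on the `official_langs` values shown above:
```
# French in Montréal (exactly 2,669,195) is now kept along with English in Toronto.
filter(official_langs, most_at_home >= 2669195)
```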
3\.7 Using `mutate` to modify or add columns
--------------------------------------------
### 3\.7\.1 Using `mutate` to modify columns
In Section [3\.4\.3](wrangling.html#separate),
when we first read in the `"region_lang_top5_cities_messy.csv"` data,
all of the variables were “character” data types.
During the tidying process,
we used the `convert` argument from the `separate` function
to convert the `most_at_home` and `most_at_work` columns
to the desired integer (i.e., numeric class) data types.
But suppose we didn’t use the `convert` argument,
and needed to modify the column type some other way.
Below we create such a situation
so that we can demonstrate how to use `mutate`
to change the column types of a data frame.
`mutate` is a useful function to modify or create new data frame columns.
```
lang_messy <- read_csv("data/region_lang_top5_cities_messy.csv")
lang_messy_longer <- pivot_longer(lang_messy,
cols = Toronto:Edmonton,
names_to = "region",
values_to = "value")
tidy_lang_chr <- separate(lang_messy_longer, col = value,
into = c("most_at_home", "most_at_work"),
sep = "/")
official_langs_chr <- filter(tidy_lang_chr, category == "Official languages")
official_langs_chr
```
```
## # A tibble: 10 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <chr> <chr>
## 1 Official languages English Toronto 3836770 3218725
## 2 Official languages English Montréal 620510 412120
## 3 Official languages English Vancouver 1622735 1330555
## 4 Official languages English Calgary 1065070 844740
## 5 Official languages English Edmonton 1050410 792700
## 6 Official languages French Toronto 29800 11940
## 7 Official languages French Montréal 2669195 1607550
## 8 Official languages French Vancouver 8630 3245
## 9 Official languages French Calgary 8630 2140
## 10 Official languages French Edmonton 10950 2520
```
To use `mutate`, again we first specify the data set in the first argument,
and in the following arguments,
we specify the name of the column we want to modify or create
(here `most_at_home` and `most_at_work`), an `=` sign,
and then the function we want to apply (here `as.numeric`).
In the function we want to apply,
we refer directly to the column name upon which we want it to act
(here `most_at_home` and `most_at_work`).
In our example, we are naming the columns the same
names as columns that already exist in the data frame
(“most\_at\_home”, “most\_at\_work”)
and this will cause `mutate` to *overwrite* those columns
(also referred to as modifying those columns *in\-place*).
If we were to give the columns a new name,
then `mutate` would create new columns with the names we specified.
`mutate`’s general syntax is detailed in Figure [3\.14](wrangling.html#fig:img-mutate).
Figure 3\.14: Syntax for the `mutate` function.
Below we use `mutate` to convert the columns `most_at_home` and `most_at_work`
to numeric data types in the `official_langs` data set as described in Figure
[3\.14](wrangling.html#fig:img-mutate):
```
official_langs_numeric <- mutate(official_langs_chr,
most_at_home = as.numeric(most_at_home),
most_at_work = as.numeric(most_at_work)
)
official_langs_numeric
```
```
## # A tibble: 10 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <dbl> <dbl>
## 1 Official languages English Toronto 3836770 3218725
## 2 Official languages English Montréal 620510 412120
## 3 Official languages English Vancouver 1622735 1330555
## 4 Official languages English Calgary 1065070 844740
## 5 Official languages English Edmonton 1050410 792700
## 6 Official languages French Toronto 29800 11940
## 7 Official languages French Montréal 2669195 1607550
## 8 Official languages French Vancouver 8630 3245
## 9 Official languages French Calgary 8630 2140
## 10 Official languages French Edmonton 10950 2520
```
Now we see `<dbl>` appears under the `most_at_home` and `most_at_work` columns,
indicating they are double data types (which is a numeric data type)!
### 3\.7\.2 Using `mutate` to create new columns
We can see in the table that
3,836,770 people reported
speaking English in Toronto as their primary language at home, according to
the 2016 Canadian census. What does this number mean to us? To understand this
number, we need context. In particular, how many people were in Toronto when
this data was collected? From the 2016 Canadian census profile, the population
of Toronto was reported to be
5,928,040 people.
The number of people who report that English is their primary language at home
is much more meaningful when we report it in this context.
We can even go a step further and transform this count to a relative frequency
or proportion.
We can do this by dividing the number of people reporting a given language
as their primary language at home by the number of people who live in Toronto.
For example,
the proportion of people who reported that their primary language at home
was English in the 2016 Canadian census was
0\.65
in Toronto.
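As a quick check of that figure, dividing the English count by the city population (both values quoted above) gives roughly 0.65:
```
3836770 / 5928040 # approximately 0.647
```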
Let’s use `mutate` to create a new column in our data frame
that holds the proportion of people who speak English
for our five cities of focus in this chapter.
To accomplish this, we will need to do two tasks
beforehand:
1. Create a vector containing the population values for the cities.
2. Filter the `official_langs` data frame
so that we only keep the rows where the language is English.
To create a vector containing the population values for the five cities
(Toronto, Montréal, Vancouver, Calgary, Edmonton),
we will use the `c` function (recall that `c` stands for “concatenate”):
```
city_pops <- c(5928040, 4098927, 2463431, 1392609, 1321426)
city_pops
```
```
## [1] 5928040 4098927 2463431 1392609 1321426
```
And next, we will filter the `official_langs` data frame
so that we only keep the rows where the language is English.
We will name the new data frame we get from this `english_langs`:
```
english_langs <- filter(official_langs, language == "English")
english_langs
```
```
## # A tibble: 5 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages English Toronto 3836770 3218725
## 2 Official languages English Montréal 620510 412120
## 3 Official languages English Vancouver 1622735 1330555
## 4 Official languages English Calgary 1065070 844740
## 5 Official languages English Edmonton 1050410 792700
```
Finally, we can use `mutate` to create a new column,
named `most_at_home_proportion`, that will have value that corresponds to
the proportion of people reporting English as their primary
language at home.
We will compute this by dividing the column by our vector of city populations.
```
english_langs <- mutate(english_langs,
most_at_home_proportion = most_at_home / city_pops)
english_langs
```
```
## # A tibble: 5 × 6
## category language region most_at_home most_at_work most_at_home_proport…¹
## <chr> <chr> <chr> <int> <int> <dbl>
## 1 Official lan… English Toron… 3836770 3218725 0.647
## 2 Official lan… English Montr… 620510 412120 0.151
## 3 Official lan… English Vanco… 1622735 1330555 0.659
## 4 Official lan… English Calga… 1065070 844740 0.765
## 5 Official lan… English Edmon… 1050410 792700 0.795
## # ℹ abbreviated name: ¹most_at_home_proportion
```
In the computation above, we had to ensure that we ordered the `city_pops` vector in the
same order as the cities were listed in the `english_langs` data frame.
This is because R will perform the division computation we did by dividing
each element of the `most_at_home` column by each element of the
`city_pops` vector, matching them up by position.
If the orders did not match, each city's count would be divided by the wrong population, silently producing incorrect proportions.
> **Note:** In more advanced data wrangling,
> one might solve this problem in a less error\-prone way though using
> a technique called “joins.”
> We link to resources that discuss this in the additional
> resources at the end of this chapter.
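For illustration only, here is a minimal sketch of that join\-based approach. It assumes a hypothetical `city_pops_df` tibble (not part of this chapter's data) that pairs each region with its population, so that `left_join` matches rows by the `region` column rather than by position:

```
# hypothetical tibble pairing each region with its population
city_pops_df <- tibble(
  region = c("Toronto", "Montréal", "Vancouver", "Calgary", "Edmonton"),
  city_pop = c(5928040, 4098927, 2463431, 1392609, 1321426)
)

english_langs |>
  left_join(city_pops_df, by = "region") |>
  mutate(most_at_home_proportion = most_at_home / city_pop)
```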
3\.8 Combining functions using the pipe operator, `|>`
------------------------------------------------------
In R, we often have to call multiple functions in a sequence to process a data
frame. The basic ways of doing this can become quickly unreadable if there are
many steps. For example, suppose we need to perform three operations on a data
frame called `data`:
1. add a new column `new_col` that is double another `old_col`,
2. filter for rows where another column, `other_col`, is more than 5, and
3. select only the new column `new_col` for those rows.
One way of performing these three steps is to just write
multiple lines of code, storing temporary objects as you go:
```
output_1 <- mutate(data, new_col = old_col * 2)
output_2 <- filter(output_1, other_col > 5)
output <- select(output_2, new_col)
```
This is difficult to understand for multiple reasons. The reader may be tricked
into thinking the named `output_1` and `output_2` objects are important for some
reason, while they are just temporary intermediate computations. Further, the
reader has to look through and find where `output_1` and `output_2` are used in
each subsequent line.
Another option for doing this would be to *compose* the functions:
```
output <- select(filter(mutate(data, new_col = old_col * 2),
other_col > 5),
new_col)
```
Code like this can also be difficult to understand. Functions compose (reading
from left to right) in the *opposite order* in which they are computed by R
(above, `mutate` happens first, then `filter`, then `select`). It is also just a
really long line of code to read in one go.
The *pipe operator* (`|>`) solves this problem, resulting in cleaner and
easier\-to\-follow code. `|>` is built into R so you don’t need to load any
packages to use it.
You can think of the pipe as a physical pipe. It takes the output from the
function on the left\-hand side of the pipe, and passes it as the first argument
to the function on the right\-hand side of the pipe.
The code below accomplishes the same thing as the previous
two code blocks:
```
output <- data |>
mutate(new_col = old_col * 2) |>
filter(other_col > 5) |>
select(new_col)
```
> **Note:** You might also have noticed that we split the function calls across
> lines after the pipe, similar to when we did this earlier in the chapter
> for long function calls. Again, this is allowed and recommended, especially when
> the piped function calls create a long line of code. Doing this makes
> your code more readable. When you do this, it is important to end each line
> with the pipe operator `|>` to tell R that your code is continuing onto the
> next line.
> **Note:** In this textbook, we will be using the base R pipe operator syntax, `|>`.
> This base R `|>` pipe operator was inspired by a previous version of the pipe
> operator, `%>%`. The `%>%` pipe operator is not built into R
> and is from the `magrittr` R package.
> The `tidyverse` metapackage imports the `%>%` pipe operator via `dplyr`
> (which in turn imports the `magrittr` R package).
> There are some other differences between `%>%` and `|>` related to
> more advanced R uses, such as sharing and distributing code as R packages,
> however, these are beyond the scope of this textbook.
> We have this note in the book to make the reader aware that `%>%` exists
> as it is still commonly used in data analysis code and in many data science
> books and other resources.
> In most cases these two pipes are interchangeable and either can be used.
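As a small illustration of that interchangeability, the pipeline from the beginning of this section (with the placeholder data frame `data`) could be written with the `magrittr` pipe instead, assuming the `tidyverse` has been loaded so that `%>%` is available:

```
# the same pipeline as above, written with the magrittr pipe
output <- data %>%
  mutate(new_col = old_col * 2) %>%
  filter(other_col > 5) %>%
  select(new_col)
```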
### 3\.8\.1 Using `|>` to combine `filter` and `select`
Let’s work with the tidy `tidy_lang` data set from Section [3\.4\.3](wrangling.html#separate),
which contains the number of Canadians reporting their primary language at home
and work for five major cities
(Toronto, Montréal, Vancouver, Calgary, and Edmonton):
```
tidy_lang
```
```
## # A tibble: 1,070 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Aboriginal languages Aborigi… Toron… 50 0
## 2 Aboriginal languages Aborigi… Montr… 15 0
## 3 Aboriginal languages Aborigi… Vanco… 15 0
## 4 Aboriginal languages Aborigi… Calga… 5 0
## 5 Aboriginal languages Aborigi… Edmon… 10 0
## 6 Non-Official & Non-Aboriginal lang… Afrikaa… Toron… 265 0
## 7 Non-Official & Non-Aboriginal lang… Afrikaa… Montr… 10 0
## 8 Non-Official & Non-Aboriginal lang… Afrikaa… Vanco… 520 10
## 9 Non-Official & Non-Aboriginal lang… Afrikaa… Calga… 505 15
## 10 Non-Official & Non-Aboriginal lang… Afrikaa… Edmon… 300 0
## # ℹ 1,060 more rows
```
Suppose we want to create a subset of the data with only the languages and
counts of each language spoken most at home for the city of Vancouver. To do
this, we can use the functions `filter` and `select`. First, we use `filter` to
create a data frame called `van_data` that contains only values for Vancouver.
```
van_data <- filter(tidy_lang, region == "Vancouver")
van_data
```
```
## # A tibble: 214 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Aboriginal languages Aborigi… Vanco… 15 0
## 2 Non-Official & Non-Aboriginal lang… Afrikaa… Vanco… 520 10
## 3 Non-Official & Non-Aboriginal lang… Afro-As… Vanco… 10 0
## 4 Non-Official & Non-Aboriginal lang… Akan (T… Vanco… 125 10
## 5 Non-Official & Non-Aboriginal lang… Albanian Vanco… 530 10
## 6 Aboriginal languages Algonqu… Vanco… 0 0
## 7 Aboriginal languages Algonqu… Vanco… 0 0
## 8 Non-Official & Non-Aboriginal lang… America… Vanco… 300 140
## 9 Non-Official & Non-Aboriginal lang… Amharic Vanco… 540 10
## 10 Non-Official & Non-Aboriginal lang… Arabic Vanco… 8680 275
## # ℹ 204 more rows
```
We then use `select` on this data frame to keep only the variables we want:
```
van_data_selected <- select(van_data, language, most_at_home)
van_data_selected
```
```
## # A tibble: 214 × 2
## language most_at_home
## <chr> <int>
## 1 Aboriginal languages, n.o.s. 15
## 2 Afrikaans 520
## 3 Afro-Asiatic languages, n.i.e. 10
## 4 Akan (Twi) 125
## 5 Albanian 530
## 6 Algonquian languages, n.i.e. 0
## 7 Algonquin 0
## 8 American Sign Language 300
## 9 Amharic 540
## 10 Arabic 8680
## # ℹ 204 more rows
```
Although this is valid code, there is a more readable approach we could take by
using the pipe, `|>`. With the pipe, we do not need to create an intermediate
object to store the output from `filter`. Instead, we can directly send the
output of `filter` to the input of `select`:
```
van_data_selected <- filter(tidy_lang, region == "Vancouver") |>
select(language, most_at_home)
van_data_selected
```
```
## # A tibble: 214 × 2
## language most_at_home
## <chr> <int>
## 1 Aboriginal languages, n.o.s. 15
## 2 Afrikaans 520
## 3 Afro-Asiatic languages, n.i.e. 10
## 4 Akan (Twi) 125
## 5 Albanian 530
## 6 Algonquian languages, n.i.e. 0
## 7 Algonquin 0
## 8 American Sign Language 300
## 9 Amharic 540
## 10 Arabic 8680
## # ℹ 204 more rows
```
But wait… Why do the `select` function calls
look different in these two examples?
Remember: when you use the pipe,
the output of the first function is automatically provided
as the first argument for the function that comes after it.
Therefore you do not specify the first argument in that function call.
In the code above,
the pipe passes the left\-hand side (the output of `filter`) to the first argument of the function on the right (`select`),
so in the `select` function you only see the second argument (and beyond).
As you can see, both of these approaches—with and without pipes—give us the same output, but the second
approach is clearer and more readable.
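To make that explicit, the two `select` calls below do exactly the same thing; the pipe simply supplies `van_data` as the first argument:

```
select(van_data, language, most_at_home)

van_data |>
  select(language, most_at_home)
```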
### 3\.8\.2 Using `|>` with more than two functions
The pipe operator (`|>`) can be used with any function in R.
Additionally, we can pipe together more than two functions.
For example, we can pipe together three functions to:
* `filter` rows to include only those where the counts of the language most spoken at home are greater than 10,000,
* `select` only the columns corresponding to `region`, `language` and `most_at_home`, and
* `arrange` the data frame rows in order by counts of the language most spoken at home
from smallest to largest.
As we saw in Chapter [1](intro.html#intro),
we can use the `tidyverse` `arrange` function
to order the rows in the data frame by the values of one or more columns.
Here we pass the column name `most_at_home` to arrange the data frame rows by the values in that column, in ascending order.
```
large_region_lang <- filter(tidy_lang, most_at_home > 10000) |>
select(region, language, most_at_home) |>
arrange(most_at_home)
large_region_lang
```
```
## # A tibble: 67 × 3
## region language most_at_home
## <chr> <chr> <int>
## 1 Edmonton Arabic 10590
## 2 Montréal Tamil 10670
## 3 Vancouver Russian 10795
## 4 Edmonton Spanish 10880
## 5 Edmonton French 10950
## 6 Calgary Arabic 11010
## 7 Calgary Urdu 11060
## 8 Vancouver Hindi 11235
## 9 Montréal Armenian 11835
## 10 Toronto Romanian 12200
## # ℹ 57 more rows
```
You will notice above that we passed `tidy_lang` as the first argument of the `filter` function.
We can also pipe the data frame into the same sequence of functions rather than
using it as the first argument of the first function. These two choices are equivalent,
and we get the same result.
```
large_region_lang <- tidy_lang |>
filter(most_at_home > 10000) |>
select(region, language, most_at_home) |>
arrange(most_at_home)
large_region_lang
```
```
## # A tibble: 67 × 3
## region language most_at_home
## <chr> <chr> <int>
## 1 Edmonton Arabic 10590
## 2 Montréal Tamil 10670
## 3 Vancouver Russian 10795
## 4 Edmonton Spanish 10880
## 5 Edmonton French 10950
## 6 Calgary Arabic 11010
## 7 Calgary Urdu 11060
## 8 Vancouver Hindi 11235
## 9 Montréal Armenian 11835
## 10 Toronto Romanian 12200
## # ℹ 57 more rows
```
Now that we’ve shown you the pipe operator as an alternative to storing
temporary objects and composing code, does this mean you should *never* store
temporary objects or compose code? Not necessarily!
There are times when you will still want to do these things.
For example, you might store a temporary object before feeding it into a plot function
so you can iteratively change the plot without having to
redo all of your data transformations.
Additionally, piping many functions can be overwhelming and difficult to debug;
you may want to store a temporary object midway through to inspect your result
before moving on with further steps.
3\.9 Aggregating data with `summarize` and `map`
------------------------------------------------
### 3\.9\.1 Calculating summary statistics on whole columns
As a part of many data analyses, we need to calculate a summary value for the
data (a *summary statistic*).
Examples of summary statistics we might want to calculate
are the number of observations, the average/mean value for a column,
the minimum value, etc.
Oftentimes,
this summary statistic is calculated from the values in a data frame column,
or columns, as shown in Figure [3\.15](wrangling.html#fig:summarize).
Figure 3\.15: `summarize` is useful for calculating summary statistics on one or more column(s). In its simplest use case, it creates a new data frame with a single row containing the summary statistic(s) for each column being summarized. The darker, top row of each table represents the column headers.
A useful `dplyr` function for calculating summary statistics is `summarize`,
where the first argument is the data frame and subsequent arguments
are the summaries we want to perform.
Here we show how to use the `summarize` function to calculate the minimum
and maximum number of Canadians
reporting a particular language as their primary language at home.
First a reminder of what `region_lang` looks like:
```
region_lang
```
```
## # A tibble: 7,490 × 7
## region category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 St. Joh… Aborigi… Aborigi… 5 0 0 0
## 2 Halifax Aborigi… Aborigi… 5 0 0 0
## 3 Moncton Aborigi… Aborigi… 0 0 0 0
## 4 Saint J… Aborigi… Aborigi… 0 0 0 0
## 5 Saguenay Aborigi… Aborigi… 5 5 0 0
## 6 Québec Aborigi… Aborigi… 0 5 0 20
## 7 Sherbro… Aborigi… Aborigi… 0 0 0 0
## 8 Trois-R… Aborigi… Aborigi… 0 0 0 0
## 9 Montréal Aborigi… Aborigi… 30 15 0 10
## 10 Kingston Aborigi… Aborigi… 0 0 0 0
## # ℹ 7,480 more rows
```
We apply `summarize` to calculate the minimum
and maximum number of Canadians
reporting a particular language as their primary language at home,
for any region:
```
summarize(region_lang,
min_most_at_home = min(most_at_home),
max_most_at_home = max(most_at_home))
```
```
## # A tibble: 1 × 2
## min_most_at_home max_most_at_home
## <dbl> <dbl>
## 1 0 3836770
```
From this we see that there are some languages in the data set that no one speaks
as their primary language at home. We also see that the most commonly spoken
primary language at home is spoken by
3,836,770
people.
### 3\.9\.2 Calculating summary statistics when there are `NA`s
In data frames in R, the value `NA` is often used to denote missing data.
Many of the base R statistical summary functions
(e.g., `max`, `min`, `mean`, `sum`, etc) will return `NA`
when applied to columns containing `NA` values.
Usually that is not what we want to happen;
instead, we would usually like R to ignore the missing entries
and calculate the summary statistic using all of the other non\-`NA` values
in the column.
Fortunately many of these functions provide an argument `na.rm` that lets
us tell the function what to do when it encounters `NA` values.
In particular, if we specify `na.rm = TRUE`, the function will ignore
missing values and return a summary of all the non\-missing entries.
We show an example of this combined with `summarize` below.
First we create a new version of the `region_lang` data frame,
named `region_lang_na`, that has a seemingly innocuous `NA`
in the first row of the `most_at_home` column:
```
region_lang_na
```
```
## # A tibble: 7,490 × 7
## region category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 St. Joh… Aborigi… Aborigi… 5 NA 0 0
## 2 Halifax Aborigi… Aborigi… 5 0 0 0
## 3 Moncton Aborigi… Aborigi… 0 0 0 0
## 4 Saint J… Aborigi… Aborigi… 0 0 0 0
## 5 Saguenay Aborigi… Aborigi… 5 5 0 0
## 6 Québec Aborigi… Aborigi… 0 5 0 20
## 7 Sherbro… Aborigi… Aborigi… 0 0 0 0
## 8 Trois-R… Aborigi… Aborigi… 0 0 0 0
## 9 Montréal Aborigi… Aborigi… 30 15 0 10
## 10 Kingston Aborigi… Aborigi… 0 0 0 0
## # ℹ 7,480 more rows
```
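The code that introduces this `NA` is not shown in the text; purely as an illustrative sketch (not the book's own code), one way to construct such a data frame is to replace the first value of `most_at_home` with `NA`:

```
# replace the first value of most_at_home with NA (illustrative only)
region_lang_na <- region_lang |>
  mutate(most_at_home = replace(most_at_home, 1, NA))
```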
Now if we apply the `summarize` function as above,
we see that we no longer get the minimum and maximum returned,
but just an `NA` instead!
```
summarize(region_lang_na,
min_most_at_home = min(most_at_home),
max_most_at_home = max(most_at_home))
```
```
## # A tibble: 1 × 2
## min_most_at_home max_most_at_home
## <dbl> <dbl>
## 1 NA NA
```
We can fix this by adding the `na.rm = TRUE` as explained above:
```
summarize(region_lang_na,
min_most_at_home = min(most_at_home, na.rm = TRUE),
max_most_at_home = max(most_at_home, na.rm = TRUE))
```
```
## # A tibble: 1 × 2
## min_most_at_home max_most_at_home
## <dbl> <dbl>
## 1 0 3836770
```
### 3\.9\.3 Calculating summary statistics for groups of rows
A common pairing with `summarize` is `group_by`. Pairing these functions
together can let you summarize values for subgroups within a data set,
as illustrated in Figure [3\.16](wrangling.html#fig:summarize-groupby).
For example, we can use `group_by` to group the regions of the `region_lang` data frame and then calculate the minimum and maximum number of Canadians
reporting the language as the primary language at home
for each of the regions in the data set.
Figure 3\.16: `summarize` and `group_by` is useful for calculating summary statistics on one or more column(s) for each group. It creates a new data frame—with one row for each group—containing the summary statistic(s) for each column being summarized. It also creates a column listing the value of the grouping variable. The darker, top row of each table represents the column headers. The orange, blue, and green colored rows correspond to the rows that belong to each of the three groups being represented in this cartoon example.
The `group_by` function takes at least two arguments. The first is the data
frame that will be grouped, and the second and onwards are columns to use in the
grouping. Here we use only one column for grouping (`region`), but more than one
can also be used. To do this, list additional columns separated by commas.
```
group_by(region_lang, region) |>
summarize(
min_most_at_home = min(most_at_home),
max_most_at_home = max(most_at_home)
)
```
```
## # A tibble: 35 × 3
## region min_most_at_home max_most_at_home
## <chr> <dbl> <dbl>
## 1 Abbotsford - Mission 0 137445
## 2 Barrie 0 182390
## 3 Belleville 0 97840
## 4 Brantford 0 124560
## 5 Calgary 0 1065070
## 6 Edmonton 0 1050410
## 7 Greater Sudbury 0 133960
## 8 Guelph 0 130950
## 9 Halifax 0 371215
## 10 Hamilton 0 630380
## # ℹ 25 more rows
```
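As noted above, more than one column can be passed to `group_by`. For example, to compute the same summaries for each combination of region and language category, we could write the following (a sketch of the syntax; output not shown):

```
group_by(region_lang, region, category) |>
  summarize(
    min_most_at_home = min(most_at_home),
    max_most_at_home = max(most_at_home)
  )
```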
Notice that `group_by` on its own doesn’t change the way the data looks.
In the output below, the grouped data set looks the same,
and it doesn’t *appear* to be grouped by `region`.
Instead, `group_by` simply changes how other functions work with the data,
as we saw with `summarize` above.
```
group_by(region_lang, region)
```
```
## # A tibble: 7,490 × 7
## # Groups: region [35]
## region category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 St. Joh… Aborigi… Aborigi… 5 0 0 0
## 2 Halifax Aborigi… Aborigi… 5 0 0 0
## 3 Moncton Aborigi… Aborigi… 0 0 0 0
## 4 Saint J… Aborigi… Aborigi… 0 0 0 0
## 5 Saguenay Aborigi… Aborigi… 5 5 0 0
## 6 Québec Aborigi… Aborigi… 0 5 0 20
## 7 Sherbro… Aborigi… Aborigi… 0 0 0 0
## 8 Trois-R… Aborigi… Aborigi… 0 0 0 0
## 9 Montréal Aborigi… Aborigi… 30 15 0 10
## 10 Kingston Aborigi… Aborigi… 0 0 0 0
## # ℹ 7,480 more rows
```
### 3\.9\.4 Calculating summary statistics on many columns
Sometimes we need to summarize statistics across many columns.
An example of this is illustrated in Figure [3\.17](wrangling.html#fig:summarize-across).
In such a case, using `summarize` alone means that we have to
type out the name of each column we want to summarize.
In this section we will meet two strategies for performing this task.
First we will see how we can do this using `summarize` \+ `across`.
Then we will also explore how we can use a more general iteration function,
`map`, to also accomplish this.
Figure 3\.17: `summarize` \+ `across` or `map` is useful for efficiently calculating summary statistics on many columns at once. The darker, top row of each table represents the column headers.
#### `summarize` and `across` for calculating summary statistics on many columns
To summarize statistics across many columns, we could use the `summarize`
function we just learned about; however, as noted above, that approach
requires typing out the name of each column we want to summarize.
To do this more efficiently, we can pair `summarize` with `across`
and use a colon `:` to specify a range of columns we would like
to perform the statistical summaries on.
Here we demonstrate finding the maximum value
of each of the numeric
columns of the `region_lang` data set.
```
region_lang |>
summarize(across(mother_tongue:lang_known, max))
```
```
## # A tibble: 1 × 4
## mother_tongue most_at_home most_at_work lang_known
## <dbl> <dbl> <dbl> <dbl>
## 1 3061820 3836770 3218725 5600480
```
> **Note:** Similar to when we use base R statistical summary functions
> (e.g., `max`, `min`, `mean`, `sum`, etc) with `summarize` alone,
> the use of the `summarize` \+ `across` functions paired
> with base R statistical summary functions
> also return `NA`s when we apply them to columns that
> contain `NA`s in the data frame.
>
>
> To resolve this issue, again we need to add the argument `na.rm = TRUE`.
> But in this case we need to use it a little bit differently:
> we write a `~`, and then call the summary function
> with the first argument `.x` and the second argument `na.rm = TRUE`.
> For example, for the previous example with the `max` function, we would write
>
>
>
> ```
> region_lang_na |>
> summarize(across(mother_tongue:lang_known, ~ max(.x, na.rm = TRUE)))
> ```
>
>
> ```
> ## # A tibble: 1 × 4
> ## mother_tongue most_at_home most_at_work lang_known
> ## <dbl> <dbl> <dbl> <dbl>
> ## 1 3061820 3836770 3218725 5600480
> ```
>
> The meaning of this unusual syntax is a bit beyond the scope of this book,
> but interested readers can look up *anonymous functions* in the `purrr`
> package from `tidyverse`.
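For readers using R 4.1 or later, the same computation can also be written with R's built\-in anonymous function syntax instead of the `~`/`.x` shorthand; this is just an alternative spelling of the same idea:

```
region_lang_na |>
  summarize(across(mother_tongue:lang_known, \(x) max(x, na.rm = TRUE)))
```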
#### `map` for calculating summary statistics on many columns
An alternative to `summarize` and `across`
for applying a function to many columns is the `map` family of functions.
Let’s again find the maximum value of each column of the
`region_lang` data frame, but using `map` with the `max` function this time.
`map` takes two arguments:
an object (a vector, data frame or list) that you want to apply the function to,
and the function that you would like to apply to each column.
Note that `map` does not have an argument
to specify *which* columns to apply the function to.
Therefore, we will use the `select` function before calling `map`
to choose the columns for which we want the maximum.
```
region_lang |>
select(mother_tongue:lang_known) |>
map(max)
```
```
## $mother_tongue
## [1] 3061820
##
## $most_at_home
## [1] 3836770
##
## $most_at_work
## [1] 3218725
##
## $lang_known
## [1] 5600480
```
> **Note:** The `map` function comes from the `purrr` package. But since
> `purrr` is part of the tidyverse, once we call `library(tidyverse)` we
> do not need to load the `purrr` package separately.
The output looks a bit weird… we passed in a data frame, but the output
doesn’t look like a data frame. As it so happens, it is *not* a data frame, but
rather a plain list:
```
region_lang |>
select(mother_tongue:lang_known) |>
map(max) |>
typeof()
```
```
## [1] "list"
```
So what do we do? Should we convert this to a data frame? We could, but a
simpler alternative is to just use a different `map` function. There
are quite a few to choose from; they all work similarly, but
their names reflect the type of output you want from the mapping operation.
Table [3\.3](wrangling.html#tab:map-table) lists the commonly used `map` functions as well
as their output type.
Table 3\.3: The `map` functions in R.
| `map` function | Output |
| --- | --- |
| `map` | list |
| `map_lgl` | logical vector |
| `map_int` | integer vector |
| `map_dbl` | double vector |
| `map_chr` | character vector |
| `map_dfc` | data frame, combining column\-wise |
| `map_dfr` | data frame, combining row\-wise |
Let’s get the columns’ maximums again, but this time use the `map_dfr` function
to return the output as a data frame:
```
region_lang |>
select(mother_tongue:lang_known) |>
map_dfr(max)
```
```
## # A tibble: 1 × 4
## mother_tongue most_at_home most_at_work lang_known
## <dbl> <dbl> <dbl> <dbl>
## 1 3061820 3836770 3218725 5600480
```
> **Note:** Similar to when we use base R statistical summary functions
> (e.g., `max`, `min`, `mean`, `sum`, etc.) with `summarize`,
> `map` functions paired with base R statistical summary functions
> also return `NA` values when we apply them to columns that
> contain `NA` values.
>
>
> To avoid this, again we need to add the argument `na.rm = TRUE`.
> When we use this with `map`, we do this by adding a `,`
> and then `na.rm = TRUE` after specifying the function, as illustrated below:
>
>
>
> ```
> region_lang_na |>
> select(mother_tongue:lang_known) |>
> map_dfr(max, na.rm = TRUE)
> ```
>
>
> ```
> ## # A tibble: 1 × 4
> ## mother_tongue most_at_home most_at_work lang_known
> ## <dbl> <dbl> <dbl> <dbl>
> ## 1 3061820 3836770 3218725 5600480
> ```
The `map` functions are generally quite useful for solving many problems
involving repeatedly applying functions in R.
Additionally, their use is not limited to columns of a data frame;
`map` family functions can be used to apply functions to elements of a vector,
or a list, and even to lists of (nested!) data frames.
To learn more about the `map` functions, see the additional resources
section at the end of this chapter.
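For instance, here is a small sketch (with made\-up numbers purely for illustration) of `map_dbl` applied to the elements of a plain list rather than to data frame columns:

```
# a list of numeric vectors (illustrative values only)
counts <- list(a = c(1, 5, 10), b = c(2, 4, 6, 8))

# compute the mean of each list element, returning a named double vector
counts |>
  map_dbl(mean)
```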
3\.10 Apply functions across many columns with `mutate` and `across`
--------------------------------------------------------------------
Sometimes we need to apply a function to many columns in a data frame.
For example, we would need to do this when converting units of measurements across many columns.
We illustrate such a data transformation in Figure [3\.18](wrangling.html#fig:mutate-across).
Figure 3\.18: `mutate` and `across` is useful for applying functions across many columns. The darker, top row of each table represents the column headers.
For example,
imagine that we wanted to convert all the numeric columns
in the `region_lang` data frame from double type to integer type
using the `as.integer` function.
When we revisit the `region_lang` data frame,
we can see that this would be the columns from `mother_tongue` to `lang_known`.
```
region_lang
```
```
## # A tibble: 7,490 × 7
## region category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 St. Joh… Aborigi… Aborigi… 5 0 0 0
## 2 Halifax Aborigi… Aborigi… 5 0 0 0
## 3 Moncton Aborigi… Aborigi… 0 0 0 0
## 4 Saint J… Aborigi… Aborigi… 0 0 0 0
## 5 Saguenay Aborigi… Aborigi… 5 5 0 0
## 6 Québec Aborigi… Aborigi… 0 5 0 20
## 7 Sherbro… Aborigi… Aborigi… 0 0 0 0
## 8 Trois-R… Aborigi… Aborigi… 0 0 0 0
## 9 Montréal Aborigi… Aborigi… 30 15 0 10
## 10 Kingston Aborigi… Aborigi… 0 0 0 0
## # ℹ 7,480 more rows
```
To accomplish such a task, we can use `mutate` paired with `across`.
This works in a similar way for column selection,
as we saw when we used `summarize` \+ `across` earlier.
As we did above,
we again use `across` to specify the columns using `select` syntax
as well as the function we want to apply on the specified columns.
However, a key difference here is that we are using `mutate`,
which means that we get back a data frame with the same number of columns and rows.
The only thing that changes is the transformation we applied
to the specified columns (here `mother_tongue` to `lang_known`).
```
region_lang |>
mutate(across(mother_tongue:lang_known, as.integer))
```
```
## # A tibble: 7,490 × 7
## region category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <chr> <int> <int> <int> <int>
## 1 St. Joh… Aborigi… Aborigi… 5 0 0 0
## 2 Halifax Aborigi… Aborigi… 5 0 0 0
## 3 Moncton Aborigi… Aborigi… 0 0 0 0
## 4 Saint J… Aborigi… Aborigi… 0 0 0 0
## 5 Saguenay Aborigi… Aborigi… 5 5 0 0
## 6 Québec Aborigi… Aborigi… 0 5 0 20
## 7 Sherbro… Aborigi… Aborigi… 0 0 0 0
## 8 Trois-R… Aborigi… Aborigi… 0 0 0 0
## 9 Montréal Aborigi… Aborigi… 30 15 0 10
## 10 Kingston Aborigi… Aborigi… 0 0 0 0
## # ℹ 7,480 more rows
```
3\.11 Apply functions across columns within one row with `rowwise` and `mutate`
-------------------------------------------------------------------------------
What if you want to apply a function across columns but within one row?
We illustrate such a data transformation in Figure [3\.19](wrangling.html#fig:rowwise).
Figure 3\.19: `rowwise` and `mutate` is useful for applying functions across columns within one row. The darker, top row of each table represents the column headers.
For instance, suppose we want to know the maximum value between `mother_tongue`,
`most_at_home`, `most_at_work`
and `lang_known` for each language and region
in the `region_lang` data set.
In other words, we want to apply the `max` function *row\-wise.*
We will use the (aptly named) `rowwise` function in combination with `mutate`
to accomplish this task.
Before we apply `rowwise`, we will `select` only the count columns
so we can see all the columns in the data frame’s output easily in the book.
So for this demonstration, the data set we are operating on looks like this:
```
region_lang |>
select(mother_tongue:lang_known)
```
```
## # A tibble: 7,490 × 4
## mother_tongue most_at_home most_at_work lang_known
## <dbl> <dbl> <dbl> <dbl>
## 1 5 0 0 0
## 2 5 0 0 0
## 3 0 0 0 0
## 4 0 0 0 0
## 5 5 5 0 0
## 6 0 5 0 20
## 7 0 0 0 0
## 8 0 0 0 0
## 9 30 15 0 10
## 10 0 0 0 0
## # ℹ 7,480 more rows
```
Now we apply `rowwise` before `mutate`, to tell R that we would like
the mutate function to be applied across, and within, a row,
as opposed to being applied on a column
(which is the default behavior of `mutate`):
```
region_lang |>
select(mother_tongue:lang_known) |>
rowwise() |>
mutate(maximum = max(c(mother_tongue,
most_at_home,
most_at_work,
lang_known)))
```
```
## # A tibble: 7,490 × 5
## # Rowwise:
## mother_tongue most_at_home most_at_work lang_known maximum
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 5 0 0 0 5
## 2 5 0 0 0 5
## 3 0 0 0 0 0
## 4 0 0 0 0 0
## 5 5 5 0 0 5
## 6 0 5 0 20 20
## 7 0 0 0 0 0
## 8 0 0 0 0 0
## 9 30 15 0 10 30
## 10 0 0 0 0 0
## # ℹ 7,480 more rows
```
We see that we get an additional column added to the data frame,
named `maximum`, which is the maximum value between `mother_tongue`,
`most_at_home`, `most_at_work` and `lang_known` for each language
and region.
Similar to `group_by`,
`rowwise` doesn’t appear to do anything when it is called by itself.
However, we can apply `rowwise` in combination
with other functions to change how these other functions operate on the data.
Notice if we used `mutate` without `rowwise`,
we would have computed the maximum value across *all* rows
rather than the maximum value for *each* row.
Below we show what would have happened had we not used
`rowwise`. In particular, the same maximum value is reported
in every single row; this code does not provide the desired result.
```
region_lang |>
select(mother_tongue:lang_known) |>
mutate(maximum = max(c(mother_tongue,
most_at_home,
                         most_at_work,
lang_known)))
```
```
## # A tibble: 7,490 × 5
## mother_tongue most_at_home most_at_work lang_known maximum
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 5 0 0 0 5600480
## 2 5 0 0 0 5600480
## 3 0 0 0 0 5600480
## 4 0 0 0 0 5600480
## 5 5 5 0 0 5600480
## 6 0 5 0 20 5600480
## 7 0 0 0 0 5600480
## 8 0 0 0 0 5600480
## 9 30 15 0 10 5600480
## 10 0 0 0 0 5600480
## # ℹ 7,480 more rows
```
3\.12 Summary
-------------
Cleaning and wrangling data can be a very time\-consuming process. However,
it is a critical step in any data analysis. We have explored many different
functions for cleaning and wrangling data into a tidy format.
Table [3\.4](wrangling.html#tab:summary-functions-table) summarizes some of the key wrangling
functions we learned in this chapter. In the following chapters, you will
learn how you can take this tidy data and do so much more with it to answer your
burning data science questions!
Table 3\.4: Summary of wrangling functions
| Function | Description |
| --- | --- |
| `across` | allows you to apply function(s) to multiple columns |
| `filter` | subsets rows of a data frame |
| `group_by` | allows you to apply function(s) to groups of rows |
| `mutate` | adds or modifies columns in a data frame |
| `map` | general iteration function |
| `pivot_longer` | generally makes the data frame longer and narrower |
| `pivot_wider` | generally makes a data frame wider and decreases the number of rows |
| `rowwise` | applies functions across columns within one row |
| `separate` | splits up a character column into multiple columns |
| `select` | subsets columns of a data frame |
| `summarize` | calculates summaries of inputs |
3\.13 Exercises
---------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Cleaning and wrangling data” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
3\.14 Additional resources
--------------------------
* As we mentioned earlier, `tidyverse` is actually an *R
meta package*: it installs and loads a collection of R packages that all
follow the tidy data philosophy we discussed above. One of the `tidyverse`
packages is `dplyr`—a data wrangling workhorse. You have already met many
of `dplyr`’s functions
(`select`, `filter`, `mutate`, `arrange`, `summarize`, and `group_by`).
To learn more about these functions and meet a few more useful
functions, we recommend you check out Chapters 5\-9 of the [STAT545 online notes](https://stat545.com/), the data wrangling, exploration, and analysis with R book.
* The [`dplyr` R package documentation](https://dplyr.tidyverse.org/) ([Wickham, François, et al. 2021](#ref-dplyr)) is
another resource to learn more about the functions in this
chapter, the full set of arguments you can use, and other related functions.
The site also provides a very nice cheat sheet that summarizes many of the
data wrangling functions from this chapter.
* Check out the [`tidyselect` R package page](https://tidyselect.r-lib.org/index.html)
([Henry and Wickham 2021](#ref-tidyselect)) for a comprehensive list of `select` helpers.
These helpers can be used to choose columns in a data frame when paired with the `select` function
(and other functions that use the `tidyselect` syntax, such as `pivot_longer`).
The [documentation for `select` helpers](https://tidyselect.r-lib.org/reference/select_helpers.html)
is a useful reference to find the helper you need for your particular problem.
* *R for Data Science* ([Wickham and Grolemund 2016](#ref-wickham2016r)) has a few chapters related to
data wrangling that go into more depth than this book. For example, the
[tidy data chapter](https://r4ds.had.co.nz/tidy-data.html) covers tidy data,
`pivot_longer`/`pivot_wider` and `separate`, but also covers missing values
and additional wrangling functions (like `unite`). The [data
transformation chapter](https://r4ds.had.co.nz/transform.html) covers
`select`, `filter`, `arrange`, `mutate`, and `summarize`. And the [`map`
functions chapter](https://r4ds.had.co.nz/iteration.html#the-map-functions)
provides more about the `map` functions.
* You will occasionally encounter a case where you need to iterate over items
in a data frame, but none of the above functions are flexible enough to do
what you want. In that case, you may consider using [a for
loop](https://r4ds.had.co.nz/iteration.html#iteration).
3\.1 Overview
-------------
This chapter is centered around defining tidy data—a data format that is
suitable for analysis—and the tools needed to transform raw data into this
format. This will be presented in the context of a real\-world data science
application, providing more practice working through a whole case study.
3\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Define the term “tidy data”.
* Discuss the advantages of storing data in a tidy data format.
* Define what vectors, lists, and data frames are in R, and describe how they relate to
each other.
* Describe the common types of data in R and their uses.
* Use the following functions for their intended data wrangling tasks:
+ `c`
+ `pivot_longer`
+ `pivot_wider`
+ `separate`
+ `select`
+ `filter`
+ `mutate`
+ `summarize`
+ `map`
+ `group_by`
+ `across`
+ `rowwise`
* Use the following operators for their intended data wrangling tasks:
+ `==`, `!=`, `<`, `<=`, `>`, and `>=`
+ `%in%`
+ `!`, `&`, and `|`
+ `|>` and `%>%`
3\.3 Data frames, vectors, and lists
------------------------------------
In Chapters [1](intro.html#intro) and [2](reading.html#reading), *data frames* were the focus:
we learned how to import data into R as a data frame, and perform basic operations on data frames in R.
In the remainder of this book, this pattern continues. The vast majority of tools we use will require
that data are represented as a data frame in R. Therefore, in this section,
we will dig more deeply into what data frames are and how they are represented in R.
This knowledge will be helpful in effectively utilizing these objects in our data analyses.
### 3\.3\.1 What is a data frame?
A data frame is a table\-like structure for storing data in R. Data frames are
important to learn about because most data that you will encounter in practice
can be naturally stored as a table. In order to define data frames precisely,
we need to introduce a few technical terms:
* **variable:** a characteristic, number, or quantity that can be measured.
* **observation:** all of the measurements for a given entity.
* **value:** a single measurement of a single variable for a given entity.
Given these definitions, a **data frame** is a tabular data structure in R
that is designed to store observations, variables, and their values.
Most commonly, each column in a data frame corresponds to a variable,
and each row corresponds to an observation. For example, Figure
[3\.1](wrangling.html#fig:02-obs) displays a data set of city populations. Here, the variables
are “region, year, population”; each of these are properties that can be
collected or measured. The first observation is “Toronto, 2016, 2235145”;
these are the values that the three variables take for the first entity in the
data set. There are 13 entities in the data set in total, corresponding to the
13 rows in Figure [3\.1](wrangling.html#fig:02-obs).
Figure 3\.1: A data frame storing data regarding the population of various regions in Canada. In this example data frame, the row that corresponds to the observation for the city of Vancouver is colored yellow, and the column that corresponds to the population variable is colored blue.
R stores the columns of a data frame as either
*lists* or *vectors*. For example, the data frame in Figure
[3\.2](wrangling.html#fig:02-vectors) has three vectors whose names are `region`, `year` and
`population`. The next two sections will explain what lists and vectors are.
Figure 3\.2: Data frame with three vectors.
### 3\.3\.2 What is a vector?
In R, **vectors** are objects that can contain one or more elements. The vector
elements are ordered, and they must all be of the same **data type**;
R has several different basic data types, as shown in Table [3\.1](wrangling.html#tab:datatype-table).
Figure [3\.3](wrangling.html#fig:02-vector) provides an example of a vector where all of the elements are
of character type.
You can create vectors in R using the `c` function (`c` stands for “concatenate”). For
example, to create the vector `region` as shown in Figure
[3\.3](wrangling.html#fig:02-vector), you would write:
```
region <- c("Toronto", "Montreal", "Vancouver", "Calgary", "Ottawa")
region
```
```
## [1] "Toronto" "Montreal" "Vancouver" "Calgary" "Ottawa"
```
> **Note:** Technically, these objects are called “atomic vectors.” In this book
> we have chosen to call them “vectors,” which is how they are most commonly
> referred to in the R community. To be totally precise, “vector” is an umbrella term that
> encompasses both atomic vector and list objects in R. But this creates a
> confusing situation where the term “vector” could
> mean “atomic vector” *or* “the umbrella term for atomic vector and list,”
> depending on context. Very confusing indeed! So to keep things simple, in
> this book we *always* use the term “vector” to refer to “atomic vector.”
> We encourage readers who are enthusiastic to learn more to read the
> Vectors chapter of *Advanced R* ([Wickham 2019](#ref-wickham2019advanced)).
Figure 3\.3: Example of a vector whose type is character.
Table 3\.1: Basic data types in R
| Data type | Abbreviation | Description | Example |
| --- | --- | --- | --- |
| character | chr | letters or numbers surrounded by quotes | “1” , “Hello world!” |
| double | dbl | numbers with decimals values | 1\.2333 |
| integer | int | numbers that do not contain decimals | 1L, 20L (where “L” tells R to store as an integer) |
| logical | lgl | either true or false | `TRUE`, `FALSE` |
| factor | fct | used to represent data with a limited number of values (usually categories) | a `color` variable with levels `red`, `green` and `orange` |
It is important in R to make sure you represent your data with the correct type.
Many of the `tidyverse` functions we use in this book treat
the various data types differently. You should use integers and double types
(which both fall under the “numeric” umbrella type) to represent numbers and perform
arithmetic. Doubles are more common than integers in R, though; for instance, a double data type is the
default when you create a vector of numbers using `c()`, and when you read in
whole numbers via `read_csv`. Characters are used to represent data that should
be thought of as “text”, such as words, names, paths, URLs, and more. Factors help us
encode variables that represent *categories*; a factor variable takes one of a discrete
set of values known as *levels* (one for each category). The levels can be ordered or unordered. Even though
factors can sometimes *look* like characters, they are not used to represent
text, words, names, and paths in the way that characters are; in fact, R
internally stores factors using integers! There are other basic data types in R, such as *raw*
and *complex*, but we do not use these in this textbook.
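One quick way to check which basic type a vector has is the base R `typeof` function:

```
typeof(c(1, 2, 3)) # returns "double"; numbers default to double
typeof(c(1L, 2L, 3L)) # returns "integer"; the L suffix creates integers
typeof(c("Hello", "world!")) # returns "character"
```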
### 3\.3\.3 What is a list?
Lists are also objects in R that have multiple, ordered elements.
Vectors and lists differ by the requirement of element type
consistency. All elements within a single vector must be of the same type (e.g.,
all elements are characters), whereas elements within a single list can be of
different types (e.g., characters, integers, logicals, and even other lists). See Figure [3\.4](wrangling.html#fig:02-vec-vs-list).
Figure 3\.4: A vector versus a list.
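As a small illustration, the `list` function creates a list and, unlike `c`, it happily mixes element types:

```
mixed_list <- list("Toronto", 2016L, TRUE)
mixed_list
```

```
## [[1]]
## [1] "Toronto"
## 
## [[2]]
## [1] 2016
## 
## [[3]]
## [1] TRUE
```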
### 3\.3\.4 What does this have to do with data frames?
A data frame is really a special kind of list that follows two rules:
1. Each element itself must either be a vector or a list.
2. Each element (vector or list) must have the same length.
Not all columns in a data frame need to be of the same type.
Figure [3\.5](wrangling.html#fig:02-dataframe) shows a data frame where
the columns are vectors of different types.
But remember: because the columns in this example are *vectors*,
the elements must be the same data type *within each column.*
On the other hand, if our data frame had *list* columns, there would be no such requirement.
It is generally much more common to use *vector* columns, though,
as the values for a single variable are usually all of the same type.
Figure 3\.5: Data frame and vector types.
Data frames are actually included in R itself, without the need for any additional packages. However, the
`tidyverse` functions that we use
throughout this book all work with a special kind of data frame called a *tibble*. Tibbles have some additional
features and benefits over built\-in data frames in R. These include the
ability to add useful attributes (such as grouping, which we will discuss later)
and more predictable type preservation when subsetting.
Because a tibble is just a data frame with some added features,
we will collectively refer to both built\-in R data frames and
tibbles as *data frames* in this book.
> **Note:** You can use the function `class` on a data object to assess whether a data
> frame is a built\-in R data frame or a tibble. If the data object is a data
> frame, `class` will return `"data.frame"`. If the data object is a
> tibble it will return `"tbl_df" "tbl" "data.frame"`. You can easily convert
> built\-in R data frames to tibbles using the `tidyverse` `as_tibble` function.
> For example we can check the class of the Canadian languages data set,
> `can_lang`, we worked with in the previous chapters and we see it is a tibble.
>
>
>
> ```
> class(can_lang)
> ```
>
>
> ```
> ## [1] "spec_tbl_df" "tbl_df" "tbl" "data.frame"
> ```
Vectors, data frames and lists are basic types of *data structure* in R, which
are core to most data analyses. We summarize them in Table
[3\.2](wrangling.html#tab:datastructure-table). There are several other data structures in the R programming
language (*e.g.,* matrices), but these are beyond the scope of this book.
Table 3\.2: Basic data structures in R
| Data Structure | Description |
| --- | --- |
| vector | An ordered collection of one, or more, values of the *same data type*. |
| list | An ordered collection of one, or more, values of *possibly different data types*. |
| data frame | A list of either vectors or lists of the *same length*, with column names. We typically use a data frame to represent a data set. |
3\.4 Tidy data
--------------
There are many ways a tabular data set can be organized. This chapter will focus
on introducing the **tidy data** format of organization and how to make your raw
(and likely messy) data tidy. A tidy data frame satisfies
the following three criteria ([Wickham 2014](#ref-wickham2014tidy)):
* each row is a single observation,
* each column is a single variable, and
* each value is a single cell (i.e., its entry in the data
frame is not shared with another value).
Figure [3\.6](wrangling.html#fig:02-tidy-image) demonstrates a tidy data set that satisfies these
three criteria.
Figure 3\.6: Tidy data satisfies three criteria.
There are many good reasons for making sure your data are tidy as a first step in your analysis.
The most important is that it is a single, consistent format that nearly every function
in the `tidyverse` recognizes. No matter what the variables and observations
in your data represent, as long as the data frame
is tidy, you can manipulate it, plot it, and analyze it using the same tools.
If your data is *not* tidy, you will have to write special bespoke code
in your analysis that will not only be error\-prone, but hard for others to understand.
Beyond making your analysis more accessible to others and less error\-prone, tidy data
is also typically easy for humans to interpret. Given these benefits,
it is well worth spending the time to get your data into a tidy format
upfront. Fortunately, there are many well\-designed `tidyverse` data
cleaning/wrangling tools to help you easily tidy your data. Let’s explore them
below!
> **Note:** Is there only one shape for tidy data for a given data set? Not
> necessarily! It depends on the statistical question you are asking and what
> the variables are for that question. For tidy data, each variable should be
> its own column. So, just as it’s essential to match your statistical question
> with the appropriate data analysis tool, it’s important to match your
> statistical question with the appropriate variables and ensure they are
> represented as individual columns to make the data tidy.
### 3\.4\.1 Tidying up: going from wide to long using `pivot_longer`
One task that is commonly performed to get data into a tidy format
is to combine values that are stored in separate columns,
but are really part of the same variable, into one.
Data is often stored this way
because this format is sometimes more intuitive for humans to read
and understand, and humans are the ones who create data sets.
In Figure [3\.7](wrangling.html#fig:02-wide-to-long),
the table on the left is in an untidy, “wide” format because the year values
(2006, 2011, 2016\) are stored as column names.
And as a consequence,
the values for population for the various cities
over these years are also split across several columns.
For humans, this table is easy to read, which is why you will often find data
stored in this wide format. However, this format is difficult to work with
when performing data visualization or statistical analysis using R. For
example, if we wanted to find the latest year it would be challenging because
the year values are stored as column names instead of as values in a single
column. So before we could apply a function to find the latest year (for
example, by using `max`), we would have to first extract the column names
to get them as a vector and then apply a function to extract the latest year.
The problem only gets worse if you would like to find the value for the
population for a given region for the latest year. Both of these tasks are
greatly simplified once the data is tidied.
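For instance, here is a sketch of the wide-format approach just described. The `wide_pop` data frame below is a hypothetical table shaped like the wide population table described above (the population values are placeholders, not real census numbers):

```
library(tidyverse)
# A hypothetical wide table: one "city" column followed by one column per year.
wide_pop <- tibble(
  city = c("Toronto", "Montréal", "Vancouver"),
  `2006` = c(1, 2, 3),
  `2011` = c(4, 5, 6),
  `2016` = c(7, 8, 9)
)
# To find the latest year we must work with the column names themselves.
year_columns <- colnames(wide_pop)[-1]   # drop the "city" column name
max(as.numeric(year_columns))            # returns 2016
```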
Another problem with data in this format is that we don’t know what the
numbers under each year actually represent. Do those numbers represent
population size? Land area? It’s not clear.
To solve both of these problems,
we can reshape this data set to a tidy data format
by creating a column called “year” and a column called
“population.” This transformation—which makes the data
“longer”—is shown as the right table in
Figure [3\.7](wrangling.html#fig:02-wide-to-long).
Figure 3\.7: Pivoting data from a wide to long data format.
We can achieve this effect in R using the `pivot_longer` function from the `tidyverse` package.
The `pivot_longer` function combines columns,
and is usually used during tidying data
when we need to make the data frame longer and narrower.
To learn how to use `pivot_longer`, we will work through an example with the
`region_lang_top5_cities_wide.csv` data set. This data set contains the
counts of how many Canadians cited each language as their mother tongue for five
major Canadian cities (Toronto, Montréal, Vancouver, Calgary, and Edmonton) from
the 2016 Canadian census.
To get started,
we will load the `tidyverse` package and use `read_csv` to load the (untidy) data.
```
library(tidyverse)
lang_wide <- read_csv("data/region_lang_top5_cities_wide.csv")
lang_wide
```
```
## # A tibble: 214 × 7
## category language Toronto Montréal Vancouver Calgary Edmonton
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal languages Aborigi… 80 30 70 20 25
## 2 Non-Official & Non-Abor… Afrikaa… 985 90 1435 960 575
## 3 Non-Official & Non-Abor… Afro-As… 360 240 45 45 65
## 4 Non-Official & Non-Abor… Akan (T… 8485 1015 400 705 885
## 5 Non-Official & Non-Abor… Albanian 13260 2450 1090 1365 770
## 6 Aboriginal languages Algonqu… 5 5 0 0 0
## 7 Aboriginal languages Algonqu… 5 30 5 5 0
## 8 Non-Official & Non-Abor… America… 470 50 265 100 180
## 9 Non-Official & Non-Abor… Amharic 7460 665 1140 4075 2515
## 10 Non-Official & Non-Abor… Arabic 85175 151955 14320 18965 17525
## # ℹ 204 more rows
```
What is wrong with the untidy format above?
The table on the left in Figure [3\.8](wrangling.html#fig:img-pivot-longer-with-table)
represents the data in the “wide” (messy) format.
From a data analysis perspective, this format is not ideal because the values of
the variable *region* (Toronto, Montréal, Vancouver, Calgary, and Edmonton)
are stored as column names. Thus they
are not easily accessible to the data analysis functions we will apply
to our data set. Additionally, the *mother tongue* variable values are
spread across multiple columns, which will prevent us from doing any desired
visualization or statistical tasks until we combine them into one column. For
instance, suppose we want to know the languages with the highest number of
Canadians reporting it as their mother tongue among all five regions. This
question would be tough to answer with the data in its current format.
We *could* find the answer with the data in this format,
though it would be much easier to answer if we tidy our
data first. If mother tongue were instead stored as one column,
as shown in the tidy data on the right in
Figure [3\.8](wrangling.html#fig:img-pivot-longer-with-table),
we could simply use the `max` function in one line of code
to get the maximum value.
Figure 3\.8: Going from wide to long with the `pivot_longer` function.
Figure [3\.9](wrangling.html#fig:img-pivot-longer) details the arguments that we need to specify
in the `pivot_longer` function to accomplish this data transformation.
Figure 3\.9: Syntax for the `pivot_longer` function.
We use `pivot_longer` to combine the Toronto, Montréal,
Vancouver, Calgary, and Edmonton columns into a single column called `region`,
and create a column called `mother_tongue` that contains the count of how many
Canadians report each language as their mother tongue for each metropolitan
area. We use a colon `:` between Toronto and Edmonton to tell R to select all
the columns between Toronto and Edmonton:
```
lang_mother_tidy <- pivot_longer(lang_wide,
cols = Toronto:Edmonton,
names_to = "region",
values_to = "mother_tongue"
)
lang_mother_tidy
```
```
## # A tibble: 1,070 × 4
## category language region mother_tongue
## <chr> <chr> <chr> <dbl>
## 1 Aboriginal languages Aboriginal lang… Toron… 80
## 2 Aboriginal languages Aboriginal lang… Montr… 30
## 3 Aboriginal languages Aboriginal lang… Vanco… 70
## 4 Aboriginal languages Aboriginal lang… Calga… 20
## 5 Aboriginal languages Aboriginal lang… Edmon… 25
## 6 Non-Official & Non-Aboriginal languages Afrikaans Toron… 985
## 7 Non-Official & Non-Aboriginal languages Afrikaans Montr… 90
## 8 Non-Official & Non-Aboriginal languages Afrikaans Vanco… 1435
## 9 Non-Official & Non-Aboriginal languages Afrikaans Calga… 960
## 10 Non-Official & Non-Aboriginal languages Afrikaans Edmon… 575
## # ℹ 1,060 more rows
```
> **Note**: In the code above, the call to the
> `pivot_longer` function is split across several lines. This is allowed in
> certain cases; for example, when calling a function as above, as long as the
> line ends with a comma `,` R knows to keep reading on the next line.
> Splitting long lines like this across multiple lines is encouraged
> as it helps significantly with code readability. Generally speaking, you should
> limit each line of code to about 80 characters.
The data above is now tidy because all three criteria for tidy data have now
been met:
1. All the variables (`category`, `language`, `region` and `mother_tongue`) are
now their own columns in the data frame.
2. Each observation, (i.e., each language in a region) is in a single row.
3. Each value is a single cell, i.e., its row, column position in the data
frame is not shared with another value.
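As noted earlier, with the data in this tidy form the maximum mother tongue count across all five regions can be found with a single call to `max`. A minimal sketch (the `filter` line is an extra illustration using the function introduced in Chapter 1):

```
# Largest mother tongue count across all languages and regions.
max(lang_mother_tidy$mother_tongue)
# Which language and region that count belongs to.
filter(lang_mother_tidy, mother_tongue == max(mother_tongue))
```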
### 3\.4\.2 Tidying up: going from long to wide using `pivot_wider`
Suppose we have observations spread across multiple rows rather than in a single
row. For example, in Figure [3\.10](wrangling.html#fig:long-to-wide), the table on the left is in an
untidy, long format because the `count` column contains three variables
(population, commuter count, and year the city was incorporated)
and information about each observation
(here, population, commuter, and incorporated values for a region) is split across three rows.
Remember: one of the criteria for tidy data
is that each observation must be in a single row.
Using data in this format—where two or more variables are mixed together
in a single column—makes it harder to apply many usual `tidyverse` functions.
For example, finding the maximum number of commuters
would require an additional step of filtering for the commuter values
before the maximum can be computed.
In comparison, if the data were tidy,
all we would have to do is compute the maximum value for the commuter column.
To reshape this untidy data set to a tidy (and in this case, wider) format,
we need to create columns called “population”, “commuters”, and “incorporated.”
This is illustrated in the right table of Figure [3\.10](wrangling.html#fig:long-to-wide).
Figure 3\.10: Going from long to wide data.
To tidy this type of data in R, we can use the `pivot_wider` function.
The `pivot_wider` function generally increases the number of columns (widens)
and decreases the number of rows in a data set.
To learn how to use `pivot_wider`,
we will work through an example
with the `region_lang_top5_cities_long.csv` data set.
This data set contains the number of Canadians reporting
the primary language at home and work for five
major cities (Toronto, Montréal, Vancouver, Calgary, and Edmonton).
```
lang_long <- read_csv("data/region_lang_top5_cities_long.csv")
lang_long
```
```
## # A tibble: 2,140 × 5
## region category language type count
## <chr> <chr> <chr> <chr> <dbl>
## 1 Montréal Aboriginal languages Aboriginal languages, n.o.s. most_at_home 15
## 2 Montréal Aboriginal languages Aboriginal languages, n.o.s. most_at_work 0
## 3 Toronto Aboriginal languages Aboriginal languages, n.o.s. most_at_home 50
## 4 Toronto Aboriginal languages Aboriginal languages, n.o.s. most_at_work 0
## 5 Calgary Aboriginal languages Aboriginal languages, n.o.s. most_at_home 5
## 6 Calgary Aboriginal languages Aboriginal languages, n.o.s. most_at_work 0
## 7 Edmonton Aboriginal languages Aboriginal languages, n.o.s. most_at_home 10
## 8 Edmonton Aboriginal languages Aboriginal languages, n.o.s. most_at_work 0
## 9 Vancouver Aboriginal languages Aboriginal languages, n.o.s. most_at_home 15
## 10 Vancouver Aboriginal languages Aboriginal languages, n.o.s. most_at_work 0
## # ℹ 2,130 more rows
```
What makes the data set shown above untidy?
In this example, each observation is a language in a region.
However, each observation is split across multiple rows:
one where the count for `most_at_home` is recorded,
and the other where the count for `most_at_work` is recorded.
Suppose the goal with this data was to
visualize the relationship between the number of
Canadians reporting their primary language at home and work.
Doing that would be difficult with this data in its current form,
since these two variables are stored in the same column.
Figure [3\.11](wrangling.html#fig:img-pivot-wider-table) shows how this data
will be tidied using the `pivot_wider` function.
Figure 3\.11: Going from long to wide with the `pivot_wider` function.
Figure [3\.12](wrangling.html#fig:img-pivot-wider) details the arguments that we need to specify
in the `pivot_wider` function.
Figure 3\.12: Syntax for the `pivot_wider` function.
We will apply the function as detailed in Figure [3\.12](wrangling.html#fig:img-pivot-wider).
```
lang_home_tidy <- pivot_wider(lang_long,
names_from = type,
values_from = count
)
lang_home_tidy
```
```
## # A tibble: 1,070 × 5
## region category language most_at_home most_at_work
## <chr> <chr> <chr> <dbl> <dbl>
## 1 Montréal Aboriginal languages Aborigi… 15 0
## 2 Toronto Aboriginal languages Aborigi… 50 0
## 3 Calgary Aboriginal languages Aborigi… 5 0
## 4 Edmonton Aboriginal languages Aborigi… 10 0
## 5 Vancouver Aboriginal languages Aborigi… 15 0
## 6 Montréal Non-Official & Non-Aboriginal l… Afrikaa… 10 0
## 7 Toronto Non-Official & Non-Aboriginal l… Afrikaa… 265 0
## 8 Calgary Non-Official & Non-Aboriginal l… Afrikaa… 505 15
## 9 Edmonton Non-Official & Non-Aboriginal l… Afrikaa… 300 0
## 10 Vancouver Non-Official & Non-Aboriginal l… Afrikaa… 520 10
## # ℹ 1,060 more rows
```
The data above is now tidy! We can go through the three criteria again to check
that this data is a tidy data set.
1. All the statistical variables are their own columns in the data frame (i.e.,
`most_at_home`, and `most_at_work` have been separated into their own
columns in the data frame).
2. Each observation, (i.e., each language in a region) is in a single row.
3. Each value is a single cell (i.e., its row, column position in the data
frame is not shared with another value).
You might notice that we have the same number of columns in the tidy data set as
we did in the messy one. Therefore `pivot_wider` didn’t really “widen” the data,
as the name suggests. This is just because the original `type` column only had
two categories in it. If it had more than two, `pivot_wider` would have created
more columns, and we would see the data set “widen.”
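To see the widening effect, here is a toy sketch with three categories in the `names_from` column, mirroring the population/commuters/incorporated example described earlier (the values are placeholders, not census data):

```
toy_long <- tibble(
  region = c("A", "A", "A"),
  type = c("population", "commuters", "incorporated"),
  count = c(1, 2, 3)
)
# Three categories in `type` become three new columns.
pivot_wider(toy_long, names_from = type, values_from = count)
```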
### 3\.4\.3 Tidying up: using `separate` to deal with multiple delimiters
Data are also not considered tidy when multiple values are stored in the same
cell. The data set we show below is even messier than the ones we dealt with
above: the `Toronto`, `Montréal`, `Vancouver`, `Calgary`, and `Edmonton` columns
contain the number of Canadians reporting their primary language at home and
work in one column separated by the delimiter (`/`). The column names are the
values of a variable, *and* each value does not have its own cell! To turn this
messy data into tidy data, we’ll have to fix these issues.
```
lang_messy <- read_csv("data/region_lang_top5_cities_messy.csv")
lang_messy
```
```
## # A tibble: 214 × 7
## category language Toronto Montréal Vancouver Calgary Edmonton
## <chr> <chr> <chr> <chr> <chr> <chr> <chr>
## 1 Aboriginal languages Aborigi… 50/0 15/0 15/0 5/0 10/0
## 2 Non-Official & Non-Abor… Afrikaa… 265/0 10/0 520/10 505/15 300/0
## 3 Non-Official & Non-Abor… Afro-As… 185/10 65/0 10/0 15/0 20/0
## 4 Non-Official & Non-Abor… Akan (T… 4045/20 440/0 125/10 330/0 445/0
## 5 Non-Official & Non-Abor… Albanian 6380/2… 1445/20 530/10 620/25 370/10
## 6 Aboriginal languages Algonqu… 5/0 0/0 0/0 0/0 0/0
## 7 Aboriginal languages Algonqu… 0/0 10/0 0/0 0/0 0/0
## 8 Non-Official & Non-Abor… America… 720/245 70/0 300/140 85/25 190/85
## 9 Non-Official & Non-Abor… Amharic 3820/55 315/0 540/10 2730/50 1695/35
## 10 Non-Official & Non-Abor… Arabic 45025/… 72980/1… 8680/275 11010/… 10590/3…
## # ℹ 204 more rows
```
First we’ll use `pivot_longer` to create two columns, `region` and `value`,
similar to what we did previously.
The new `region` column will contain the region names,
and the new column `value` will be a temporary holding place for the
data that we need to further separate, i.e., the
number of Canadians reporting their primary language at home and work.
```
lang_messy_longer <- pivot_longer(lang_messy,
cols = Toronto:Edmonton,
names_to = "region",
values_to = "value"
)
lang_messy_longer
```
```
## # A tibble: 1,070 × 4
## category language region value
## <chr> <chr> <chr> <chr>
## 1 Aboriginal languages Aboriginal languages, n… Toron… 50/0
## 2 Aboriginal languages Aboriginal languages, n… Montr… 15/0
## 3 Aboriginal languages Aboriginal languages, n… Vanco… 15/0
## 4 Aboriginal languages Aboriginal languages, n… Calga… 5/0
## 5 Aboriginal languages Aboriginal languages, n… Edmon… 10/0
## 6 Non-Official & Non-Aboriginal languages Afrikaans Toron… 265/0
## 7 Non-Official & Non-Aboriginal languages Afrikaans Montr… 10/0
## 8 Non-Official & Non-Aboriginal languages Afrikaans Vanco… 520/…
## 9 Non-Official & Non-Aboriginal languages Afrikaans Calga… 505/…
## 10 Non-Official & Non-Aboriginal languages Afrikaans Edmon… 300/0
## # ℹ 1,060 more rows
```
Next we’ll use `separate` to split the `value` column into two columns.
One column will contain only the counts of Canadians
that speak each language most at home,
and the other will contain the counts of Canadians
that speak each language most at work for each region.
Figure [3\.13](wrangling.html#fig:img-separate)
outlines what we need to specify to use `separate`.
Figure 3\.13: Syntax for the `separate` function.
```
tidy_lang <- separate(lang_messy_longer,
col = value,
into = c("most_at_home", "most_at_work"),
sep = "/"
)
tidy_lang
```
```
## # A tibble: 1,070 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <chr> <chr>
## 1 Aboriginal languages Aborigi… Toron… 50 0
## 2 Aboriginal languages Aborigi… Montr… 15 0
## 3 Aboriginal languages Aborigi… Vanco… 15 0
## 4 Aboriginal languages Aborigi… Calga… 5 0
## 5 Aboriginal languages Aborigi… Edmon… 10 0
## 6 Non-Official & Non-Aboriginal lang… Afrikaa… Toron… 265 0
## 7 Non-Official & Non-Aboriginal lang… Afrikaa… Montr… 10 0
## 8 Non-Official & Non-Aboriginal lang… Afrikaa… Vanco… 520 10
## 9 Non-Official & Non-Aboriginal lang… Afrikaa… Calga… 505 15
## 10 Non-Official & Non-Aboriginal lang… Afrikaa… Edmon… 300 0
## # ℹ 1,060 more rows
```
Is this data set now tidy? If we recall the three criteria for tidy data:
* each row is a single observation,
* each column is a single variable, and
* each value is a single cell.
We can see that this data now satisfies all three criteria, making it easier to
analyze. But we aren’t done yet! Notice in the table above that the word
`<chr>` appears beneath each of the column names. The word under the column name
indicates the data type of each column. Here all of the variables are
“character” data types. Recall that character data types are letter(s) or digit(s)
surrounded by quotes. In the previous example in Section [3\.4\.2](wrangling.html#pivot-wider), the
`most_at_home` and `most_at_work` variables were `<dbl>` (double)—you can
verify this by looking at the tables in the previous sections—which is a type
of numeric data. The change is due to the delimiter (`/`) in this messy data set:
R read these columns in as character types, and by default,
`separate` returns columns as character data types.
It makes sense for `region`, `category`, and `language` to be stored as a
character (or perhaps factor) type. However, suppose we want to apply functions that treat the
`most_at_home` and `most_at_work` columns as numbers (e.g., finding rows
above a numeric threshold of a column).
That won't be possible while these variables are stored as `character` types.
Fortunately, the `separate` function provides a natural way to fix problems
like this: we can set `convert = TRUE` to convert the `most_at_home`
and `most_at_work` columns to the correct data type.
```
tidy_lang <- separate(lang_messy_longer,
col = value,
into = c("most_at_home", "most_at_work"),
sep = "/",
convert = TRUE
)
tidy_lang
```
```
## # A tibble: 1,070 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Aboriginal languages Aborigi… Toron… 50 0
## 2 Aboriginal languages Aborigi… Montr… 15 0
## 3 Aboriginal languages Aborigi… Vanco… 15 0
## 4 Aboriginal languages Aborigi… Calga… 5 0
## 5 Aboriginal languages Aborigi… Edmon… 10 0
## 6 Non-Official & Non-Aboriginal lang… Afrikaa… Toron… 265 0
## 7 Non-Official & Non-Aboriginal lang… Afrikaa… Montr… 10 0
## 8 Non-Official & Non-Aboriginal lang… Afrikaa… Vanco… 520 10
## 9 Non-Official & Non-Aboriginal lang… Afrikaa… Calga… 505 15
## 10 Non-Official & Non-Aboriginal lang… Afrikaa… Edmon… 300 0
## # ℹ 1,060 more rows
```
Now we see `<int>` appears under the `most_at_home` and `most_at_work` columns,
indicating they are integer data types (i.e., numbers)!
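As a quick check that the conversion worked, numeric comparisons on these columns now behave as expected. A sketch using `filter` (introduced in Chapter 1 and revisited below), with an arbitrary threshold of 10,000 chosen only for illustration:

```
# Rows where more than 10,000 people report the language as their
# primary language at home.
filter(tidy_lang, most_at_home > 10000)
```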
3\.5 Using `select` to extract a range of columns
-------------------------------------------------
Now that the `tidy_lang` data is indeed *tidy*, we can start manipulating it
using the powerful suite of functions from the `tidyverse`.
For the first example, recall the `select` function from Chapter [1](intro.html#intro),
which lets us create a subset of columns from a data frame.
Suppose we wanted to select only the columns `language`, `region`,
`most_at_home` and `most_at_work` from the `tidy_lang` data set. Using what we
learned in Chapter [1](intro.html#intro), we would pass the `tidy_lang` data frame as
well as all of these column names into the `select` function:
```
selected_columns <- select(tidy_lang,
language,
region,
most_at_home,
most_at_work)
selected_columns
```
```
## # A tibble: 1,070 × 4
## language region most_at_home most_at_work
## <chr> <chr> <int> <int>
## 1 Aboriginal languages, n.o.s. Toronto 50 0
## 2 Aboriginal languages, n.o.s. Montréal 15 0
## 3 Aboriginal languages, n.o.s. Vancouver 15 0
## 4 Aboriginal languages, n.o.s. Calgary 5 0
## 5 Aboriginal languages, n.o.s. Edmonton 10 0
## 6 Afrikaans Toronto 265 0
## 7 Afrikaans Montréal 10 0
## 8 Afrikaans Vancouver 520 10
## 9 Afrikaans Calgary 505 15
## 10 Afrikaans Edmonton 300 0
## # ℹ 1,060 more rows
```
Here we wrote out the names of each of the columns. However, this method is
time\-consuming, especially if you have a lot of columns! Another approach is to
use a “select helper”. Select helpers are operators that make it easier for
us to select columns. For instance, we can use a select helper to choose a
range of columns rather than typing each column name out. To do this, we use the
colon (`:`) operator to denote the range. For example, to get all the columns in
the `tidy_lang` data frame from `language` to `most_at_work` we pass
`language:most_at_work` as the second argument to the `select` function.
```
column_range <- select(tidy_lang, language:most_at_work)
column_range
```
```
## # A tibble: 1,070 × 4
## language region most_at_home most_at_work
## <chr> <chr> <int> <int>
## 1 Aboriginal languages, n.o.s. Toronto 50 0
## 2 Aboriginal languages, n.o.s. Montréal 15 0
## 3 Aboriginal languages, n.o.s. Vancouver 15 0
## 4 Aboriginal languages, n.o.s. Calgary 5 0
## 5 Aboriginal languages, n.o.s. Edmonton 10 0
## 6 Afrikaans Toronto 265 0
## 7 Afrikaans Montréal 10 0
## 8 Afrikaans Vancouver 520 10
## 9 Afrikaans Calgary 505 15
## 10 Afrikaans Edmonton 300 0
## # ℹ 1,060 more rows
```
Notice that we get the same output as we did above,
but with less (and clearer!) code. This type of operator
is especially handy for large data sets.
Suppose instead we wanted to extract columns that followed a particular pattern
rather than just selecting a range. For example, let’s say we wanted only to select the
columns `most_at_home` and `most_at_work`. There are other helpers that allow
us to select variables based on their names. In particular, we can use the `select` helper
`starts_with` to choose only the columns that start with the word “most”:
```
select(tidy_lang, starts_with("most"))
```
```
## # A tibble: 1,070 × 2
## most_at_home most_at_work
## <int> <int>
## 1 50 0
## 2 15 0
## 3 15 0
## 4 5 0
## 5 10 0
## 6 265 0
## 7 10 0
## 8 520 10
## 9 505 15
## 10 300 0
## # ℹ 1,060 more rows
```
We could also have chosen the columns containing an underscore `_` by adding
`contains("_")` as the second argument in the `select` function, since we notice
the columns we want contain underscores and the others don’t.
```
select(tidy_lang, contains("_"))
```
```
## # A tibble: 1,070 × 2
## most_at_home most_at_work
## <int> <int>
## 1 50 0
## 2 15 0
## 3 15 0
## 4 5 0
## 5 10 0
## 6 265 0
## 7 10 0
## 8 520 10
## 9 505 15
## 10 300 0
## # ℹ 1,060 more rows
```
There are many different `select` helpers that select
variables based on certain criteria.
The additional resources section at the end of this chapter
provides a comprehensive resource on `select` helpers.
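For example, `ends_with` and `matches` are two more helpers from the same family; a quick sketch (these particular helpers are not used elsewhere in this chapter):

```
# Columns whose names end with "work".
select(tidy_lang, ends_with("work"))
# Columns whose names match a regular expression.
select(tidy_lang, matches("at_"))
```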
3\.6 Using `filter` to extract rows
-----------------------------------
Next, we revisit the `filter` function from Chapter [1](intro.html#intro),
which lets us create a subset of rows from a data frame.
Recall the two main arguments to the `filter` function:
the first is the name of the data frame object, and
the second is a *logical statement* to use when filtering the rows.
`filter` works by returning the rows where the logical statement evaluates to `TRUE`.
This section will highlight more advanced usage of the `filter` function.
In particular, this section provides an in\-depth treatment of the variety of logical statements
one can use in the `filter` function to select subsets of rows.
### 3\.6\.1 Extracting rows that have a certain value with `==`
Suppose we are only interested in the subset of rows in `tidy_lang` corresponding to the
official languages of Canada (English and French).
We can `filter` for these rows by using the *equivalency operator* (`==`)
to compare the values of the `category` column
with the value `"Official languages"`.
With these arguments, `filter` returns a data frame with all the columns
of the input data frame
but only the rows we asked for in the logical statement, i.e.,
those where the `category` column holds the value `"Official languages"`.
We name this data frame `official_langs`.
```
official_langs <- filter(tidy_lang, category == "Official languages")
official_langs
```
```
## # A tibble: 10 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages English Toronto 3836770 3218725
## 2 Official languages English Montréal 620510 412120
## 3 Official languages English Vancouver 1622735 1330555
## 4 Official languages English Calgary 1065070 844740
## 5 Official languages English Edmonton 1050410 792700
## 6 Official languages French Toronto 29800 11940
## 7 Official languages French Montréal 2669195 1607550
## 8 Official languages French Vancouver 8630 3245
## 9 Official languages French Calgary 8630 2140
## 10 Official languages French Edmonton 10950 2520
```
### 3\.6\.2 Extracting rows that do not have a certain value with `!=`
What if we want all the other language categories in the data set *except* for
those in the `"Official languages"` category? We can accomplish this with the `!=`
operator, which means “not equal to”. So if we want to find all the rows
where the `category` does *not* equal `"Official languages"` we write the code
below.
```
filter(tidy_lang, category != "Official languages")
```
```
## # A tibble: 1,060 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Aboriginal languages Aborigi… Toron… 50 0
## 2 Aboriginal languages Aborigi… Montr… 15 0
## 3 Aboriginal languages Aborigi… Vanco… 15 0
## 4 Aboriginal languages Aborigi… Calga… 5 0
## 5 Aboriginal languages Aborigi… Edmon… 10 0
## 6 Non-Official & Non-Aboriginal lang… Afrikaa… Toron… 265 0
## 7 Non-Official & Non-Aboriginal lang… Afrikaa… Montr… 10 0
## 8 Non-Official & Non-Aboriginal lang… Afrikaa… Vanco… 520 10
## 9 Non-Official & Non-Aboriginal lang… Afrikaa… Calga… 505 15
## 10 Non-Official & Non-Aboriginal lang… Afrikaa… Edmon… 300 0
## # ℹ 1,050 more rows
```
### 3\.6\.3 Extracting rows satisfying multiple conditions using `,` or `&`
Suppose now we want to look at only the rows
for the French language in Montréal.
To do this, we need to filter the data set
to find rows that satisfy multiple conditions simultaneously.
We can do this with the comma symbol (`,`), which in the case of `filter`
is interpreted by R as “and”.
We write the code as shown below to filter the `official_langs` data frame
to subset the rows where `region == "Montréal"`
*and* the `language == "French"`.
```
filter(official_langs, region == "Montréal", language == "French")
```
```
## # A tibble: 1 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages French Montréal 2669195 1607550
```
We can also use the ampersand (`&`) logical operator, which gives
us cases where *both* one condition *and* another condition
are satisfied. You can use either comma (`,`) or ampersand (`&`) in the `filter`
function interchangeably.
```
filter(official_langs, region == "Montréal" & language == "French")
```
```
## # A tibble: 1 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages French Montréal 2669195 1607550
```
### 3\.6\.4 Extracting rows satisfying at least one condition using `|`
Suppose we were interested in only those rows corresponding to cities in Alberta
in the `official_langs` data set (Edmonton and Calgary).
We can’t use `,` as we did above because `region`
cannot be both Edmonton *and* Calgary simultaneously.
Instead, we can use the vertical pipe (`|`) logical operator,
which gives us the cases where one condition *or*
another condition *or* both are satisfied.
In the code below, we ask R to return the rows
where the `region` column is equal to “Calgary” *or* “Edmonton”.
```
filter(official_langs, region == "Calgary" | region == "Edmonton")
```
```
## # A tibble: 4 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages English Calgary 1065070 844740
## 2 Official languages English Edmonton 1050410 792700
## 3 Official languages French Calgary 8630 2140
## 4 Official languages French Edmonton 10950 2520
```
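As an aside, the same subset can be written a little more compactly with the `%in%` operator introduced in the next section; a quick sketch:

```
filter(official_langs, region %in% c("Calgary", "Edmonton"))
```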
### 3\.6\.5 Extracting rows with values in a vector using `%in%`
Next, suppose we want to see the populations of our five cities.
Let’s read in the `region_data.csv` file
that comes from the 2016 Canadian census,
as it contains statistics for number of households, land area, population
and number of dwellings for different regions.
```
region_data <- read_csv("data/region_data.csv")
region_data
```
```
## # A tibble: 35 × 5
## region households area population dwellings
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Belleville 43002 1355. 103472 45050
## 2 Lethbridge 45696 3047. 117394 48317
## 3 Thunder Bay 52545 2618. 121621 57146
## 4 Peterborough 50533 1637. 121721 55662
## 5 Saint John 52872 3793. 126202 58398
## 6 Brantford 52530 1086. 134203 54419
## 7 Moncton 61769 2625. 144810 66699
## 8 Guelph 59280 604. 151984 63324
## 9 Trois-Rivières 72502 1053. 156042 77734
## 10 Saguenay 72479 3079. 160980 77968
## # ℹ 25 more rows
```
To get the population of the five cities
we can filter the data set using the `%in%` operator.
The `%in%` operator is used to see if an element belongs to a vector.
Here we are filtering for rows where the value in the `region` column
matches any of the five cities we are interested in: Toronto, Montréal,
Vancouver, Calgary, and Edmonton.
```
city_names <- c("Toronto", "Montréal", "Vancouver", "Calgary", "Edmonton")
five_cities <- filter(region_data,
region %in% city_names)
five_cities
```
```
## # A tibble: 5 × 5
## region households area population dwellings
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Edmonton 502143 9858. 1321426 537634
## 2 Calgary 519693 5242. 1392609 544870
## 3 Vancouver 960894 3040. 2463431 1027613
## 4 Montréal 1727310 4638. 4098927 1823281
## 5 Toronto 2135909 6270. 5928040 2235145
```
> **Note:** What’s the difference between `==` and `%in%`? Suppose we have two
> vectors, `vectorA` and `vectorB`. If you type `vectorA == vectorB` into R it
> will compare the vectors element by element. R checks if the first element of
> `vectorA` equals the first element of `vectorB`, the second element of
> `vectorA` equals the second element of `vectorB`, and so on. On the other hand,
> `vectorA %in% vectorB` compares the first element of `vectorA` to all the
> elements in `vectorB`. Then the second element of `vectorA` is compared
> to all the elements in `vectorB`, and so on. Notice the difference between `==` and
> `%in%` in the example below.
>
>
>
> ```
> c("Vancouver", "Toronto") == c("Toronto", "Vancouver")
> ```
>
>
> ```
> ## [1] FALSE FALSE
> ```
>
>
> ```
> c("Vancouver", "Toronto") %in% c("Toronto", "Vancouver")
> ```
>
>
> ```
> ## [1] TRUE TRUE
> ```
### 3\.6\.6 Extracting rows above or below a threshold using `>` and `<`
We saw in Section [3\.6\.3](wrangling.html#filter-and) that
2,669,195 people reported
speaking French in Montréal as their primary language at home.
If we are interested in finding the official languages in regions
where more people report speaking them as their primary language at home
than report French in Montréal, then we can use `filter` to obtain rows
where the value of `most_at_home` is greater than
2,669,195\.
We use the `>` symbol to look for values *above* a threshold, and the `<` symbol
to look for values *below* a threshold. The `>=` and `<=` symbols similarly look
for *equal to or above* a threshold and *equal to or below* a
threshold.
```
filter(official_langs, most_at_home > 2669195)
```
```
## # A tibble: 1 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages English Toronto 3836770 3218725
```
`filter` returns a data frame with only one row, indicating that when
considering the official languages,
only English in Toronto is reported by more people
as their primary language at home
than French in Montréal according to the 2016 Canadian census.
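Rather than typing the threshold 2,669,195 by hand, one could also extract it from the data first. A minimal sketch (the intermediate objects `french_montreal` and `threshold` are named here only for illustration):

```
# Pull out the Montréal French row, grab its most_at_home value, and use it
# as the threshold in the comparison.
french_montreal <- filter(official_langs, region == "Montréal", language == "French")
threshold <- french_montreal$most_at_home
filter(official_langs, most_at_home > threshold)
```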
### 3\.6\.1 Extracting rows that have a certain value with `==`
Suppose we are only interested in the subset of rows in `tidy_lang` corresponding to the
official languages of Canada (English and French).
We can `filter` for these rows by using the *equivalency operator* (`==`)
to compare the values of the `category` column
with the value `"Official languages"`.
With these arguments, `filter` returns a data frame with all the columns
of the input data frame
but only the rows we asked for in the logical statement, i.e.,
those where the `category` column holds the value `"Official languages"`.
We name this data frame `official_langs`.
```
official_langs <- filter(tidy_lang, category == "Official languages")
official_langs
```
```
## # A tibble: 10 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages English Toronto 3836770 3218725
## 2 Official languages English Montréal 620510 412120
## 3 Official languages English Vancouver 1622735 1330555
## 4 Official languages English Calgary 1065070 844740
## 5 Official languages English Edmonton 1050410 792700
## 6 Official languages French Toronto 29800 11940
## 7 Official languages French Montréal 2669195 1607550
## 8 Official languages French Vancouver 8630 3245
## 9 Official languages French Calgary 8630 2140
## 10 Official languages French Edmonton 10950 2520
```
### 3\.6\.2 Extracting rows that do not have a certain value with `!=`
What if we want all the other language categories in the data set *except* for
those in the `"Official languages"` category? We can accomplish this with the `!=`
operator, which means “not equal to”. So if we want to find all the rows
where the `category` does *not* equal `"Official languages"` we write the code
below.
```
filter(tidy_lang, category != "Official languages")
```
```
## # A tibble: 1,060 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Aboriginal languages Aborigi… Toron… 50 0
## 2 Aboriginal languages Aborigi… Montr… 15 0
## 3 Aboriginal languages Aborigi… Vanco… 15 0
## 4 Aboriginal languages Aborigi… Calga… 5 0
## 5 Aboriginal languages Aborigi… Edmon… 10 0
## 6 Non-Official & Non-Aboriginal lang… Afrikaa… Toron… 265 0
## 7 Non-Official & Non-Aboriginal lang… Afrikaa… Montr… 10 0
## 8 Non-Official & Non-Aboriginal lang… Afrikaa… Vanco… 520 10
## 9 Non-Official & Non-Aboriginal lang… Afrikaa… Calga… 505 15
## 10 Non-Official & Non-Aboriginal lang… Afrikaa… Edmon… 300 0
## # ℹ 1,050 more rows
```
### 3\.6\.3 Extracting rows satisfying multiple conditions using `,` or `&`
Suppose now we want to look at only the rows
for the French language in Montréal.
To do this, we need to filter the data set
to find rows that satisfy multiple conditions simultaneously.
We can do this with the comma symbol (`,`), which in the case of `filter`
is interpreted by R as “and”.
We write the code as shown below to filter the `official_langs` data frame
to subset the rows where `region == "Montréal"`
*and* the `language == "French"`.
```
filter(official_langs, region == "Montréal", language == "French")
```
```
## # A tibble: 1 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages French Montréal 2669195 1607550
```
We can also use the ampersand (`&`) logical operator, which gives
us cases where *both* one condition *and* another condition
are satisfied. You can use either comma (`,`) or ampersand (`&`) in the `filter`
function interchangeably.
```
filter(official_langs, region == "Montréal" & language == "French")
```
```
## # A tibble: 1 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages French Montréal 2669195 1607550
```
### 3\.6\.4 Extracting rows satisfying at least one condition using `|`
Suppose we were interested in only those rows corresponding to cities in Alberta
in the `official_langs` data set (Edmonton and Calgary).
We can’t use `,` as we did above because `region`
cannot be both Edmonton *and* Calgary simultaneously.
Instead, we can use the vertical pipe (`|`) logical operator,
which gives us the cases where one condition *or*
another condition *or* both are satisfied.
In the code below, we ask R to return the rows
where the `region` columns are equal to “Calgary” *or* “Edmonton”.
```
filter(official_langs, region == "Calgary" | region == "Edmonton")
```
```
## # A tibble: 4 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages English Calgary 1065070 844740
## 2 Official languages English Edmonton 1050410 792700
## 3 Official languages French Calgary 8630 2140
## 4 Official languages French Edmonton 10950 2520
```
### 3\.6\.5 Extracting rows with values in a vector using `%in%`
Next, suppose we want to see the populations of our five cities.
Let’s read in the `region_data.csv` file
that comes from the 2016 Canadian census,
as it contains statistics for number of households, land area, population
and number of dwellings for different regions.
```
region_data <- read_csv("data/region_data.csv")
region_data
```
```
## # A tibble: 35 × 5
## region households area population dwellings
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Belleville 43002 1355. 103472 45050
## 2 Lethbridge 45696 3047. 117394 48317
## 3 Thunder Bay 52545 2618. 121621 57146
## 4 Peterborough 50533 1637. 121721 55662
## 5 Saint John 52872 3793. 126202 58398
## 6 Brantford 52530 1086. 134203 54419
## 7 Moncton 61769 2625. 144810 66699
## 8 Guelph 59280 604. 151984 63324
## 9 Trois-Rivières 72502 1053. 156042 77734
## 10 Saguenay 72479 3079. 160980 77968
## # ℹ 25 more rows
```
To get the population of the five cities
we can filter the data set using the `%in%` operator.
The `%in%` operator is used to see if an element belongs to a vector.
Here we are filtering for rows where the value in the `region` column
matches any of the five cities we are interested in: Toronto, Montréal,
Vancouver, Calgary, and Edmonton.
```
city_names <- c("Toronto", "Montréal", "Vancouver", "Calgary", "Edmonton")
five_cities <- filter(region_data,
region %in% city_names)
five_cities
```
```
## # A tibble: 5 × 5
## region households area population dwellings
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Edmonton 502143 9858. 1321426 537634
## 2 Calgary 519693 5242. 1392609 544870
## 3 Vancouver 960894 3040. 2463431 1027613
## 4 Montréal 1727310 4638. 4098927 1823281
## 5 Toronto 2135909 6270. 5928040 2235145
```
> **Note:** What’s the difference between `==` and `%in%`? Suppose we have two
> vectors, `vectorA` and `vectorB`. If you type `vectorA == vectorB` into R it
> will compare the vectors element by element. R checks if the first element of
> `vectorA` equals the first element of `vectorB`, the second element of
> `vectorA` equals the second element of `vectorB`, and so on. On the other hand,
> `vectorA %in% vectorB` compares the first element of `vectorA` to all the
> elements in `vectorB`. Then the second element of `vectorA` is compared
> to all the elements in `vectorB`, and so on. Notice the difference between `==` and
> `%in%` in the example below.
>
>
>
> ```
> c("Vancouver", "Toronto") == c("Toronto", "Vancouver")
> ```
>
>
> ```
> ## [1] FALSE FALSE
> ```
>
>
> ```
> c("Vancouver", "Toronto") %in% c("Toronto", "Vancouver")
> ```
>
>
> ```
> ## [1] TRUE TRUE
> ```
### 3\.6\.6 Extracting rows above or below a threshold using `>` and `<`
We saw in Section [3\.6\.3](wrangling.html#filter-and) that
2,669,195 people reported
speaking French in Montréal as their primary language at home.
If we are interested in finding the official languages in regions
where more people speak them as their primary language at home
than speak French in Montréal, then we can use `filter` to obtain rows
where the value of `most_at_home` is greater than
2,669,195\.
We use the `>` symbol to look for values *above* a threshold, and the `<` symbol
to look for values *below* a threshold. The `>=` and `<=` symbols similarly look
for *equal to or above* a threshold and *equal to or below* a
threshold.
```
filter(official_langs, most_at_home > 2669195)
```
```
## # A tibble: 1 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages English Toronto 3836770 3218725
```
`filter` returns a data frame with only one row, indicating that when
considering the official languages,
only English in Toronto is reported by more people
as their primary language at home
than French in Montréal according to the 2016 Canadian census.
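If we had instead used `>=` (greater than or equal to) with the same threshold,
the comparison would also keep the row where `most_at_home` is exactly
2,669,195, so the result would contain two rows: English in Toronto and
French in Montréal.
```
filter(official_langs, most_at_home >= 2669195)
```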
3\.7 Using `mutate` to modify or add columns
--------------------------------------------
### 3\.7\.1 Using `mutate` to modify columns
In Section [3\.4\.3](wrangling.html#separate),
when we first read in the `"region_lang_top5_cities_messy.csv"` data,
all of the variables were “character” data types.
During the tidying process,
we used the `convert` argument from the `separate` function
to convert the `most_at_home` and `most_at_work` columns
to the desired integer (i.e., numeric class) data types.
But suppose we didn’t use the `convert` argument,
and needed to modify the column type some other way.
Below we create such a situation
so that we can demonstrate how to use `mutate`
to change the column types of a data frame.
`mutate` is a useful function to modify or create new data frame columns.
```
lang_messy <- read_csv("data/region_lang_top5_cities_messy.csv")
lang_messy_longer <- pivot_longer(lang_messy,
cols = Toronto:Edmonton,
names_to = "region",
values_to = "value")
tidy_lang_chr <- separate(lang_messy_longer, col = value,
into = c("most_at_home", "most_at_work"),
sep = "/")
official_langs_chr <- filter(tidy_lang_chr, category == "Official languages")
official_langs_chr
```
```
## # A tibble: 10 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <chr> <chr>
## 1 Official languages English Toronto 3836770 3218725
## 2 Official languages English Montréal 620510 412120
## 3 Official languages English Vancouver 1622735 1330555
## 4 Official languages English Calgary 1065070 844740
## 5 Official languages English Edmonton 1050410 792700
## 6 Official languages French Toronto 29800 11940
## 7 Official languages French Montréal 2669195 1607550
## 8 Official languages French Vancouver 8630 3245
## 9 Official languages French Calgary 8630 2140
## 10 Official languages French Edmonton 10950 2520
```
To use `mutate`, again we first specify the data set in the first argument,
and in the following arguments,
we specify the name of the column we want to modify or create
(here `most_at_home` and `most_at_work`), an `=` sign,
and then the function we want to apply (here `as.numeric`).
In the function we want to apply,
we refer directly to the column name upon which we want it to act
(here `most_at_home` and `most_at_work`).
In our example, we are naming the columns the same
names as columns that already exist in the data frame
(“most\_at\_home”, “most\_at\_work”)
and this will cause `mutate` to *overwrite* those columns
(also referred to as modifying those columns *in\-place*).
If we were to give the columns a new name,
then `mutate` would create new columns with the names we specified.
`mutate`’s general syntax is detailed in Figure [3\.14](wrangling.html#fig:img-mutate).
Figure 3\.14: Syntax for the `mutate` function.
Below we use `mutate` to convert the columns `most_at_home` and `most_at_work`
to numeric data types in the `official_langs` data set as described in Figure
[3\.14](wrangling.html#fig:img-mutate):
```
official_langs_numeric <- mutate(official_langs_chr,
most_at_home = as.numeric(most_at_home),
most_at_work = as.numeric(most_at_work)
)
official_langs_numeric
```
```
## # A tibble: 10 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <dbl> <dbl>
## 1 Official languages English Toronto 3836770 3218725
## 2 Official languages English Montréal 620510 412120
## 3 Official languages English Vancouver 1622735 1330555
## 4 Official languages English Calgary 1065070 844740
## 5 Official languages English Edmonton 1050410 792700
## 6 Official languages French Toronto 29800 11940
## 7 Official languages French Montréal 2669195 1607550
## 8 Official languages French Vancouver 8630 3245
## 9 Official languages French Calgary 8630 2140
## 10 Official languages French Edmonton 10950 2520
```
Now we see `<dbl>` appears under the `most_at_home` and `most_at_work` columns,
indicating they are double data types (which is a numeric data type)!
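If we wanted to match the original integer (`<int>`) column types exactly,
we could use `as.integer` instead of `as.numeric` in the same `mutate` call;
for example, storing the result in a new data frame, `official_langs_integer`:
```
official_langs_integer <- mutate(official_langs_chr,
  most_at_home = as.integer(most_at_home),
  most_at_work = as.integer(most_at_work)
)
```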
### 3\.7\.2 Using `mutate` to create new columns
We can see in the table that
3,836,770 people reported
speaking English in Toronto as their primary language at home, according to
the 2016 Canadian census. What does this number mean to us? To understand this
number, we need context. In particular, how many people were in Toronto when
this data was collected? From the 2016 Canadian census profile, the population
of Toronto was reported to be
5,928,040 people.
The number of people who report that English is their primary language at home
is much more meaningful when we report it in this context.
We can even go a step further and transform this count to a relative frequency
or proportion.
We can do this by dividing the number of people reporting a given language
as their primary language at home by the number of people who live in Toronto.
For example,
the proportion of people who reported that their primary language at home
was English in the 2016 Canadian census was
0\.65
in Toronto.
Let’s use `mutate` to create a new column in our data frame
that holds the proportion of people who speak English
for our five cities of focus in this chapter.
To accomplish this, we will need to do two tasks
beforehand:
1. Create a vector containing the population values for the cities.
2. Filter the `official_langs` data frame
so that we only keep the rows where the language is English.
To create a vector containing the population values for the five cities
(Toronto, Montréal, Vancouver, Calgary, Edmonton),
we will use the `c` function (recall that `c` stands for “concatenate”):
```
city_pops <- c(5928040, 4098927, 2463431, 1392609, 1321426)
city_pops
```
```
## [1] 5928040 4098927 2463431 1392609 1321426
```
And next, we will filter the `official_langs` data frame
so that we only keep the rows where the language is English.
We will name the new data frame we get from this `english_langs`:
```
english_langs <- filter(official_langs, language == "English")
english_langs
```
```
## # A tibble: 5 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Official languages English Toronto 3836770 3218725
## 2 Official languages English Montréal 620510 412120
## 3 Official languages English Vancouver 1622735 1330555
## 4 Official languages English Calgary 1065070 844740
## 5 Official languages English Edmonton 1050410 792700
```
Finally, we can use `mutate` to create a new column,
named `most_at_home_proportion`, whose value corresponds to
the proportion of people reporting English as their primary
language at home.
We will compute this by dividing the column by our vector of city populations.
```
english_langs <- mutate(english_langs,
most_at_home_proportion = most_at_home / city_pops)
english_langs
```
```
## # A tibble: 5 × 6
## category language region most_at_home most_at_work most_at_home_proport…¹
## <chr> <chr> <chr> <int> <int> <dbl>
## 1 Official lan… English Toron… 3836770 3218725 0.647
## 2 Official lan… English Montr… 620510 412120 0.151
## 3 Official lan… English Vanco… 1622735 1330555 0.659
## 4 Official lan… English Calga… 1065070 844740 0.765
## 5 Official lan… English Edmon… 1050410 792700 0.795
## # ℹ abbreviated name: ¹most_at_home_proportion
```
In the computation above, we had to ensure that we ordered the `city_pops` vector in the
same order as the cities were listed in the `english_langs` data frame.
This is because R will perform the division computation we did by dividing
each element of the `most_at_home` column by each element of the
`city_pops` vector, matching them up by position.
Failing to do this would have resulted in the incorrect math being performed.
> **Note:** In more advanced data wrangling,
> one might solve this problem in a less error\-prone way through using
> a technique called “joins.”
> We link to resources that discuss this in the additional
> resources at the end of this chapter.
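> As a quick preview of that idea, here is a rough sketch that uses the
> `left_join` function from `dplyr` together with the `five_cities` data frame
> from Section 3\.6\.5: we attach each city's population by matching on the
> `region` column, and then compute the proportion from the joined data.
>
>
>
> ```
> english_langs_joined <- left_join(english_langs,
>                                   select(five_cities, region, population),
>                                   by = "region")
> mutate(english_langs_joined,
>        most_at_home_proportion = most_at_home / population)
> ```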
3\.8 Combining functions using the pipe operator, `|>`
------------------------------------------------------
In R, we often have to call multiple functions in a sequence to process a data
frame. The basic ways of doing this can become quickly unreadable if there are
many steps. For example, suppose we need to perform three operations on a data
frame called `data`:
1. add a new column `new_col` that is double another `old_col`,
2. filter for rows where another column, `other_col`, is more than 5, and
3. select only the new column `new_col` for those rows.
One way of performing these three steps is to just write
multiple lines of code, storing temporary objects as you go:
```
output_1 <- mutate(data, new_col = old_col * 2)
output_2 <- filter(output_1, other_col > 5)
output <- select(output_2, new_col)
```
This is difficult to understand for multiple reasons. The reader may be tricked
into thinking the named `output_1` and `output_2` objects are important for some
reason, when in fact they are just temporary intermediate computations. Further, the
reader has to look through and find where `output_1` and `output_2` are used in
each subsequent line.
Another option for doing this would be to *compose* the functions:
```
output <- select(filter(mutate(data, new_col = old_col * 2),
other_col > 5),
new_col)
```
Code like this can also be difficult to understand. Functions compose (reading
from left to right) in the *opposite order* in which they are computed by R
(above, `mutate` happens first, then `filter`, then `select`). It is also just a
really long line of code to read in one go.
The *pipe operator* (`|>`) solves this problem, resulting in cleaner and
easier\-to\-follow code. `|>` is built into R so you don’t need to load any
packages to use it.
You can think of the pipe as a physical pipe. It takes the output from the
function on the left\-hand side of the pipe, and passes it as the first argument
to the function on the right\-hand side of the pipe.
The code below accomplishes the same thing as the previous
two code blocks:
```
output <- data |>
mutate(new_col = old_col * 2) |>
filter(other_col > 5) |>
select(new_col)
```
> **Note:** You might also have noticed that we split the function calls across
> lines after the pipe, similar to when we did this earlier in the chapter
> for long function calls. Again, this is allowed and recommended, especially when
> the piped function calls create a long line of code. Doing this makes
> your code more readable. When you do this, it is important to end each line
> with the pipe operator `|>` to tell R that your code is continuing onto the
> next line.
> **Note:** In this textbook, we will be using the base R pipe operator syntax, `|>`.
> This base R `|>` pipe operator was inspired by a previous version of the pipe
> operator, `%>%`. The `%>%` pipe operator is not built into R
> and is from the `magrittr` R package.
> The `tidyverse` metapackage imports the `%>%` pipe operator via `dplyr`
> (which in turn imports the `magrittr` R package).
> There are some other differences between `%>%` and `|>` related to
> more advanced R uses, such as sharing and distributing code as R packages,
> however, these are beyond the scope of this textbook.
> We have this note in the book to make the reader aware that `%>%` exists
> as it is still commonly used in data analysis code and in many data science
> books and other resources.
> In most cases these two pipes are interchangeable and either can be used.
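> For example, the placeholder pipeline shown earlier in this section could be
> written equivalently with `%>%` (assuming the `tidyverse`, which imports
> `%>%`, has been loaded):
>
>
>
> ```
> output <- data %>%
>   mutate(new_col = old_col * 2) %>%
>   filter(other_col > 5) %>%
>   select(new_col)
> ```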
### 3\.8\.1 Using `|>` to combine `filter` and `select`
Let’s work with the tidy `tidy_lang` data set from Section [3\.4\.3](wrangling.html#separate),
which contains the number of Canadians reporting their primary language at home
and work for five major cities
(Toronto, Montréal, Vancouver, Calgary, and Edmonton):
```
tidy_lang
```
```
## # A tibble: 1,070 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Aboriginal languages Aborigi… Toron… 50 0
## 2 Aboriginal languages Aborigi… Montr… 15 0
## 3 Aboriginal languages Aborigi… Vanco… 15 0
## 4 Aboriginal languages Aborigi… Calga… 5 0
## 5 Aboriginal languages Aborigi… Edmon… 10 0
## 6 Non-Official & Non-Aboriginal lang… Afrikaa… Toron… 265 0
## 7 Non-Official & Non-Aboriginal lang… Afrikaa… Montr… 10 0
## 8 Non-Official & Non-Aboriginal lang… Afrikaa… Vanco… 520 10
## 9 Non-Official & Non-Aboriginal lang… Afrikaa… Calga… 505 15
## 10 Non-Official & Non-Aboriginal lang… Afrikaa… Edmon… 300 0
## # ℹ 1,060 more rows
```
Suppose we want to create a subset of the data with only the languages and
counts of each language spoken most at home for the city of Vancouver. To do
this, we can use the functions `filter` and `select`. First, we use `filter` to
create a data frame called `van_data` that contains only values for Vancouver.
```
van_data <- filter(tidy_lang, region == "Vancouver")
van_data
```
```
## # A tibble: 214 × 5
## category language region most_at_home most_at_work
## <chr> <chr> <chr> <int> <int>
## 1 Aboriginal languages Aborigi… Vanco… 15 0
## 2 Non-Official & Non-Aboriginal lang… Afrikaa… Vanco… 520 10
## 3 Non-Official & Non-Aboriginal lang… Afro-As… Vanco… 10 0
## 4 Non-Official & Non-Aboriginal lang… Akan (T… Vanco… 125 10
## 5 Non-Official & Non-Aboriginal lang… Albanian Vanco… 530 10
## 6 Aboriginal languages Algonqu… Vanco… 0 0
## 7 Aboriginal languages Algonqu… Vanco… 0 0
## 8 Non-Official & Non-Aboriginal lang… America… Vanco… 300 140
## 9 Non-Official & Non-Aboriginal lang… Amharic Vanco… 540 10
## 10 Non-Official & Non-Aboriginal lang… Arabic Vanco… 8680 275
## # ℹ 204 more rows
```
We then use `select` on this data frame to keep only the variables we want:
```
van_data_selected <- select(van_data, language, most_at_home)
van_data_selected
```
```
## # A tibble: 214 × 2
## language most_at_home
## <chr> <int>
## 1 Aboriginal languages, n.o.s. 15
## 2 Afrikaans 520
## 3 Afro-Asiatic languages, n.i.e. 10
## 4 Akan (Twi) 125
## 5 Albanian 530
## 6 Algonquian languages, n.i.e. 0
## 7 Algonquin 0
## 8 American Sign Language 300
## 9 Amharic 540
## 10 Arabic 8680
## # ℹ 204 more rows
```
Although this is valid code, there is a more readable approach we could take by
using the pipe, `|>`. With the pipe, we do not need to create an intermediate
object to store the output from `filter`. Instead, we can directly send the
output of `filter` to the input of `select`:
```
van_data_selected <- filter(tidy_lang, region == "Vancouver") |>
select(language, most_at_home)
van_data_selected
```
```
## # A tibble: 214 × 2
## language most_at_home
## <chr> <int>
## 1 Aboriginal languages, n.o.s. 15
## 2 Afrikaans 520
## 3 Afro-Asiatic languages, n.i.e. 10
## 4 Akan (Twi) 125
## 5 Albanian 530
## 6 Algonquian languages, n.i.e. 0
## 7 Algonquin 0
## 8 American Sign Language 300
## 9 Amharic 540
## 10 Arabic 8680
## # ℹ 204 more rows
```
But wait… Why do the `select` function calls
look different in these two examples?
Remember: when you use the pipe,
the output of the first function is automatically provided
as the first argument for the function that comes after it.
Therefore you do not specify the first argument in that function call.
In the code above,
the pipe passes the left\-hand side (the output of `filter`) to the first argument of the function on the right (`select`),
so in the `select` function you only see the second argument (and beyond).
As you can see, both of these approaches—with and without pipes—give us the same output, but the second
approach is clearer and more readable.
### 3\.8\.2 Using `|>` with more than two functions
The pipe operator (`|>`) can be used with any function in R.
Additionally, we can pipe together more than two functions.
For example, we can pipe together three functions to:
* `filter` rows to include only those where the counts of the language most spoken at home are greater than 10,000,
* `select` only the columns corresponding to `region`, `language` and `most_at_home`, and
* `arrange` the data frame rows in order by counts of the language most spoken at home
from smallest to largest.
As we saw in Chapter [1](intro.html#intro),
we can use the `tidyverse` `arrange` function
to order the rows in the data frame by the values of one or more columns.
Here we pass the column name `most_at_home` to arrange the data frame rows by the values in that column, in ascending order.
```
large_region_lang <- filter(tidy_lang, most_at_home > 10000) |>
select(region, language, most_at_home) |>
arrange(most_at_home)
large_region_lang
```
```
## # A tibble: 67 × 3
## region language most_at_home
## <chr> <chr> <int>
## 1 Edmonton Arabic 10590
## 2 Montréal Tamil 10670
## 3 Vancouver Russian 10795
## 4 Edmonton Spanish 10880
## 5 Edmonton French 10950
## 6 Calgary Arabic 11010
## 7 Calgary Urdu 11060
## 8 Vancouver Hindi 11235
## 9 Montréal Armenian 11835
## 10 Toronto Romanian 12200
## # ℹ 57 more rows
```
You will notice above that we passed `tidy_lang` as the first argument of the `filter` function.
We can also pipe the data frame into the same sequence of functions rather than
using it as the first argument of the first function. These two choices are equivalent,
and we get the same result.
```
large_region_lang <- tidy_lang |>
filter(most_at_home > 10000) |>
select(region, language, most_at_home) |>
arrange(most_at_home)
large_region_lang
```
```
## # A tibble: 67 × 3
## region language most_at_home
## <chr> <chr> <int>
## 1 Edmonton Arabic 10590
## 2 Montréal Tamil 10670
## 3 Vancouver Russian 10795
## 4 Edmonton Spanish 10880
## 5 Edmonton French 10950
## 6 Calgary Arabic 11010
## 7 Calgary Urdu 11060
## 8 Vancouver Hindi 11235
## 9 Montréal Armenian 11835
## 10 Toronto Romanian 12200
## # ℹ 57 more rows
```
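As an aside, if we wanted the rows ordered from the largest counts to the
smallest instead, we could wrap the column name in the `desc` function from
`dplyr` inside `arrange`:
```
tidy_lang |>
  filter(most_at_home > 10000) |>
  select(region, language, most_at_home) |>
  arrange(desc(most_at_home))
```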
Now that we’ve shown you the pipe operator as an alternative to storing
temporary objects and composing code, does this mean you should *never* store
temporary objects or compose code? Not necessarily!
There are times when you will still want to do these things.
For example, you might store a temporary object before feeding it into a plot function
so you can iteratively change the plot without having to
redo all of your data transformations.
Additionally, piping many functions can be overwhelming and difficult to debug;
you may want to store a temporary object midway through to inspect your result
before moving on with further steps.
3\.9 Aggregating data with `summarize` and `map`
------------------------------------------------
### 3\.9\.1 Calculating summary statistics on whole columns
As a part of many data analyses, we need to calculate a summary value for the
data (a *summary statistic*).
Examples of summary statistics we might want to calculate
are the number of observations, the average/mean value for a column,
the minimum value, etc.
Oftentimes,
this summary statistic is calculated from the values in a data frame column,
or columns, as shown in Figure [3\.15](wrangling.html#fig:summarize).
Figure 3\.15: `summarize` is useful for calculating summary statistics on one or more column(s). In its simplest use case, it creates a new data frame with a single row containing the summary statistic(s) for each column being summarized. The darker, top row of each table represents the column headers.
A useful `dplyr` function for calculating summary statistics is `summarize`,
where the first argument is the data frame and subsequent arguments
are the summaries we want to perform.
Here we show how to use the `summarize` function to calculate the minimum
and maximum number of Canadians
reporting a particular language as their primary language at home.
First a reminder of what `region_lang` looks like:
```
region_lang
```
```
## # A tibble: 7,490 × 7
## region category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 St. Joh… Aborigi… Aborigi… 5 0 0 0
## 2 Halifax Aborigi… Aborigi… 5 0 0 0
## 3 Moncton Aborigi… Aborigi… 0 0 0 0
## 4 Saint J… Aborigi… Aborigi… 0 0 0 0
## 5 Saguenay Aborigi… Aborigi… 5 5 0 0
## 6 Québec Aborigi… Aborigi… 0 5 0 20
## 7 Sherbro… Aborigi… Aborigi… 0 0 0 0
## 8 Trois-R… Aborigi… Aborigi… 0 0 0 0
## 9 Montréal Aborigi… Aborigi… 30 15 0 10
## 10 Kingston Aborigi… Aborigi… 0 0 0 0
## # ℹ 7,480 more rows
```
We apply `summarize` to calculate the minimum
and maximum number of Canadians
reporting a particular language as their primary language at home,
for any region:
```
summarize(region_lang,
min_most_at_home = min(most_at_home),
max_most_at_home = max(most_at_home))
```
```
## # A tibble: 1 × 2
## min_most_at_home max_most_at_home
## <dbl> <dbl>
## 1 0 3836770
```
From this we see that there are some languages in the data set that no one speaks
as their primary language at home. We also see that the most commonly spoken
primary language at home is spoken by
3,836,770
people.
### 3\.9\.2 Calculating summary statistics when there are `NA`s
In data frames in R, the value `NA` is often used to denote missing data.
Many of the base R statistical summary functions
(e.g., `max`, `min`, `mean`, `sum`, etc) will return `NA`
when applied to columns containing `NA` values.
Usually that is not what we want to happen;
instead, we would usually like R to ignore the missing entries
and calculate the summary statistic using all of the other non\-`NA` values
in the column.
Fortunately many of these functions provide an argument `na.rm` that lets
us tell the function what to do when it encounters `NA` values.
In particular, if we specify `na.rm = TRUE`, the function will ignore
missing values and return a summary of all the non\-missing entries.
We show an example of this combined with `summarize` below.
First we create a new version of the `region_lang` data frame,
named `region_lang_na`, that has a seemingly innocuous `NA`
in the first row of the `most_at_home` column:
```
region_lang_na
```
```
## # A tibble: 7,490 × 7
## region category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 St. Joh… Aborigi… Aborigi… 5 NA 0 0
## 2 Halifax Aborigi… Aborigi… 5 0 0 0
## 3 Moncton Aborigi… Aborigi… 0 0 0 0
## 4 Saint J… Aborigi… Aborigi… 0 0 0 0
## 5 Saguenay Aborigi… Aborigi… 5 5 0 0
## 6 Québec Aborigi… Aborigi… 0 5 0 20
## 7 Sherbro… Aborigi… Aborigi… 0 0 0 0
## 8 Trois-R… Aborigi… Aborigi… 0 0 0 0
## 9 Montréal Aborigi… Aborigi… 30 15 0 10
## 10 Kingston Aborigi… Aborigi… 0 0 0 0
## # ℹ 7,480 more rows
```
Now if we apply the `summarize` function as above,
we see that we no longer get the minimum and maximum returned,
but just an `NA` instead!
```
summarize(region_lang_na,
min_most_at_home = min(most_at_home),
max_most_at_home = max(most_at_home))
```
```
## # A tibble: 1 × 2
## min_most_at_home max_most_at_home
## <dbl> <dbl>
## 1 NA NA
```
We can fix this by adding the `na.rm = TRUE` as explained above:
```
summarize(region_lang_na,
min_most_at_home = min(most_at_home, na.rm = TRUE),
max_most_at_home = max(most_at_home, na.rm = TRUE))
```
```
## # A tibble: 1 × 2
## min_most_at_home max_most_at_home
## <dbl> <dbl>
## 1 0 3836770
```
### 3\.9\.3 Calculating summary statistics for groups of rows
A common pairing with `summarize` is `group_by`. Pairing these functions
together can let you summarize values for subgroups within a data set,
as illustrated in Figure [3\.16](wrangling.html#fig:summarize-groupby).
For example, we can use `group_by` to group the regions of the `region_lang` data frame and then calculate the minimum and maximum number of Canadians
reporting the language as the primary language at home
for each of the regions in the data set.
Figure 3\.16: `summarize` and `group_by` is useful for calculating summary statistics on one or more column(s) for each group. It creates a new data frame—with one row for each group—containing the summary statistic(s) for each column being summarized. It also creates a column listing the value of the grouping variable. The darker, top row of each table represents the column headers. The orange, blue, and green colored rows correspond to the rows that belong to each of the three groups being represented in this cartoon example.
The `group_by` function takes at least two arguments. The first is the data
frame that will be grouped, and the second and onwards are columns to use in the
grouping. Here we use only one column for grouping (`region`), but more than one
can also be used. To do this, list additional columns separated by commas.
```
group_by(region_lang, region) |>
summarize(
min_most_at_home = min(most_at_home),
max_most_at_home = max(most_at_home)
)
```
```
## # A tibble: 35 × 3
## region min_most_at_home max_most_at_home
## <chr> <dbl> <dbl>
## 1 Abbotsford - Mission 0 137445
## 2 Barrie 0 182390
## 3 Belleville 0 97840
## 4 Brantford 0 124560
## 5 Calgary 0 1065070
## 6 Edmonton 0 1050410
## 7 Greater Sudbury 0 133960
## 8 Guelph 0 130950
## 9 Halifax 0 371215
## 10 Hamilton 0 630380
## # ℹ 25 more rows
```
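More than one grouping column can be used as well. For example, to get
separate minimum and maximum values for each combination of region and
language category, we could group by both `region` and `category`
(output not shown; the result would have one row per combination):
```
group_by(region_lang, region, category) |>
  summarize(
    min_most_at_home = min(most_at_home),
    max_most_at_home = max(most_at_home)
  )
```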
Notice that `group_by` on its own doesn’t change the way the data looks.
In the output below, the grouped data set looks the same,
and it doesn’t *appear* to be grouped by `region`.
Instead, `group_by` simply changes how other functions work with the data,
as we saw with `summarize` above.
```
group_by(region_lang, region)
```
```
## # A tibble: 7,490 × 7
## # Groups: region [35]
## region category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 St. Joh… Aborigi… Aborigi… 5 0 0 0
## 2 Halifax Aborigi… Aborigi… 5 0 0 0
## 3 Moncton Aborigi… Aborigi… 0 0 0 0
## 4 Saint J… Aborigi… Aborigi… 0 0 0 0
## 5 Saguenay Aborigi… Aborigi… 5 5 0 0
## 6 Québec Aborigi… Aborigi… 0 5 0 20
## 7 Sherbro… Aborigi… Aborigi… 0 0 0 0
## 8 Trois-R… Aborigi… Aborigi… 0 0 0 0
## 9 Montréal Aborigi… Aborigi… 30 15 0 10
## 10 Kingston Aborigi… Aborigi… 0 0 0 0
## # ℹ 7,480 more rows
```
### 3\.9\.4 Calculating summary statistics on many columns
Sometimes we need to summarize statistics across many columns.
An example of this is illustrated in Figure [3\.17](wrangling.html#fig:summarize-across).
In such a case, using `summarize` alone means that we have to
type out the name of each column we want to summarize.
In this section we will meet two strategies for performing this task.
First we will see how we can do this using `summarize` \+ `across`.
Then we will also explore how we can use a more general iteration function,
`map`, to also accomplish this.
Figure 3\.17: `summarize` \+ `across` or `map` is useful for efficiently calculating summary statistics on many columns at once. The darker, top row of each table represents the column headers.
#### `summarize` and `across` for calculating summary statistics on many columns
To summarize statistics across many columns, we can use the
`summarize` function we have just recently learned about.
However, in such a case, using `summarize` alone means that we have to
type out the name of each column we want to summarize.
To do this more efficiently, we can pair `summarize` with `across`
and use a colon `:` to specify a range of columns we would like
to perform the statistical summaries on.
Here we demonstrate finding the maximum value
of each of the numeric
columns of the `region_lang` data set.
```
region_lang |>
summarize(across(mother_tongue:lang_known, max))
```
```
## # A tibble: 1 × 4
## mother_tongue most_at_home most_at_work lang_known
## <dbl> <dbl> <dbl> <dbl>
## 1 3061820 3836770 3218725 5600480
```
> **Note:** Similar to when we use base R statistical summary functions
> (e.g., `max`, `min`, `mean`, `sum`, etc) with `summarize` alone,
> the use of the `summarize` \+ `across` functions paired
> with base R statistical summary functions
> also return `NA`s when we apply them to columns that
> contain `NA`s in the data frame.
>
>
> To resolve this issue, again we need to add the argument `na.rm = TRUE`.
> But in this case we need to use it a little bit differently:
> we write a `~`, and then call the summary function
> with the first argument `.x` and the second argument `na.rm = TRUE`.
> For example, for the previous example with the `max` function, we would write
>
>
>
> ```
> region_lang_na |>
> summarize(across(mother_tongue:lang_known, ~ max(.x, na.rm = TRUE)))
> ```
>
>
> ```
> ## # A tibble: 1 × 4
> ## mother_tongue most_at_home most_at_work lang_known
> ## <dbl> <dbl> <dbl> <dbl>
> ## 1 3061820 3836770 3218725 5600480
> ```
>
> The meaning of this unusual syntax is a bit beyond the scope of this book,
> but interested readers can look up *anonymous functions* in the `purrr`
> package from `tidyverse`.
#### `map` for calculating summary statistics on many columns
An alternative to `summarize` and `across`
for applying a function to many columns is the `map` family of functions.
Let’s again find the maximum value of each column of the
`region_lang` data frame, but using `map` with the `max` function this time.
`map` takes two arguments:
an object (a vector, data frame or list) that you want to apply the function to,
and the function that you would like to apply to each column.
Note that `map` does not have an argument
to specify *which* columns to apply the function to.
Therefore, we will use the `select` function before calling `map`
to choose the columns for which we want the maximum.
```
region_lang |>
select(mother_tongue:lang_known) |>
map(max)
```
```
## $mother_tongue
## [1] 3061820
##
## $most_at_home
## [1] 3836770
##
## $most_at_work
## [1] 3218725
##
## $lang_known
## [1] 5600480
```
> **Note:** The `map` function comes from the `purrr` package. But since
> `purrr` is part of the tidyverse, once we call `library(tidyverse)` we
> do not need to load the `purrr` package separately.
The output looks a bit weird… we passed in a data frame, but the output
doesn’t look like a data frame. As it so happens, it is *not* a data frame, but
rather a plain list:
```
region_lang |>
select(mother_tongue:lang_known) |>
map(max) |>
typeof()
```
```
## [1] "list"
```
So what do we do? Should we convert this to a data frame? We could, but a
simpler alternative is to just use a different `map` function. There
are quite a few to choose from, they all work similarly, but
their name reflects the type of output you want from the mapping operation.
Table [3\.3](wrangling.html#tab:map-table) lists the commonly used `map` functions as well
as their output type.
Table 3\.3: The `map` functions in R.
| `map` function | Output |
| --- | --- |
| `map` | list |
| `map_lgl` | logical vector |
| `map_int` | integer vector |
| `map_dbl` | double vector |
| `map_chr` | character vector |
| `map_dfc` | data frame, combining column\-wise |
| `map_dfr` | data frame, combining row\-wise |
Let’s get the columns’ maximums again, but this time use the `map_dfr` function
to return the output as a data frame:
```
region_lang |>
select(mother_tongue:lang_known) |>
map_dfr(max)
```
```
## # A tibble: 1 × 4
## mother_tongue most_at_home most_at_work lang_known
## <dbl> <dbl> <dbl> <dbl>
## 1 3061820 3836770 3218725 5600480
```
> **Note:** Similar to when we use base R statistical summary functions
> (e.g., `max`, `min`, `mean`, `sum`, etc.) with `summarize`,
> `map` functions paired with base R statistical summary functions
> also return `NA` values when we apply them to columns that
> contain `NA` values.
>
>
> To avoid this, again we need to add the argument `na.rm = TRUE`.
> When we use this with `map`, we do this by adding a `,`
> and then `na.rm = TRUE` after specifying the function, as illustrated below:
>
>
>
> ```
> region_lang_na |>
> select(mother_tongue:lang_known) |>
> map_dfr(max, na.rm = TRUE)
> ```
>
>
> ```
> ## # A tibble: 1 × 4
> ## mother_tongue most_at_home most_at_work lang_known
> ## <dbl> <dbl> <dbl> <dbl>
> ## 1 3061820 3836770 3218725 5600480
> ```
The `map` functions are generally quite useful for solving many problems
involving repeatedly applying functions in R.
Additionally, their use is not limited to columns of a data frame;
`map` family functions can be used to apply functions to elements of a vector,
or a list, and even to lists of (nested!) data frames.
To learn more about the `map` functions, see the additional resources
section at the end of this chapter.
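For example, here is a small sketch of applying `map_dbl` directly to a
made\-up list of numeric vectors rather than to data frame columns:
```
# a toy list, not part of the census data
toy_list <- list(a = 1:3, b = 4:6)
# apply mean to each element; returns a named double vector (a = 2, b = 5)
map_dbl(toy_list, mean)
```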
### 3\.9\.1 Calculating summary statistics on whole columns
As a part of many data analyses, we need to calculate a summary value for the
data (a *summary statistic*).
Examples of summary statistics we might want to calculate
are the number of observations, the average/mean value for a column,
the minimum value, etc.
Oftentimes,
this summary statistic is calculated from the values in a data frame column,
or columns, as shown in Figure [3\.15](wrangling.html#fig:summarize).
Figure 3\.15: `summarize` is useful for calculating summary statistics on one or more column(s). In its simplest use case, it creates a new data frame with a single row containing the summary statistic(s) for each column being summarized. The darker, top row of each table represents the column headers.
A useful `dplyr` function for calculating summary statistics is `summarize`,
where the first argument is the data frame and subsequent arguments
are the summaries we want to perform.
Here we show how to use the `summarize` function to calculate the minimum
and maximum number of Canadians
reporting a particular language as their primary language at home.
First a reminder of what `region_lang` looks like:
```
region_lang
```
```
## # A tibble: 7,490 × 7
## region category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 St. Joh… Aborigi… Aborigi… 5 0 0 0
## 2 Halifax Aborigi… Aborigi… 5 0 0 0
## 3 Moncton Aborigi… Aborigi… 0 0 0 0
## 4 Saint J… Aborigi… Aborigi… 0 0 0 0
## 5 Saguenay Aborigi… Aborigi… 5 5 0 0
## 6 Québec Aborigi… Aborigi… 0 5 0 20
## 7 Sherbro… Aborigi… Aborigi… 0 0 0 0
## 8 Trois-R… Aborigi… Aborigi… 0 0 0 0
## 9 Montréal Aborigi… Aborigi… 30 15 0 10
## 10 Kingston Aborigi… Aborigi… 0 0 0 0
## # ℹ 7,480 more rows
```
We apply `summarize` to calculate the minimum
and maximum number of Canadians
reporting a particular language as their primary language at home,
for any region:
```
summarize(region_lang,
min_most_at_home = min(most_at_home),
max_most_at_home = max(most_at_home))
```
```
## # A tibble: 1 × 2
## min_most_at_home max_most_at_home
## <dbl> <dbl>
## 1 0 3836770
```
From this we see that there are some languages in the data set that no one speaks
as their primary language at home. We also see that the most commonly spoken
primary language at home is spoken by
3,836,770
people.
### 3\.9\.2 Calculating summary statistics when there are `NA`s
In data frames in R, the value `NA` is often used to denote missing data.
Many of the base R statistical summary functions
(e.g., `max`, `min`, `mean`, `sum`, etc) will return `NA`
when applied to columns containing `NA` values.
Usually that is not what we want to happen;
instead, we would usually like R to ignore the missing entries
and calculate the summary statistic using all of the other non\-`NA` values
in the column.
Fortunately many of these functions provide an argument `na.rm` that lets
us tell the function what to do when it encounters `NA` values.
In particular, if we specify `na.rm = TRUE`, the function will ignore
missing values and return a summary of all the non\-missing entries.
We show an example of this combined with `summarize` below.
First we create a new version of the `region_lang` data frame,
named `region_lang_na`, that has a seemingly innocuous `NA`
in the first row of the `most_at_home column`:
```
region_lang_na
```
```
## # A tibble: 7,490 × 7
## region category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 St. Joh… Aborigi… Aborigi… 5 NA 0 0
## 2 Halifax Aborigi… Aborigi… 5 0 0 0
## 3 Moncton Aborigi… Aborigi… 0 0 0 0
## 4 Saint J… Aborigi… Aborigi… 0 0 0 0
## 5 Saguenay Aborigi… Aborigi… 5 5 0 0
## 6 Québec Aborigi… Aborigi… 0 5 0 20
## 7 Sherbro… Aborigi… Aborigi… 0 0 0 0
## 8 Trois-R… Aborigi… Aborigi… 0 0 0 0
## 9 Montréal Aborigi… Aborigi… 30 15 0 10
## 10 Kingston Aborigi… Aborigi… 0 0 0 0
## # ℹ 7,480 more rows
```
Now if we apply the `summarize` function as above,
we see that we no longer get the minimum and maximum returned,
but just an `NA` instead!
```
summarize(region_lang_na,
min_most_at_home = min(most_at_home),
max_most_at_home = max(most_at_home))
```
```
## # A tibble: 1 × 2
## min_most_at_home max_most_at_home
## <dbl> <dbl>
## 1 NA NA
```
We can fix this by adding the `na.rm = TRUE` as explained above:
```
summarize(region_lang_na,
min_most_at_home = min(most_at_home, na.rm = TRUE),
max_most_at_home = max(most_at_home, na.rm = TRUE))
```
```
## # A tibble: 1 × 2
## min_most_at_home max_most_at_home
## <dbl> <dbl>
## 1 0 3836770
```
### 3\.9\.3 Calculating summary statistics for groups of rows
A common pairing with `summarize` is `group_by`. Pairing these functions
together can let you summarize values for subgroups within a data set,
as illustrated in Figure [3\.16](wrangling.html#fig:summarize-groupby).
For example, we can use `group_by` to group the regions of the `region_lang` data frame and then calculate the minimum and maximum number of Canadians
reporting the language as the primary language at home
for each of the regions in the data set.
Figure 3\.16: `summarize` and `group_by` is useful for calculating summary statistics on one or more column(s) for each group. It creates a new data frame—with one row for each group—containing the summary statistic(s) for each column being summarized. It also creates a column listing the value of the grouping variable. The darker, top row of each table represents the column headers. The orange, blue, and green colored rows correspond to the rows that belong to each of the three groups being represented in this cartoon example.
The `group_by` function takes at least two arguments. The first is the data
frame that will be grouped, and the second and onwards are columns to use in the
grouping. Here we use only one column for grouping (`region`), but more than one
can also be used. To do this, list additional columns separated by commas.
```
group_by(region_lang, region) |>
summarize(
min_most_at_home = min(most_at_home),
max_most_at_home = max(most_at_home)
)
```
```
## # A tibble: 35 × 3
## region min_most_at_home max_most_at_home
## <chr> <dbl> <dbl>
## 1 Abbotsford - Mission 0 137445
## 2 Barrie 0 182390
## 3 Belleville 0 97840
## 4 Brantford 0 124560
## 5 Calgary 0 1065070
## 6 Edmonton 0 1050410
## 7 Greater Sudbury 0 133960
## 8 Guelph 0 130950
## 9 Halifax 0 371215
## 10 Hamilton 0 630380
## # ℹ 25 more rows
```
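If we wanted to group by more than one column, we would simply list the
additional columns in `group_by`. For example (a sketch; output not shown),
we could group by both `region` and `category` before summarizing:

```
group_by(region_lang, region, category) |>
summarize(
min_most_at_home = min(most_at_home),
max_most_at_home = max(most_at_home)
)
```

This would produce one row for each combination of region and language
category that appears in the data.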
Notice that `group_by` on its own doesn’t change the way the data looks.
In the output below, the grouped data set looks the same,
and it doesn’t *appear* to be grouped by `region`.
Instead, `group_by` simply changes how other functions work with the data,
as we saw with `summarize` above.
```
group_by(region_lang, region)
```
```
## # A tibble: 7,490 × 7
## # Groups: region [35]
## region category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 St. Joh… Aborigi… Aborigi… 5 0 0 0
## 2 Halifax Aborigi… Aborigi… 5 0 0 0
## 3 Moncton Aborigi… Aborigi… 0 0 0 0
## 4 Saint J… Aborigi… Aborigi… 0 0 0 0
## 5 Saguenay Aborigi… Aborigi… 5 5 0 0
## 6 Québec Aborigi… Aborigi… 0 5 0 20
## 7 Sherbro… Aborigi… Aborigi… 0 0 0 0
## 8 Trois-R… Aborigi… Aborigi… 0 0 0 0
## 9 Montréal Aborigi… Aborigi… 30 15 0 10
## 10 Kingston Aborigi… Aborigi… 0 0 0 0
## # ℹ 7,480 more rows
```
### 3\.9\.4 Calculating summary statistics on many columns
Sometimes we need to summarize statistics across many columns.
An example of this is illustrated in Figure [3\.17](wrangling.html#fig:summarize-across).
In such a case, using `summarize` alone means that we have to
type out the name of each column we want to summarize.
In this section we will meet two strategies for performing this task.
First we will see how we can do this using `summarize` \+ `across`.
Then we will also explore how we can use a more general iteration function,
`map`, to also accomplish this.
Figure 3\.17: `summarize` \+ `across` or `map` is useful for efficiently calculating summary statistics on many columns at once. The darker, top row of each table represents the column headers.
#### `summarize` and `across` for calculating summary statistics on many columns
To summarize statistics across many columns, we can use the
`summarize` function we have just recently learned about.
However, in such a case, using `summarize` alone means that we have to
type out the name of each column we want to summarize.
To do this more efficiently, we can pair `summarize` with `across`
and use a colon `:` to specify a range of columns we would like
to perform the statistical summaries on.
Here we demonstrate finding the maximum value
of each of the numeric
columns of the `region_lang` data set.
```
region_lang |>
summarize(across(mother_tongue:lang_known, max))
```
```
## # A tibble: 1 × 4
## mother_tongue most_at_home most_at_work lang_known
## <dbl> <dbl> <dbl> <dbl>
## 1 3061820 3836770 3218725 5600480
```
> **Note:** Similar to when we use base R statistical summary functions
> (e.g., `max`, `min`, `mean`, `sum`, etc) with `summarize` alone,
> the use of the `summarize` \+ `across` functions paired
> with base R statistical summary functions
> also return `NA`s when we apply them to columns that
> contain `NA`s in the data frame.
>
>
> To resolve this issue, again we need to add the argument `na.rm = TRUE`.
> But in this case we need to use it a little bit differently:
> we write a `~`, and then call the summary function
> with the first argument `.x` and the second argument `na.rm = TRUE`.
> For example, for the previous example with the `max` function, we would write
>
>
>
> ```
> region_lang_na |>
> summarize(across(mother_tongue:lang_known, ~ max(.x, na.rm = TRUE)))
> ```
>
>
> ```
> ## # A tibble: 1 × 4
> ## mother_tongue most_at_home most_at_work lang_known
> ## <dbl> <dbl> <dbl> <dbl>
> ## 1 3061820 3836770 3218725 5600480
> ```
>
> The meaning of this unusual syntax is a bit beyond the scope of this book,
> but interested readers can look up *anonymous functions* in the `purrr`
> package from `tidyverse`.
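> As a small illustration (a sketch), the `~ max(.x, na.rm = TRUE)` shorthand
> is roughly equivalent to writing out the anonymous function in full:
>
>
> ```
> region_lang_na |>
> summarize(across(mother_tongue:lang_known,
> function(x) max(x, na.rm = TRUE)))
> ```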
#### `map` for calculating summary statistics on many columns
An alternative to `summarize` and `across`
for applying a function to many columns is the `map` family of functions.
Let’s again find the maximum value of each column of the
`region_lang` data frame, but using `map` with the `max` function this time.
`map` takes two arguments:
an object (a vector, data frame or list) that you want to apply the function to,
and the function that you would like to apply to each column.
Note that `map` does not have an argument
to specify *which* columns to apply the function to.
Therefore, we will use the `select` function before calling `map`
to choose the columns for which we want the maximum.
```
region_lang |>
select(mother_tongue:lang_known) |>
map(max)
```
```
## $mother_tongue
## [1] 3061820
##
## $most_at_home
## [1] 3836770
##
## $most_at_work
## [1] 3218725
##
## $lang_known
## [1] 5600480
```
> **Note:** The `map` function comes from the `purrr` package. But since
> `purrr` is part of the tidyverse, once we call `library(tidyverse)` we
> do not need to load the `purrr` package separately.
The output looks a bit weird… we passed in a data frame, but the output
doesn’t look like a data frame. As it so happens, it is *not* a data frame, but
rather a plain list:
```
region_lang |>
select(mother_tongue:lang_known) |>
map(max) |>
typeof()
```
```
## [1] "list"
```
So what do we do? Should we convert this to a data frame? We could, but a
simpler alternative is to just use a different `map` function. There
are quite a few to choose from; they all work similarly, but
their name reflects the type of output you want from the mapping operation.
Table [3\.3](wrangling.html#tab:map-table) lists the commonly used `map` functions as well
as their output type.
Table 3\.3: The `map` functions in R.
| `map` function | Output |
| --- | --- |
| `map` | list |
| `map_lgl` | logical vector |
| `map_int` | integer vector |
| `map_dbl` | double vector |
| `map_chr` | character vector |
| `map_dfc` | data frame, combining column\-wise |
| `map_dfr` | data frame, combining row\-wise |
Let’s get the columns’ maximums again, but this time use the `map_dfr` function
to return the output as a data frame:
```
region_lang |>
select(mother_tongue:lang_known) |>
map_dfr(max)
```
```
## # A tibble: 1 × 4
## mother_tongue most_at_home most_at_work lang_known
## <dbl> <dbl> <dbl> <dbl>
## 1 3061820 3836770 3218725 5600480
```
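Since each column's maximum here is a single number, another reasonable choice
from Table [3\.3](wrangling.html#tab:map-table) would be `map_dbl`, which
returns a named double vector rather than a data frame (a sketch):

```
region_lang |>
select(mother_tongue:lang_known) |>
map_dbl(max)
```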
> **Note:** Similar to when we use base R statistical summary functions
> (e.g., `max`, `min`, `mean`, `sum`, etc.) with `summarize`,
> `map` functions paired with base R statistical summary functions
> also return `NA` values when we apply them to columns that
> contain `NA` values.
>
>
> To avoid this, again we need to add the argument `na.rm = TRUE`.
> When we use this with `map`, we do this by adding a `,`
> and then `na.rm = TRUE` after specifying the function, as illustrated below:
>
>
>
> ```
> region_lang_na |>
> select(mother_tongue:lang_known) |>
> map_dfr(max, na.rm = TRUE)
> ```
>
>
> ```
> ## # A tibble: 1 × 4
> ## mother_tongue most_at_home most_at_work lang_known
> ## <dbl> <dbl> <dbl> <dbl>
> ## 1 3061820 3836770 3218725 5600480
> ```
The `map` functions are generally quite useful for solving many problems
involving repeatedly applying functions in R.
Additionally, their use is not limited to columns of a data frame;
`map` family functions can be used to apply functions to elements of a vector,
or a list, and even to lists of (nested!) data frames.
To learn more about the `map` functions, see the additional resources
section at the end of this chapter.
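For instance, here is a small sketch (with a made\-up list, purely for
illustration) of using `map_dbl` on a plain list rather than on the columns of
a data frame:

```
# a made-up list, purely for illustration
scores <- list(math = c(80, 90, 70), english = c(65, 85))
map_dbl(scores, mean) # a named double vector: math = 80, english = 75
```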
3\.10 Apply functions across many columns with `mutate` and `across`
--------------------------------------------------------------------
Sometimes we need to apply a function to many columns in a data frame.
For example, we would need to do this when converting units of measurements across many columns.
We illustrate such a data transformation in Figure [3\.18](wrangling.html#fig:mutate-across).
Figure 3\.18: `mutate` and `across` is useful for applying functions across many columns. The darker, top row of each table represents the column headers.
For example,
imagine that we wanted to convert all the numeric columns
in the `region_lang` data frame from double type to integer type
using the `as.integer` function.
When we revisit the `region_lang` data frame,
we can see that this would be the columns from `mother_tongue` to `lang_known`.
```
region_lang
```
```
## # A tibble: 7,490 × 7
## region category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 St. Joh… Aborigi… Aborigi… 5 0 0 0
## 2 Halifax Aborigi… Aborigi… 5 0 0 0
## 3 Moncton Aborigi… Aborigi… 0 0 0 0
## 4 Saint J… Aborigi… Aborigi… 0 0 0 0
## 5 Saguenay Aborigi… Aborigi… 5 5 0 0
## 6 Québec Aborigi… Aborigi… 0 5 0 20
## 7 Sherbro… Aborigi… Aborigi… 0 0 0 0
## 8 Trois-R… Aborigi… Aborigi… 0 0 0 0
## 9 Montréal Aborigi… Aborigi… 30 15 0 10
## 10 Kingston Aborigi… Aborigi… 0 0 0 0
## # ℹ 7,480 more rows
```
To accomplish such a task, we can use `mutate` paired with `across`.
This works in a similar way for column selection,
as we saw when we used `summarize` \+ `across` earlier.
As we did above,
we again use `across` to specify the columns using `select` syntax
as well as the function we want to apply on the specified columns.
However, a key difference here is that we are using `mutate`,
which means that we get back a data frame with the same number of columns and rows.
The only thing that changes is the transformation we applied
to the specified columns (here `mother_tongue` to `lang_known`).
```
region_lang |>
mutate(across(mother_tongue:lang_known, as.integer))
```
```
## # A tibble: 7,490 × 7
## region category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <chr> <int> <int> <int> <int>
## 1 St. Joh… Aborigi… Aborigi… 5 0 0 0
## 2 Halifax Aborigi… Aborigi… 5 0 0 0
## 3 Moncton Aborigi… Aborigi… 0 0 0 0
## 4 Saint J… Aborigi… Aborigi… 0 0 0 0
## 5 Saguenay Aborigi… Aborigi… 5 5 0 0
## 6 Québec Aborigi… Aborigi… 0 5 0 20
## 7 Sherbro… Aborigi… Aborigi… 0 0 0 0
## 8 Trois-R… Aborigi… Aborigi… 0 0 0 0
## 9 Montréal Aborigi… Aborigi… 30 15 0 10
## 10 Kingston Aborigi… Aborigi… 0 0 0 0
## # ℹ 7,480 more rows
```
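The same `mutate` \+ `across` pattern also works with anonymous functions.
For example (a sketch with a made\-up transformation), we could rescale all of
the count columns to be in thousands of people:

```
region_lang |>
mutate(across(mother_tongue:lang_known, ~ .x / 1000))
```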
3\.11 Apply functions across columns within one row with `rowwise` and `mutate`
-------------------------------------------------------------------------------
What if you want to apply a function across columns but within one row?
We illustrate such a data transformation in Figure [3\.19](wrangling.html#fig:rowwise).
Figure 3\.19: `rowwise` and `mutate` is useful for applying functions across columns within one row. The darker, top row of each table represents the column headers.
For instance, suppose we want to know the maximum value between `mother_tongue`,
`most_at_home`, `most_at_work`
and `lang_known` for each language and region
in the `region_lang` data set.
In other words, we want to apply the `max` function *row\-wise.*
We will use the (aptly named) `rowwise` function in combination with `mutate`
to accomplish this task.
Before we apply `rowwise`, we will `select` only the count columns
so we can see all the columns in the data frame’s output easily in the book.
So for this demonstration, the data set we are operating on looks like this:
```
region_lang |>
select(mother_tongue:lang_known)
```
```
## # A tibble: 7,490 × 4
## mother_tongue most_at_home most_at_work lang_known
## <dbl> <dbl> <dbl> <dbl>
## 1 5 0 0 0
## 2 5 0 0 0
## 3 0 0 0 0
## 4 0 0 0 0
## 5 5 5 0 0
## 6 0 5 0 20
## 7 0 0 0 0
## 8 0 0 0 0
## 9 30 15 0 10
## 10 0 0 0 0
## # ℹ 7,480 more rows
```
Now we apply `rowwise` before `mutate`, to tell R that we would like
the mutate function to be applied across, and within, a row,
as opposed to being applied on a column
(which is the default behavior of `mutate`):
```
region_lang |>
select(mother_tongue:lang_known) |>
rowwise() |>
mutate(maximum = max(c(mother_tongue,
most_at_home,
most_at_work,
lang_known)))
```
```
## # A tibble: 7,490 × 5
## # Rowwise:
## mother_tongue most_at_home most_at_work lang_known maximum
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 5 0 0 0 5
## 2 5 0 0 0 5
## 3 0 0 0 0 0
## 4 0 0 0 0 0
## 5 5 5 0 0 5
## 6 0 5 0 20 20
## 7 0 0 0 0 0
## 8 0 0 0 0 0
## 9 30 15 0 10 30
## 10 0 0 0 0 0
## # ℹ 7,480 more rows
```
We see that we get an additional column added to the data frame,
named `maximum`, which is the maximum value between `mother_tongue`,
`most_at_home`, `most_at_work` and `lang_known` for each language
and region.
Similar to `group_by`,
`rowwise` doesn’t appear to do anything when it is called by itself.
However, we can apply `rowwise` in combination
with other functions to change how these other functions operate on the data.
Notice if we used `mutate` without `rowwise`,
we would have computed the maximum value across *all* rows
rather than the maximum value for *each* row.
Below we show what would have happened had we not used
`rowwise`. In particular, the same maximum value is reported
in every single row; this code does not provide the desired result.
```
region_lang |>
select(mother_tongue:lang_known) |>
mutate(maximum = max(c(mother_tongue,
most_at_home,
most_at_work,
lang_known)))
```
```
## # A tibble: 7,490 × 5
## mother_tongue most_at_home most_at_work lang_known maximum
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 5 0 0 0 5600480
## 2 5 0 0 0 5600480
## 3 0 0 0 0 5600480
## 4 0 0 0 0 5600480
## 5 5 5 0 0 5600480
## 6 0 5 0 20 5600480
## 7 0 0 0 0 5600480
## 8 0 0 0 0 5600480
## 9 30 15 0 10 5600480
## 10 0 0 0 0 5600480
## # ℹ 7,480 more rows
```
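As an aside, for a simple row\-wise maximum like this one, there is also a
vectorized alternative that avoids `rowwise` entirely: the base R `pmax`
function, which computes element\-wise ("parallel") maxima of its arguments
(a sketch):

```
region_lang |>
select(mother_tongue:lang_known) |>
mutate(maximum = pmax(mother_tongue, most_at_home,
most_at_work, lang_known))
```

That said, `rowwise` is the more general tool, since it works with any summary
function, not just those that happen to have a vectorized counterpart.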
3\.12 Summary
-------------
Cleaning and wrangling data can be a very time\-consuming process. However,
it is a critical step in any data analysis. We have explored many different
functions for cleaning and wrangling data into a tidy format.
Table [3\.4](wrangling.html#tab:summary-functions-table) summarizes some of the key wrangling
functions we learned in this chapter. In the following chapters, you will
learn how you can take this tidy data and do so much more with it to answer your
burning data science questions!
Table 3\.4: Summary of wrangling functions
| Function | Description |
| --- | --- |
| `across` | allows you to apply function(s) to multiple columns |
| `filter` | subsets rows of a data frame |
| `group_by` | allows you to apply function(s) to groups of rows |
| `mutate` | adds or modifies columns in a data frame |
| `map` | general iteration function |
| `pivot_longer` | generally makes the data frame longer and narrower |
| `pivot_wider` | generally makes a data frame wider and decreases the number of rows |
| `rowwise` | applies functions across columns within one row |
| `separate` | splits up a character column into multiple columns |
| `select` | subsets columns of a data frame |
| `summarize` | calculates summaries of inputs |
3\.13 Exercises
---------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Cleaning and wrangling data” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
3\.14 Additional resources
--------------------------
* As we mentioned earlier, `tidyverse` is actually an *R
meta package*: it installs and loads a collection of R packages that all
follow the tidy data philosophy we discussed above. One of the `tidyverse`
packages is `dplyr`—a data wrangling workhorse. You have already met many
of `dplyr`’s functions
(`select`, `filter`, `mutate`, `arrange`, `summarize`, and `group_by`).
To learn more about these functions and meet a few more useful
functions, we recommend you check out Chapters 5\-9 of [STAT545](https://stat545.com/),
an online book on data wrangling, exploration, and analysis with R.
* The [`dplyr` R package documentation](https://dplyr.tidyverse.org/) ([Wickham, François, et al. 2021](#ref-dplyr)) is
another resource to learn more about the functions in this
chapter, the full set of arguments you can use, and other related functions.
The site also provides a very nice cheat sheet that summarizes many of the
data wrangling functions from this chapter.
* Check out the [`tidyselect` R package page](https://tidyselect.r-lib.org/index.html)
([Henry and Wickham 2021](#ref-tidyselect)) for a comprehensive list of `select` helpers.
These helpers can be used to choose columns in a data frame when paired with the `select` function
(and other functions that use the `tidyselect` syntax, such as `pivot_longer`).
The [documentation for `select` helpers](https://tidyselect.r-lib.org/reference/select_helpers.html)
is a useful reference to find the helper you need for your particular problem.
* *R for Data Science* ([Wickham and Grolemund 2016](#ref-wickham2016r)) has a few chapters related to
data wrangling that go into more depth than this book. For example, the
[tidy data chapter](https://r4ds.had.co.nz/tidy-data.html) covers tidy data,
`pivot_longer`/`pivot_wider` and `separate`, but also covers missing values
and additional wrangling functions (like `unite`). The [data
transformation chapter](https://r4ds.had.co.nz/transform.html) covers
`select`, `filter`, `arrange`, `mutate`, and `summarize`. And the [`map`
functions chapter](https://r4ds.had.co.nz/iteration.html#the-map-functions)
provides more about the `map` functions.
* You will occasionally encounter a case where you need to iterate over items
in a data frame, but none of the above functions are flexible enough to do
what you want. In that case, you may consider using [a for
loop](https://r4ds.had.co.nz/iteration.html#iteration).
Chapter 4 Effective data visualization
======================================
4\.1 Overview
-------------
This chapter will introduce concepts and tools relating to data visualization
beyond what we have seen and practiced so far. We will focus on guiding
principles for effective data visualization and explaining visualizations
independent of any particular tool or programming language. In the process, we
will cover some specifics of creating visualizations (scatter plots, bar
plots, line plots, and histograms) for data using R.
4\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Describe when to use the following kinds of visualizations to answer specific questions using a data set:
+ scatter plots
+ line plots
+ bar plots
+ histogram plots
* Given a data set and a question, select from the above plot types and use R to create a visualization that best answers the question.
* Evaluate the effectiveness of a visualization and suggest improvements to better answer a given question.
* Referring to the visualization, communicate the conclusions in non\-technical terms.
* Identify rules of thumb for creating effective visualizations.
* Use the `ggplot2` package in R to create and refine the above visualizations using:
+ geometric objects: `geom_point`, `geom_line`, `geom_histogram`, `geom_bar`, `geom_vline`, `geom_hline`
+ scales: `xlim`, `ylim`
+ aesthetic mappings: `x`, `y`, `fill`, `color`, `shape`
+ labeling: `xlab`, `ylab`, `labs`
+ font control and legend positioning: `theme`
+ subplots: `facet_grid`
* Define the three key aspects of `ggplot2` objects:
+ aesthetic mappings
+ geometric objects
+ scales
* Describe the difference in raster and vector output formats.
* Use `ggsave` to save visualizations in `.png` and `.svg` format.
4\.3 Choosing the visualization
-------------------------------
#### *Ask a question, and answer it*
The purpose of a visualization is to answer a question
about a data set of interest. So naturally, the
first thing to do **before** creating a visualization is to formulate the
question about the data you are trying to answer. A good visualization will
clearly answer your question without distraction; a *great* visualization will
suggest even what the question was itself without additional explanation.
Imagine your visualization as part of a poster presentation for a project; even
if you aren’t standing at the poster explaining things, an effective
visualization will convey your message to the audience.
Recall the different data analysis questions
from Chapter [1](intro.html#intro).
With the visualizations we will cover in this chapter,
we will be able to answer *only descriptive and exploratory* questions.
Be careful to not answer any *predictive, inferential, causal*
*or mechanistic* questions with the visualizations presented here,
as we have not learned the tools necessary to do that properly just yet.
As with most coding tasks, it is totally fine (and quite common) to make
mistakes and iterate a few times before you find the right visualization for
your data and question. There are many different kinds of plotting
graphics available to use (see Chapter 5 of *Fundamentals of Data Visualization* ([Wilke 2019](#ref-wilkeviz)) for a directory).
The types of plot that we introduce in this book are shown in Figure [4\.1](viz.html#fig:plot-sketches);
which one you should select depends on your data
and the question you want to answer.
In general, the guiding principles of when to use each type of plot
are as follows:
* **scatter plots** visualize the relationship between two quantitative variables
* **line plots** visualize trends with respect to an independent, ordered quantity (e.g., time)
* **bar plots** visualize comparisons of amounts
* **histograms** visualize the distribution of one quantitative variable (i.e., all its possible values and how often they occur)
Figure 4\.1: Examples of scatter, line and bar plots, as well as histograms.
All types of visualization have their (mis)uses, but three kinds are usually
hard to understand or are easily replaced with an oft\-better alternative. In
particular, you should avoid **pie charts**; it is generally better to use
bars, as it is easier to compare bar heights than pie slice sizes. You should
also not use **3\-D visualizations**, as they are typically hard to understand
when converted to a static 2\-D image format. Finally, do not use tables to make
numerical comparisons; humans are much better at quickly processing visual
information than text and math. Bar plots are again typically a better
alternative.
4\.4 Refining the visualization
-------------------------------
#### *Convey the message, minimize noise*
Just being able to make a visualization in R (or any other language,
for that matter) doesn’t mean that it effectively communicates your message to
others. Once you have selected a broad type of visualization to use, you will
have to refine it to suit your particular need. Some rules of thumb for doing
this are listed below. They generally fall into two classes: you want to
*make your visualization convey your message*, and you want to *reduce visual noise*
as much as possible. Humans have limited cognitive ability to process
information; both of these types of refinement aim to reduce the mental load on
your audience when viewing your visualization, making it easier for them to
understand and remember your message quickly.
**Convey the message**
* Make sure the visualization answers the question you have asked as simply and plainly as possible.
* Use legends and labels so that your visualization is understandable without reading the surrounding text.
* Ensure the text, symbols, lines, etc., on your visualization are big enough to be easily read.
* Ensure the data are clearly visible; don’t hide the shape/distribution of the data behind other objects (e.g., a bar).
* Make sure to use color schemes that are understandable by those with
colorblindness (a surprisingly large fraction of the overall
population—from about 1% to 10%, depending on sex and ancestry ([Deeb 2005](#ref-deebblind))).
For example, [ColorBrewer](https://colorbrewer2.org)
and [the `RColorBrewer` R package](https://cran.r-project.org/web/packages/RColorBrewer/index.html) ([Neuwirth 2014](#ref-RColorBrewer)) provide the
ability to pick such color schemes, and you can check your visualizations
after you have created them by uploading to online tools
such as a [color blindness simulator](https://www.color-blindness.com/coblis-color-blindness-simulator/).
* Redundancy can be helpful; sometimes conveying the same message in multiple ways reinforces it for the audience.
**Minimize noise**
* Use colors sparingly. Too many different colors can be distracting, create false patterns, and detract from the message.
* Be wary of overplotting. Overplotting is when marks that represent the data
overlap, and is problematic as it prevents you from seeing how many data
points are represented in areas of the visualization where this occurs. If your
plot has too many dots or lines and starts to look like a mess, you need to do
something different.
* Only make the plot area (where the dots, lines, bars are) as big as needed. Simple plots can be made small.
* Don’t adjust the axes to zoom in on small differences. If the difference is small, show that it’s small!
4\.5 Creating visualizations with `ggplot2`
-------------------------------------------
#### *Build the visualization iteratively*
This section will cover examples of how to choose and refine a visualization
given a data set and a question that you want to answer, and then how to create
the visualization in R using the `ggplot2` R package. Given that
the `ggplot2` package is loaded by the `tidyverse` metapackage, we only
need to load the `tidyverse` package:
```
library(tidyverse)
```
### 4\.5\.1 Scatter plots and line plots: the Mauna Loa CO\\(\_{\\text{2}}\\) data set
The [Mauna Loa CO\\(\_{\\text{2}}\\) data set](https://www.esrl.noaa.gov/gmd/ccgg/trends/data.html),
curated by Dr. Pieter Tans, NOAA/GML
and Dr. Ralph Keeling, Scripps Institution of Oceanography,
records the atmospheric concentration of carbon dioxide
(CO\\(\_{\\text{2}}\\), in parts per million)
at the Mauna Loa research station in Hawaii
from 1959 onward ([Tans and Keeling 2020](#ref-maunadata)).
For this book, we are going to focus on the years 1980\-2020\.
**Question:**
Does the concentration of atmospheric CO\\(\_{\\text{2}}\\) change over time,
and are there any interesting patterns to note?
To get started, we will read and inspect the data:
```
# mauna loa carbon dioxide data
co2_df <- read_csv("data/mauna_loa_data.csv")
co2_df
```
```
## # A tibble: 484 × 2
## date_measured ppm
## <date> <dbl>
## 1 1980-02-01 338.
## 2 1980-03-01 340.
## 3 1980-04-01 341.
## 4 1980-05-01 341.
## 5 1980-06-01 341.
## 6 1980-07-01 339.
## 7 1980-08-01 338.
## 8 1980-09-01 336.
## 9 1980-10-01 336.
## 10 1980-11-01 337.
## # ℹ 474 more rows
```
We see that there are two columns in the `co2_df` data frame; `date_measured` and `ppm`.
The `date_measured` column holds the date the measurement was taken,
and is of type `date`.
The `ppm` column holds the value of CO\\(\_{\\text{2}}\\) in parts per million
that was measured on each date, and is type `double`.
> **Note:** `read_csv` was able to parse the `date_measured` column into the
> `date` vector type because it was entered
> in the international standard date format,
> called ISO 8601, which lists dates as `year-month-day`.
> `date` vectors are `double` vectors with special properties that allow
> them to handle dates correctly.
> For example, `date` type vectors allow functions like `ggplot`
> to treat them as numeric dates and not as character vectors,
> even though they contain non\-numeric characters
> (e.g., in the `date_measured` column in the `co2_df` data frame).
> This means R will not accidentally plot the dates in the wrong order
> (i.e., not alphanumerically as would happen if it was a character vector).
> An in\-depth study of dates and times is beyond the scope of the book,
> but interested readers
> may consult the Dates and Times chapter of *R for Data Science* ([Wickham and Grolemund 2016](#ref-wickham2016r));
> see the additional resources at the end of this chapter.
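> As a tiny illustration of this (a sketch), converting a `date` to numeric
> reveals the underlying count of days since 1970\-01\-01:
>
>
> ```
> as.numeric(as.Date("1980-02-01")) # 3683 days since 1970-01-01
> ```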
Since we are investigating a relationship between two variables
(CO\\(\_{\\text{2}}\\) concentration and date),
a scatter plot is a good place to start.
Scatter plots show the data as individual points with `x` (horizontal axis)
and `y` (vertical axis) coordinates.
Here, we will use the measurement date as the `x` coordinate
and the CO\\(\_{\\text{2}}\\) concentration as the `y` coordinate.
When using the `ggplot2` package,
we create a plot object with the `ggplot` function.
There are a few basic aspects of a plot that we need to specify:
* The name of the data frame object to visualize.
+ Here, we specify the `co2_df` data frame.
* The **aesthetic mapping**, which tells `ggplot` how the columns in the data frame map to properties of the visualization.
+ To create an aesthetic mapping, we use the `aes` function.
+ Here, we set the plot `x` axis to the `date_measured` variable, and the plot `y` axis to the `ppm` variable.
* The `+` operator, which tells `ggplot` that we would like to add another layer to the plot.
* The **geometric object**, which specifies how the mapped data should be displayed.
+ To create a geometric object, we use a `geom_*` function (see the [ggplot reference](https://ggplot2.tidyverse.org/reference/) for a list of geometric objects).
+ Here, we use the `geom_point` function to visualize our data as a scatter plot.
```
co2_scatter <- ggplot(co2_df, aes(x = date_measured, y = ppm)) +
geom_point()
co2_scatter
```
Figure 4\.2: Scatter plot of atmospheric concentration of CO\\(\_{2}\\) over time.
The visualization in Figure [4\.2](viz.html#fig:03-data-co2-scatter)
shows a clear upward trend
in the atmospheric concentration of CO\\(\_{\\text{2}}\\) over time.
This plot answers the first part of our question in the affirmative,
but that appears to be the only conclusion one can make
from the scatter visualization.
One important thing to note about this data is that one of the variables
we are exploring is time.
Time is a special kind of quantitative variable
because it forces additional structure on the data—the
data points have a natural order.
Specifically, each observation in the data set has a predecessor
and a successor, and the order of the observations matters; changing their order
alters their meaning.
In situations like this, we typically use a line plot to visualize
the data. Line plots connect the sequence of `x` and `y` coordinates
of the observations with line segments, thereby emphasizing their order.
We can create a line plot in `ggplot` using the `geom_line` function.
Let’s now try to visualize the `co2_df` as a line plot
with just the default arguments:
```
co2_line <- ggplot(co2_df, aes(x = date_measured, y = ppm)) +
geom_line()
co2_line
```
Figure 4\.3: Line plot of atmospheric concentration of CO\\(\_{2}\\) over time.
Aha! Figure [4\.3](viz.html#fig:03-data-co2-line) shows us there *is* another interesting
phenomenon in the data: in addition to increasing over time, the concentration
seems to oscillate as well. Given the visualization as it is now, it is still
hard to tell how fast the oscillation is, but nevertheless, the line seems to
be a better choice for answering the question than the scatter plot was. The
comparison between these two visualizations also illustrates a common issue with
scatter plots: often, the points are shown too close together or even on top of
one another, muddling information that would otherwise be clear
(*overplotting*).
Now that we have settled on the rough details of the visualization, it is time
to refine things. This plot is fairly straightforward, and there is not much
visual noise to remove. But there are a few things we must do to improve
clarity, such as adding informative axis labels and making the font a more
readable size. To add axis labels, we use the `xlab` and `ylab` functions. To
change the font size, we use the `theme` function with the `text` argument:
```
co2_line <- ggplot(co2_df, aes(x = date_measured, y = ppm)) +
geom_line() +
xlab("Year") +
ylab("Atmospheric CO2 (ppm)") +
theme(text = element_text(size = 12))
co2_line
```
Figure 4\.4: Line plot of atmospheric concentration of CO\\(\_{2}\\) over time with clearer axes and labels.
> **Note:** The `theme` function is quite complex and has many arguments
> that can be specified to control many non\-data aspects of a visualization.
> An in\-depth discussion of the `theme` function is beyond the scope of this book.
> Interested readers may consult the `theme` function documentation;
> see the additional resources section at the end of this chapter.
Finally, let’s see if we can better understand the oscillation by changing the
visualization slightly. Note that it is totally fine to use a small number of
visualizations to answer different aspects of the question you are trying to
answer. We will accomplish this by using *scales*,
another important feature of `ggplot2` that makes it easy to transform
variables and set axis limits. We scale the horizontal axis using the `xlim` function,
and the vertical axis with the `ylim` function.
In particular, here, we will use the `xlim` function to zoom in
on just four years of data (1990\-1993\).
`xlim` takes a vector of length two
to specify the upper and lower bounds to limit the axis.
We can create that using the `c` function.
Note that it is important that the vector given to `xlim` must be of the same
type as the data that is mapped to that axis.
Here, we have mapped a date to the x\-axis,
and so we need to use the `date` function
(from the `tidyverse` [`lubridate` R package](https://lubridate.tidyverse.org/) ([Spinu, Grolemund, and Wickham 2021](#ref-lubridate); [Grolemund and Wickham 2011](#ref-lubridatepaper)))
to convert the character strings we provide to `c` to `date` vectors.
> **Note:** `lubridate` is a package that is installed by the `tidyverse` metapackage,
> but is not loaded by it.
> Hence we need to load it separately in the code below.
```
library(lubridate)
co2_line <- ggplot(co2_df, aes(x = date_measured, y = ppm)) +
geom_line() +
xlab("Year") +
ylab("Atmospheric CO2 (ppm)") +
xlim(c(date("1990-01-01"), date("1993-12-01"))) +
theme(text = element_text(size = 12))
co2_line
```
Figure 4\.5: Line plot of atmospheric concentration of CO\\(\_{2}\\) from 1990 to 1994\.
Interesting! It seems that each year, the atmospheric CO\\(\_{\\text{2}}\\) increases until it reaches its peak somewhere around April, decreases until around late September,
and finally increases again until the end of the year. In Hawaii, there are two seasons: summer from May through October, and winter from November through April.
Therefore, the oscillating pattern in CO\\(\_{\\text{2}}\\) matches up fairly closely with the two seasons.
As you might have noticed from the code used to create the final visualization
of the `co2_df` data frame,
we construct the visualizations in `ggplot` with layers.
New layers are added with the `+` operator,
and we can really add as many as we would like!
A useful analogy to constructing a data visualization is painting a picture.
We start with a blank canvas,
and the first thing we do is prepare the surface
for our painting by adding primer.
In our data visualization this is akin to calling `ggplot`
and specifying the data set we will be using.
Next, we sketch out the background of the painting.
In our data visualization,
this would be when we map data to the axes in the `aes` function.
Then we add our key visual subjects to the painting.
In our data visualization,
this would be the geometric objects (e.g., `geom_point`, `geom_line`, etc.).
And finally, we work on adding details and refinements to the painting.
In our data visualization this would be when we fine tune axis labels,
change the font, adjust the point size, and do other related things.
### 4\.5\.2 Scatter plots: the Old Faithful eruption time data set
The `faithful` data set contains measurements
of the waiting time between eruptions
and the subsequent eruption duration (in minutes) of the Old Faithful
geyser in Yellowstone National Park, Wyoming, United States.
The `faithful` data set is available in base R as a data frame,
so it does not need to be loaded.
We convert it to a tibble to take advantage of the nicer print output
these specialized data frames provide.
**Question:**
Is there a relationship between the waiting time before an eruption
and the duration of the eruption?
```
# old faithful eruption time / wait time data
faithful <- as_tibble(faithful)
faithful
```
```
## # A tibble: 272 × 2
## eruptions waiting
## <dbl> <dbl>
## 1 3.6 79
## 2 1.8 54
## 3 3.33 74
## 4 2.28 62
## 5 4.53 85
## 6 2.88 55
## 7 4.7 88
## 8 3.6 85
## 9 1.95 51
## 10 4.35 85
## # ℹ 262 more rows
```
Here again, we investigate the relationship between two quantitative variables
(waiting time and eruption time).
But if you look at the output of the data frame,
you’ll notice that unlike time in the Mauna Loa CO\\(\_{\\text{2}}\\) data set,
neither of the variables here has a natural order to them.
So a scatter plot is likely to be the most appropriate
visualization. Let’s create a scatter plot using the `ggplot`
function with the `waiting` variable on the horizontal axis, the `eruptions`
variable on the vertical axis, and the `geom_point` geometric object.
The result is shown in Figure [4\.6](viz.html#fig:03-data-faithful-scatter).
```
faithful_scatter <- ggplot(faithful, aes(x = waiting, y = eruptions)) +
geom_point()
faithful_scatter
```
Figure 4\.6: Scatter plot of waiting time and eruption time.
We can see in Figure [4\.6](viz.html#fig:03-data-faithful-scatter) that the data tend to fall
into two groups: one with short waiting and eruption times, and one with long
waiting and eruption times. Note that in this case, there is no overplotting:
the points are generally nicely visually separated, and the pattern they form
is clear. In order to refine the visualization, we need only to add axis
labels and make the font more readable:
```
faithful_scatter <- ggplot(faithful, aes(x = waiting, y = eruptions)) +
geom_point() +
xlab("Waiting Time (mins)") +
ylab("Eruption Duration (mins)") +
theme(text = element_text(size = 12))
faithful_scatter
```
Figure 4\.7: Scatter plot of waiting time and eruption time with clearer axes and labels.
### 4\.5\.3 Axis transformation and colored scatter plots: the Canadian languages data set
Recall the `can_lang` data set ([Timbers 2020](#ref-timbers2020canlang)) from Chapters [1](intro.html#intro), [2](reading.html#reading), and [3](wrangling.html#wrangling),
which contains counts of languages from the 2016
Canadian census.
**Question:** Is there a relationship between
the percentage of people who speak a language as their mother tongue and
the percentage for whom that is the primary language spoken at home?
And is there a pattern in the strength of this relationship in the
higher\-level language categories (Official languages, Aboriginal languages, or
non\-official and non\-Aboriginal languages)?
To get started, we will read and inspect the data:
```
can_lang <- read_csv("data/can_lang.csv")
can_lang
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
We will begin with a scatter plot of the `mother_tongue` and `most_at_home` columns from our data frame.
The resulting plot is shown in Figure [4\.8](viz.html#fig:03-mother-tongue-vs-most-at-home).
```
ggplot(can_lang, aes(x = most_at_home, y = mother_tongue)) +
geom_point()
```
Figure 4\.8: Scatter plot of number of Canadians reporting a language as their mother tongue vs the primary language at home.
To make an initial improvement in the interpretability
of Figure [4\.8](viz.html#fig:03-mother-tongue-vs-most-at-home), we should
replace the default axis
names with more informative labels. We can use `\n` to create a line break in
the axis names so that the words after `\n` are printed on a new line. This will
make the axes labels on the plots more readable.
We should also increase the font size to further
improve readability.
```
ggplot(can_lang, aes(x = most_at_home, y = mother_tongue)) +
geom_point() +
xlab("Language spoken most at home \n (number of Canadian residents)") +
ylab("Mother tongue \n (number of Canadian residents)") +
theme(text = element_text(size = 12))
```
Figure 4\.9: Scatter plot of number of Canadians reporting a language as their mother tongue vs the primary language at home with x and y labels.
Okay! The axes and labels in Figure [4\.9](viz.html#fig:03-mother-tongue-vs-most-at-home-labs) are
much more readable and interpretable now. However, the scatter points themselves could use
some work; most of the 214 data points are bunched
up in the lower left\-hand side of the visualization. The data is clumped because
many more people in Canada speak English or French (the two points in
the upper right corner) than other languages.
In particular, the most common mother tongue language
has 19,460,850 speakers,
while the least common has only 10\.
That’s a 6\-decimal\-place difference
in the magnitude of these two numbers!
We can confirm that the two points in the upper right\-hand corner correspond
to Canada’s two official languages by filtering the data:
```
can_lang |>
filter(language == "English" | language == "French")
```
```
## # A tibble: 2 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Official languages English 19460850 22162865 15265335 29748265
## 2 Official languages French 7166700 6943800 3825215 10242945
```
Recall that our question about this data pertains to *all* languages;
so to properly answer our question,
we will need to adjust the scale of the axes so that we can clearly
see all of the scatter points.
In particular, we will improve the plot by adjusting the horizontal
and vertical axes so that they are on a **logarithmic** (or **log**) scale.
Log scaling is useful when your data take both *very large* and *very small* values,
because it helps space out small values and squishes larger values together.
For example, \\(\\log\_{10}(1\) \= 0\\), \\(\\log\_{10}(10\) \= 1\\), \\(\\log\_{10}(100\) \= 2\\), and \\(\\log\_{10}(1000\) \= 3\\);
on the logarithmic scale,
the values 1, 10, 100, and 1000 are all the same distance apart!
So we see that applying this function is moving big values closer together
and moving small values farther apart.
Note that if your data can take the value 0, logarithmic scaling may not
be appropriate (since `log10(0)` is `-Inf` in R). There are other ways to transform
the data in such a case, but these are beyond the scope of the book.
We can accomplish logarithmic scaling in a `ggplot` visualization
using the `scale_x_log10` and `scale_y_log10` functions.
Given that the x and y axes have large numbers, we should also format the axis labels
to put commas in these numbers to increase their readability.
We can do this in R by passing the `label_comma` function (from the `scales` package)
to the `labels` argument of the `scale_x_log10` and `scale_y_log10` functions.
```
library(scales)
ggplot(can_lang, aes(x = most_at_home, y = mother_tongue)) +
geom_point() +
xlab("Language spoken most at home \n (number of Canadian residents)") +
ylab("Mother tongue \n (number of Canadian residents)") +
theme(text = element_text(size = 12)) +
scale_x_log10(labels = label_comma()) +
scale_y_log10(labels = label_comma())
```
Figure 4\.10: Scatter plot of number of Canadians reporting a language as their mother tongue vs the primary language at home with log adjusted x and y axes.
Similar to some of the examples in Chapter [3](wrangling.html#wrangling),
we can convert the counts to percentages to give them context
and make them easier to understand.
We can do this by dividing the number of people reporting a given language
as their mother tongue or primary language at home
by the number of people who live in Canada and multiplying by 100%.
For example,
the percentage of people who reported that their mother tongue was English
in the 2016 Canadian census
was 19,460,850 / 35,151,728 \(\times\) 100% \= 55\.36%.
Below we use `mutate` to calculate the percentage of people reporting a given
language as their mother tongue and primary language at home for all the
languages in the `can_lang` data set. Since the new columns are appended to the
end of the data table, we selected the new columns after the transformation so
you can clearly see the mutated output from the table.
```
can_lang <- can_lang |>
mutate(
mother_tongue_percent = (mother_tongue / 35151728) * 100,
most_at_home_percent = (most_at_home / 35151728) * 100
)
can_lang |>
select(mother_tongue_percent, most_at_home_percent)
```
```
## # A tibble: 214 × 2
## mother_tongue_percent most_at_home_percent
## <dbl> <dbl>
## 1 0.00168 0.000669
## 2 0.0292 0.0136
## 3 0.00327 0.00127
## 4 0.0383 0.0170
## 5 0.0765 0.0374
## 6 0.000128 0.0000284
## 7 0.00358 0.00105
## 8 0.00764 0.00859
## 9 0.0639 0.0364
## 10 1.19 0.636
## # ℹ 204 more rows
```
Finally, we will edit the visualization to use the percentages we just computed
(and change our axis labels to reflect this change in
units). Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props) displays
the final result.
```
ggplot(can_lang, aes(x = most_at_home_percent, y = mother_tongue_percent)) +
geom_point() +
xlab("Language spoken most at home \n (percentage of Canadian residents)") +
ylab("Mother tongue \n (percentage of Canadian residents)") +
theme(text = element_text(size = 12)) +
scale_x_log10(labels = label_comma()) +
scale_y_log10(labels = label_comma())
```
Figure 4\.11: Scatter plot of percentage of Canadians reporting a language as their mother tongue vs the primary language at home.
Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props) is the appropriate
visualization to use to answer the first question in this section, i.e.,
whether there is a relationship between the percentage of people who speak
a language as their mother tongue and the percentage for whom that
is the primary language spoken at home.
To fully answer the question, we need to use
Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props)
to assess a few key characteristics of the data:
* **Direction:** if the y variable tends to increase when the x variable increases, then y has a **positive** relationship with x. If
y tends to decrease when x increases, then y has a **negative** relationship with x. If y does not meaningfully increase or decrease
as x increases, then y has **little or no** relationship with x.
* **Strength:** if the y variable *reliably* increases, decreases, or stays flat as x increases,
then the relationship is **strong**. Otherwise, the relationship is **weak**. Intuitively,
the relationship is strong when the scatter points are close together and look more like a “line” or “curve” than a “cloud.”
* **Shape:** if you can draw a straight line roughly through the data points, the relationship is **linear**. Otherwise, it is **nonlinear**.
In Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props), we see that
as the percentage of people who have a language as their mother tongue increases,
so does the percentage of people who speak that language at home.
Therefore, there is a **positive** relationship between these two variables.
Furthermore, because the points in Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props)
are fairly close together, and the points look more like a “line” than a “cloud”,
we can say that this is a **strong** relationship.
And finally, because drawing a straight line through these points in
Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props)
would fit the pattern we observe quite well, we say that the relationship is **linear**.
Onto the second part of our exploratory data analysis question!
Recall that we are interested in knowing whether the strength
of the relationship we uncovered
in Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props) depends
on the higher\-level language category (Official languages, Aboriginal languages,
and non\-official, non\-Aboriginal languages).
One common way to explore this
is to color the data points on the scatter plot we have already created by
group. For example, given that we have the higher\-level language category for
each language recorded in the 2016 Canadian census, we can color the points in
our previous
scatter plot to represent each language’s higher\-level language category.
Here we want to distinguish the values according to the `category` group with
which they belong. We can add an argument to the `aes` function, specifying
that the `category` column should color the points. Adding this argument will
color the points according to their group and add a legend at the side of the
plot.
```
ggplot(can_lang, aes(x = most_at_home_percent,
y = mother_tongue_percent,
color = category)) +
geom_point() +
xlab("Language spoken most at home \n (percentage of Canadian residents)") +
ylab("Mother tongue \n (percentage of Canadian residents)") +
theme(text = element_text(size = 12)) +
scale_x_log10(labels = label_comma()) +
scale_y_log10(labels = label_comma())
```
Figure 4\.12: Scatter plot of percentage of Canadians reporting a language as their mother tongue vs the primary language at home colored by language category.
The legend in Figure [4\.12](viz.html#fig:03-scatter-color-by-category)
takes up valuable plot area.
We can improve this by moving the legend using the `legend.position`
and `legend.direction`
arguments of the `theme` function.
Here we set `legend.position` to `"top"` to put the legend above the plot
and `legend.direction` to `"vertical"` so that the legend items remain
vertically stacked on top of each other.
When the `legend.position` is set to either `"top"` or `"bottom"`
the default direction is to stack the legend items horizontally.
However, that will not work well for this particular visualization
because the legend labels are quite long
and would run off the page if displayed this way.
```
ggplot(can_lang, aes(x = most_at_home_percent,
y = mother_tongue_percent,
color = category)) +
geom_point() +
xlab("Language spoken most at home \n (percentage of Canadian residents)") +
ylab("Mother tongue \n (percentage of Canadian residents)") +
theme(text = element_text(size = 12),
legend.position = "top",
legend.direction = "vertical") +
scale_x_log10(labels = comma) +
scale_y_log10(labels = comma)
```
Figure 4\.13: Scatter plot of percentage of Canadians reporting a language as their mother tongue vs the primary language at home colored by language category with the legend edited.
In Figure [4\.13](viz.html#fig:03-scatter-color-by-category-legend-edit), the points are colored with
the default `ggplot2` color palette. But what if you want to use different
colors? In R, two packages that provide alternative color
palettes are `RColorBrewer` ([Neuwirth 2014](#ref-RColorBrewer))
and `ggthemes` ([Arnold 2019](#ref-ggthemes)); in this book we will cover how to use `RColorBrewer`.
You can visualize the list of color
palettes that `RColorBrewer` has to offer with the `display.brewer.all`
function. You can also restrict the display to only the color\-blind friendly
palettes by adding `colorblindFriendly = TRUE` to the function call.
```
library(RColorBrewer)
display.brewer.all(colorblindFriendly = TRUE)
```
Figure 4\.14: Color palettes available from the `RColorBrewer` R package.
From Figure [4\.14](viz.html#fig:rcolorbrewer),
we can choose the color palette we want to use in our plot.
To change the color palette,
we add the `scale_color_brewer` layer indicating the palette we want to use.
You can use
this [color blindness simulator](https://www.color-blindness.com/coblis-color-blindness-simulator/) to check
if your visualizations
are color\-blind friendly.
Below we pick the `"Set2"` palette, with the result shown
in Figure [4\.15](viz.html#fig:scatter-color-by-category-palette).
We also map the `shape` aesthetic to the `category` variable;
this makes the scatter point shapes different for each category. This kind of
visual redundancy—i.e., conveying the same information with both scatter point color and shape—can
further improve the clarity and accessibility of your visualization.
```
ggplot(can_lang, aes(x = most_at_home_percent,
y = mother_tongue_percent,
color = category,
shape = category)) +
geom_point() +
xlab("Language spoken most at home \n (percentage of Canadian residents)") +
ylab("Mother tongue \n (percentage of Canadian residents)") +
theme(text = element_text(size = 12),
legend.position = "top",
legend.direction = "vertical") +
scale_x_log10(labels = comma) +
scale_y_log10(labels = comma) +
scale_color_brewer(palette = "Set2")
```
Figure 4\.15: Scatter plot of percentage of Canadians reporting a language as their mother tongue vs the primary language at home colored by language category with color\-blind friendly colors.
From the visualization in Figure [4\.15](viz.html#fig:scatter-color-by-category-palette),
we can now clearly see that the vast majority of Canadians reported one of the official languages
as their mother tongue and as the language they speak most often at home.
What do we see when considering the second part of our exploratory question?
Do we see a difference in the relationship
between languages spoken as a mother tongue and as a primary language
at home across the higher\-level language categories?
Based on Figure [4\.15](viz.html#fig:scatter-color-by-category-palette), there does not
appear to be much of a difference.
For each higher\-level language category,
there appears to be a strong, positive, and linear relationship between
the percentage of people who speak a language as their mother tongue
and the percentage who speak it as their primary language at home.
The relationship looks similar regardless of the category.
Does this mean that this relationship is positive for all languages in the
world? And further, can we use this data visualization on its own to predict how many people
have a given language as their mother tongue if we know how many people speak
it as their primary language at home? The answer to both these questions is
“no!” However, with exploratory data analysis, we can create new hypotheses,
ideas, and questions (like the ones at the beginning of this paragraph).
Answering those questions often involves doing more complex analyses, and sometimes
even gathering additional data. We will see more of such complex analyses later on in
this book.
### 4\.5\.4 Bar plots: the island landmass data set
The `islands.csv` data set contains a list of Earth’s landmasses as well as their area (in thousands of square miles) ([McNeil 1977](#ref-islandsdata)).
**Question:** Are the continents (North / South America, Africa, Europe, Asia, Australia, Antarctica) Earth’s seven largest landmasses? If so, what are the next few largest landmasses after those?
To get started, we will read and inspect the data:
```
# islands data
islands_df <- read_csv("data/islands.csv")
islands_df
```
```
## # A tibble: 48 × 3
## landmass size landmass_type
## <chr> <dbl> <chr>
## 1 Africa 11506 Continent
## 2 Antarctica 5500 Continent
## 3 Asia 16988 Continent
## 4 Australia 2968 Continent
## 5 Axel Heiberg 16 Other
## 6 Baffin 184 Other
## 7 Banks 23 Other
## 8 Borneo 280 Other
## 9 Britain 84 Other
## 10 Celebes 73 Other
## # ℹ 38 more rows
```
Here, we have a data frame of Earth’s landmasses,
and are trying to compare their sizes.
The right type of visualization to answer this question is a bar plot.
In a bar plot, the height of each bar represents the value of an *amount*
(a size, count, proportion, percentage, etc).
They are particularly useful for comparing counts or proportions across different
groups of a categorical variable. Note, however, that bar plots should generally not be
used to display mean or median values, as they hide important information about
the variation of the data. Instead it’s better to show the distribution of
all the individual data points, e.g., using a histogram, which we will discuss further in Section [4\.5\.5](viz.html#histogramsviz).
We specify that we would like to use a bar plot
via the `geom_bar` function in `ggplot2`.
However, by default, `geom_bar` sets the heights
of bars to the number of times a value appears in a data frame (its *count*); here, we want to plot exactly the values in the data frame, i.e.,
the landmass sizes. So we have to pass the `stat = "identity"` argument to `geom_bar`. The result is
shown in Figure [4\.16](viz.html#fig:03-data-islands-bar).
```
islands_bar <- ggplot(islands_df, aes(x = landmass, y = size)) +
geom_bar(stat = "identity")
islands_bar
```
Figure 4\.16: Bar plot of Earth’s landmass sizes with squished labels.
Alright, not bad! The plot in Figure [4\.16](viz.html#fig:03-data-islands-bar) is
definitely the right kind of visualization, as we can clearly see and compare
sizes of landmasses. The major issues are that the smaller landmasses’ sizes
are hard to distinguish, and the names of the landmasses are obscuring each
other as they have been squished into too little space. But remember that the
question we asked was only about the largest landmasses; let’s make the plot a
little bit clearer by keeping only the largest 12 landmasses. We do this using
the `slice_max` function: the `order_by` argument is the name of the column whose
values determine which rows are largest, and the `n` argument specifies how many
rows to keep. Then to give the labels enough
space, we’ll use horizontal bars instead of vertical ones. We do this by
swapping the `x` and `y` variables.
> **Note:** Recall that in Chapter [1](intro.html#intro), we used `arrange` followed by `slice` to
> obtain the ten rows with the largest values of a variable. We could have instead used
> the `slice_max` function for this purpose. The `slice_max` and `slice_min` functions
> achieve the same goal as `arrange` followed by `slice`, but are slightly more efficient
> because they are specialized for this purpose. In general, it is good to use more specialized
> functions when they are available!
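For reference, here is a minimal sketch of the `arrange`\-then\-`slice` equivalent of the `slice_max` call used below (ignoring ties; the object name `islands_top12_alt` is ours):
```
# an arrange-then-slice equivalent of slice_max(islands_df, order_by = size, n = 12)
islands_top12_alt <- slice(arrange(islands_df, desc(size)), 1:12)
```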
```
islands_top12 <- slice_max(islands_df, order_by = size, n = 12)
islands_bar <- ggplot(islands_top12, aes(x = size, y = landmass)) +
geom_bar(stat = "identity")
islands_bar
```
Figure 4\.17: Bar plot of size for Earth’s largest 12 landmasses.
The plot in Figure [4\.17](viz.html#fig:03-data-islands-bar-2) is definitely clearer now,
and allows us to answer our question
(“Are the top 7 largest landmasses continents?”) in the affirmative.
However, we could still improve this visualization by
coloring the bars based on whether they correspond to a continent,
and by organizing the bars by landmass size rather than by alphabetical order.
The data for coloring the bars is stored in the `landmass_type` column,
so we add the `fill` argument to the aesthetic mapping
and set it to `landmass_type`. We manually select two colors for the bars
using the `scale_fill_manual` function: `"darkorange"` and `"steelblue"`.
To organize the landmasses by their `size` variable,
we will use the `fct_reorder` function (from the `tidyverse`)
directly in the aesthetic mapping.
The first argument passed to `fct_reorder` is the name of the factor column
whose levels we would like to reorder (here, `landmass`).
The second argument is the column name
that holds the values we would like to use to do the ordering (here, `size`).
The `fct_reorder` function uses ascending order by default,
but this can be changed to descending order
by setting `.desc = TRUE`.
We do this here so that the largest bar will be closest to the axis line,
which is more visually appealing.
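Before using it in the plot below, here is a minimal standalone sketch of how `fct_reorder` behaves; the toy factor `landmass_toy` and the `size_toy` values are made up purely for illustration:
```
# a made-up factor with one size value per level
landmass_toy <- factor(c("A", "B", "C"))
size_toy <- c(10, 300, 25)
levels(fct_reorder(landmass_toy, size_toy))                # "A" "C" "B"  (ascending by size)
levels(fct_reorder(landmass_toy, size_toy, .desc = TRUE))  # "B" "C" "A"  (descending by size)
```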
To finalize this plot we will customize the axis and legend labels,
and add a title to the chart. Plot titles are not always required, especially when
it would be redundant with an already\-existing
caption or surrounding context (e.g., in a slide presentation with annotations).
But if you decide to include one, a good plot title should provide the take home message
that you want readers to focus on, e.g., “Earth’s seven largest landmasses are continents,”
or a more general summary of the information displayed, e.g., “Earth’s twelve largest landmasses.”
To make these final adjustments we will use the `labs` function rather than the `xlab` and `ylab` functions
we have seen earlier in this chapter, as `labs` lets us modify the legend label and title in addition to axis labels.
We provide a label for each aesthetic mapping in the plot—in this case, `x`, `y`, and `fill`—as well as one for the `title` argument.
Finally, we again use the `theme` function
to change the font size.
```
islands_bar <- ggplot(islands_top12,
aes(x = size,
y = fct_reorder(landmass, size, .desc = TRUE),
fill = landmass_type)) +
geom_bar(stat = "identity") +
labs(x = "Size (1000 square mi)",
y = "Landmass",
fill = "Type",
title = "Earth's twelve largest landmasses") +
scale_fill_manual(values = c("steelblue", "darkorange")) +
theme(text = element_text(size = 10))
islands_bar
```
Figure 4\.18: Bar plot of size for Earth’s largest 12 landmasses, colored by landmass type, with clearer axes and labels.
The plot in Figure [4\.18](viz.html#fig:03-data-islands-bar-4) is now a very effective
visualization for answering our original questions. Landmasses are organized by
their size, and continents are colored differently than other landmasses,
making it quite clear that continents are the largest seven landmasses.
### 4\.5\.5 Histograms: the Michelson speed of light data set
The `morley` data set
contains measurements of the speed of light
collected in experiments performed in 1879\.
Five experiments were performed,
and in each experiment, 20 runs were performed—meaning that
20 measurements of the speed of light were collected
in each experiment ([Michelson 1882](#ref-lightdata)).
The `morley` data set is available in base R as a data frame,
so it does not need to be loaded.
Because the speed of light is a very large number
(the true value is 299,792\.458 km/sec), the data is coded
to be the measured speed of light minus 299,000\.
This coding allows us to focus on the variations in the measurements, which are generally
much smaller than 299,000\.
If we used the full large speed measurements, the variations in the measurements
would not be noticeable, making it difficult to study the differences between the experiments.
Note that we convert the `morley` data to a tibble to take advantage of the nicer print output
these specialized data frames provide.
**Question:** Given what we know now about the speed of
light (299,792\.458 kilometres per second), how accurate were each of the experiments?
```
# michelson morley experimental data
morley <- as_tibble(morley)
morley
```
```
## # A tibble: 100 × 3
## Expt Run Speed
## <int> <int> <int>
## 1 1 1 850
## 2 1 2 740
## 3 1 3 900
## 4 1 4 1070
## 5 1 5 930
## 6 1 6 850
## 7 1 7 950
## 8 1 8 980
## 9 1 9 980
## 10 1 10 880
## # ℹ 90 more rows
```
In this experimental data,
Michelson was trying to measure just a single quantitative number
(the speed of light).
The data set contains many measurements of this single quantity.
To tell how accurate the experiments were,
we need to visualize the distribution of the measurements
(i.e., all their possible values and how often each occurs).
We can do this using a *histogram*.
A histogram
helps us visualize how a particular variable is distributed in a data set
by separating the data into bins,
and then using vertical bars to show how many data points fell in each bin.
To create a histogram in `ggplot2` we will use the `geom_histogram` geometric
object, setting the `x` axis to the `Speed` measurement variable. As usual,
let’s use the default arguments just to see how things look.
```
morley_hist <- ggplot(morley, aes(x = Speed)) +
geom_histogram()
morley_hist
```
Figure 4\.19: Histogram of Michelson’s speed of light data.
Figure [4\.19](viz.html#fig:03-data-morley-hist) is a great start.
However,
we cannot tell how accurate the measurements are using this visualization
unless we can see the true value.
In order to visualize the true speed of light,
we will add a vertical line with the `geom_vline` function.
To draw a vertical line with `geom_vline`,
we need to specify where on the x\-axis the line should be drawn.
We can do this by setting the `xintercept` argument.
Here we set it to 792\.458, which is the true value of light speed
minus 299,000; this ensures it is coded the same way as the
measurements in the `morley` data frame.
We would also like to fine tune this vertical line,
styling it so that it is dashed by setting `linetype = "dashed"`.
There is a similar function, `geom_hline`,
that is used for plotting horizontal lines.
Note that
*vertical lines* are used to denote quantities on the *horizontal axis*,
while *horizontal lines* are used to denote quantities on the *vertical axis*.
```
morley_hist <- ggplot(morley, aes(x = Speed)) +
geom_histogram() +
geom_vline(xintercept = 792.458, linetype = "dashed")
morley_hist
```
Figure 4\.20: Histogram of Michelson’s speed of light data with vertical line indicating true speed of light.
In Figure [4\.20](viz.html#fig:03-data-morley-hist-2),
we still cannot tell which experiments (denoted in the `Expt` column)
led to which measurements;
perhaps some experiments were more accurate than others.
To fully answer our question,
we need to separate the measurements from each other visually.
We can try to do this using a *colored* histogram,
where counts from different experiments are stacked on top of each other
in different colors.
We can create a histogram colored by the `Expt` variable
by adding it to the `fill` aesthetic mapping.
We make sure the different colors can be seen
(despite them all sitting on top of each other)
by setting the `alpha` argument in `geom_histogram` to `0.5`
to make the bars slightly translucent.
We also specify `position = "identity"` in `geom_histogram` to ensure
the histograms for each experiment are overlaid on top of one another
rather than stacked
(stacking is the default for bar plots and histograms
when they are colored by another categorical variable).
```
morley_hist <- ggplot(morley, aes(x = Speed, fill = Expt)) +
geom_histogram(alpha = 0.5, position = "identity") +
geom_vline(xintercept = 792.458, linetype = "dashed")
morley_hist
```
Figure 4\.21: Histogram of Michelson’s speed of light data where an attempt is made to color the bars by experiment.
Alright great, Figure [4\.21](viz.html#fig:03-data-morley-hist-3) looks…wait a second! The
histogram is still all the same color! What is going on here? Well, if you
recall from Chapter [3](wrangling.html#wrangling), the *data type* you use for each variable
can influence how R and the `tidyverse` treat it. Here, we indeed have an issue
with the data types in the `morley` data frame. In particular, the `Expt` column
is currently an *integer* (you can see the label `<int>` underneath the `Expt` column in the printed
data frame at the start of this section). But we want to treat it as a
*category*, i.e., there should be one category per type of experiment.
To fix this issue we can convert the `Expt` variable into a *factor* by
passing it to `as_factor` in the `fill` aesthetic mapping.
Recall that a factor is a data type in R that is often used to represent
categories. By writing
`as_factor(Expt)` we are ensuring that R will treat this variable as a factor,
and the color will be mapped discretely.
```
morley_hist <- ggplot(morley, aes(x = Speed, fill = as_factor(Expt))) +
geom_histogram(alpha = 0.5, position = "identity") +
geom_vline(xintercept = 792.458, linetype = "dashed")
morley_hist
```
Figure 4\.22: Histogram of Michelson’s speed of light data colored by experiment as factor.
> **Note:** Factors impact plots in two ways:
> (1\) ensuring a color is mapped as discretely where appropriate (as in this
> example) and (2\) the ordering of levels in a plot. `ggplot` takes into account
> the order of the factor levels as opposed to the order of data in
> your data frame. Learning how to reorder your factor levels will help you with
> reordering the labels of a factor on a plot.
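To make the note concrete, here is a small, made\-up sketch of how factor *levels* (not data order) determine ordering; `fct_relevel` (from the `forcats` package, loaded with the `tidyverse`) is one way to reorder levels explicitly, and the name `expt_toy` is ours:
```
# levels are created in sorted order, not in the order the data appear
expt_toy <- factor(c("2", "1", "3"))
levels(expt_toy)                              # "1" "2" "3"
# explicitly reorder the levels; plots and legends will follow this new order
levels(fct_relevel(expt_toy, "3", "2", "1"))  # "3" "2" "1"
```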
Unfortunately, the attempt to separate out the experiment number visually has
created a bit of a mess. All of the colors in Figure
[4\.22](viz.html#fig:03-data-morley-hist-with-factor) are blending together, and although it is
possible to derive *some* insight from this (e.g., experiments 1 and 3 had some
of the most incorrect measurements), it isn’t the clearest way to convey our
message and answer the question. Let’s try a different strategy of creating
a grid of separate histogram plots.
We use the `facet_grid` function to create a plot
that has multiple subplots arranged in a grid.
The argument to `facet_grid` specifies the variable(s) used to split the plot
into subplots, and how to split them (i.e., into rows or columns).
If the plot is to be split horizontally, into rows,
then the `rows` argument is used.
If the plot is to be split vertically, into columns,
then the `cols` argument is used.
Both the `rows` and `cols` arguments take the column names on which to split the data when creating the subplots.
Note that the column names must be surrounded by the `vars` function.
This function allows the column names to be correctly evaluated
in the context of the data frame.
```
morley_hist <- ggplot(morley, aes(x = Speed, fill = as_factor(Expt))) +
geom_histogram() +
facet_grid(rows = vars(Expt)) +
geom_vline(xintercept = 792.458, linetype = "dashed")
morley_hist
```
Figure 4\.23: Histogram of Michelson’s speed of light data split vertically by experiment.
The visualization in Figure [4\.23](viz.html#fig:03-data-morley-hist-4)
now makes it quite clear how accurate the different experiments were
with respect to one another.
The most variable measurements came from Experiment 1,
where the (coded) measurements ranged from about 650–1050 km/sec.
The least variable measurements came from Experiment 2,
where the measurements ranged from about 750–950 km/sec.
The most different experiments still obtained quite similar results!
There are two finishing touches to make this visualization even clearer. First and foremost, we need to add informative axis labels
using the `labs` function, and increase the font size to make it readable using the `theme` function. Second, and perhaps more subtly, even though it
is easy to compare the experiments on this plot to one another, it is hard to get a sense
of just how accurate all the experiments were overall. For example, how accurate is the value 800 on the plot, relative to the true speed of light?
To answer this question, we’ll use the `mutate` function to transform our data into a relative measure of accuracy rather than absolute measurements:
```
morley_rel <- mutate(morley,
relative_accuracy = 100 *
((299000 + Speed) - 299792.458) / (299792.458))
morley_hist <- ggplot(morley_rel,
aes(x = relative_accuracy,
fill = as_factor(Expt))) +
geom_histogram() +
facet_grid(rows = vars(Expt)) +
geom_vline(xintercept = 0, linetype = "dashed") +
labs(x = "Relative Accuracy (%)",
y = "# Measurements",
fill = "Experiment ID") +
theme(text = element_text(size = 12))
morley_hist
```
Figure 4\.24: Histogram of relative accuracy split vertically by experiment with clearer axes and labels.
Wow, impressive! These measurements of the speed of light from 1879 had errors around *0\.05%* of the true speed. Figure [4\.24](viz.html#fig:03-data-morley-hist-5) shows you that even though experiments 2 and 5 were perhaps the most accurate, all of the experiments did quite an
admirable job given the technology available at the time.
#### Choosing a binwidth for histograms
When you create a histogram in R, the default number of bins used is 30\.
Naturally, this is not always the right number to use.
You can set the number of bins yourself by using
the `bins` argument in the `geom_histogram` geometric object.
You can also set the *width* of the bins using the
`binwidth` argument in the `geom_histogram` geometric object.
But what number of bins, or bin width, is the right one to use?
Unfortunately there is no hard rule for what the right bin number
or width is. It depends entirely on your problem; the *right* number of bins
or bin width is
the one that *helps you answer the question* you asked.
Choosing the correct setting for your problem
is something that commonly takes iteration.
We recommend setting the *bin width* (not the *number of bins*) because
it often more directly corresponds to values in your problem of interest. For example,
if you are looking at a histogram of human heights,
a bin width of 1 inch would likely be reasonable, while the number of bins to use is
not immediately clear.
It’s usually a good idea to try out several bin widths to see which one
most clearly captures your data in the context of the question
you want to answer.
To get a sense for how different bin widths affect visualizations,
let’s experiment with the histogram that we have been working on in this section.
In Figure [4\.25](viz.html#fig:03-data-morley-hist-binwidth),
we compare the default setting with three other histograms where we set the
`binwidth` to 0\.001, 0\.01 and 0\.1\.
In this case, we can see that both the default number of bins
and the binwidth of 0\.01 are effective for helping answer our question.
On the other hand, the bin widths of 0\.001 and 0\.1 are too small and too big, respectively.
Figure 4\.25: Effect of varying bin width on histograms.
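The code behind Figure 4\.25 is not shown, but one of the four histograms it compares (e.g., the `binwidth = 0.01` version) could be reproduced with a sketch along these lines, reusing the `morley_rel` data frame from above; the object name `morley_hist_binwidth` is ours:
```
# same relative-accuracy histogram as before, but with an explicit bin width
morley_hist_binwidth <- ggplot(morley_rel,
                               aes(x = relative_accuracy,
                                   fill = as_factor(Expt))) +
  geom_histogram(binwidth = 0.01) +
  facet_grid(rows = vars(Expt)) +
  geom_vline(xintercept = 0, linetype = "dashed")
morley_hist_binwidth
```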
#### Adding layers to a `ggplot` plot object
One of the powerful features of `ggplot` is that you
can continue to iterate on a single plot object, adding and refining
one layer at a time. If you stored your plot as a named object
using the assignment symbol (`<-`), you can
add to it using the `+` operator.
For example, if we wanted to add a title to the last plot we created (`morley_hist`),
we can use the `+` operator to add a title layer with the `ggtitle` function.
The result is shown in Figure [4\.26](viz.html#fig:03-data-morley-hist-addlayer).
```
morley_hist_title <- morley_hist +
ggtitle("Speed of light experiments \n were accurate to about 0.05%")
morley_hist_title
```
Figure 4\.26: Histogram of relative accuracy split vertically by experiment with a descriptive title highlighting the take home message of the visualization.
> **Note:** Good visualization titles clearly communicate
> the take home message to the audience. Typically,
> that is the answer to the question you posed before making the visualization.
4\.6 Explaining the visualization
---------------------------------
#### *Tell a story*
Typically, your visualization will not be shown entirely on its own, but rather
it will be part of a larger presentation. Further, visualizations can provide
supporting information for any aspect of a presentation, from opening to
conclusion. For example, you could use an exploratory visualization in the
opening of the presentation to motivate your choice of a more detailed data
analysis / model, a visualization of the results of your analysis to show what
your analysis has uncovered, or even one at the end of a presentation to help
suggest directions for future work.
Regardless of where it appears, a good way to discuss your visualization is as
a story:
1. Establish the setting and scope, and describe why you did what you did.
2. Pose the question that your visualization answers. Justify why the question is important to answer.
3. Answer the question using your visualization. Make sure you describe *all* aspects of the visualization (including describing the axes). But you
can emphasize different aspects based on what is important to answer your question:
* **trends (lines):** Does a line describe the trend well? If so, the trend is *linear*, and if not, the trend is *nonlinear*. Is the trend increasing, decreasing, or neither?
Is there a periodic oscillation (wiggle) in the trend? Is the trend noisy (does the line “jump around” a lot) or smooth?
* **distributions (scatters, histograms):** How spread out are the data? Where are they centered, roughly? Are there any obvious “clusters” or “subgroups”, which would be visible as multiple bumps in the histogram?
* **distributions of two variables (scatters):** Is there a clear / strong relationship between the variables (points fall in a distinct pattern), a weak one (points fall in a pattern but there is some noise), or no discernible
relationship (the data are too noisy to make any conclusion)?
* **amounts (bars):** How large are the bars relative to one another? Are there patterns in different groups of bars?
4. Summarize your findings, and use them to motivate whatever you will discuss next.
Below are two examples of how one might take these four steps in describing the example visualizations that appeared earlier in this chapter.
Each of the steps is denoted by its numeral in parentheses, e.g. (3\).
**Mauna Loa Atmospheric CO\\(\_{\\text{2}}\\) Measurements:** (1\) Many
current forms of energy generation and conversion—from automotive
engines to natural gas power plants—rely on burning fossil fuels and produce
greenhouse gases, typically primarily carbon dioxide (CO\\(\_{\\text{2}}\\)), as a
byproduct. Too much of these gases in the Earth’s atmosphere will cause it to
trap more heat from the sun, leading to global warming. (2\) In order to assess
how quickly the atmospheric concentration of CO\\(\_{\\text{2}}\\) is increasing over
time, we (3\) used a data set from the Mauna Loa observatory in Hawaii,
consisting of CO\\(\_{\\text{2}}\\) measurements from 1980 to 2020\. We plotted the
measured concentration of CO\\(\_{\\text{2}}\\) (on the vertical axis) over time (on
the horizontal axis). From this plot, you can see a clear, increasing, and
generally linear trend over time. There is also a periodic oscillation that
occurs once per year and aligns with Hawaii’s seasons, with an amplitude that
is small relative to the growth in the overall trend. This shows that
atmospheric CO\\(\_{\\text{2}}\\) is clearly increasing over time, and (4\) it is
perhaps worth investigating more into the causes.
**Michelson Light Speed Experiments:** (1\) Our
modern understanding of the physics of light has advanced significantly from
the late 1800s when Michelson and Morley’s experiments first demonstrated that
it had a finite speed. We now know, based on modern experiments, that it moves at
roughly 299,792\.458 kilometers per second. (2\) But how accurately were we first
able to measure this fundamental physical constant, and did certain experiments
produce more accurate results than others? (3\) To better understand this, we
plotted data from 5 experiments by Michelson in 1879, each with 20 trials, as
histograms stacked on top of one another. The horizontal axis shows the
accuracy of the measurements relative to the true speed of light as we know it
today, expressed as a percentage. From this visualization, you can see that
most results had relative errors of at most 0\.05%. You can also see that
experiments 1 and 3 had measurements that were the farthest from the true
value, and experiment 5 tended to provide the most consistently accurate
result. (4\) It would be worth further investigating the differences between
these experiments to see why they produced different results.
4\.7 Saving the visualization
-----------------------------
#### *Choose the right output format for your needs*
Just as there are many ways to store data sets, there are many ways to store
visualizations and images. Which one you choose can depend on several factors,
such as file size/type limitations (e.g., if you are submitting your
visualization as part of a conference paper or to a poster printing shop) and
where it will be displayed (e.g., online, in a paper, on a poster, on a
billboard, in talk slides). Generally speaking, images come in two flavors:
*raster* formats
and *vector* formats.
**Raster** images are represented as a 2\-D grid of square pixels, each
with its own color. Raster images are often *compressed* before storing so they
take up less space. A compressed format is *lossy* if the image cannot be
perfectly re\-created when loading and displaying, with the hope that the change
is not noticeable. *Lossless* formats, on the other hand, allow a perfect
display of the original image.
* *Common file types:*
+ [JPEG](https://en.wikipedia.org/wiki/JPEG) (`.jpg`, `.jpeg`): lossy, usually used for photographs
+ [PNG](https://en.wikipedia.org/wiki/Portable_Network_Graphics) (`.png`): lossless, usually used for plots / line drawings
+ [BMP](https://en.wikipedia.org/wiki/BMP_file_format) (`.bmp`): lossless, raw image data, no compression (rarely used)
+ [TIFF](https://en.wikipedia.org/wiki/TIFF) (`.tif`, `.tiff`): typically lossless, no compression, used mostly in graphic arts, publishing
* *Open\-source software:* [GIMP](https://www.gimp.org/)
**Vector** images are represented as a collection of mathematical
objects (lines, surfaces, shapes, curves). When the computer displays the image, it
redraws all of the elements using their mathematical formulas.
* *Common file types:*
+ [SVG](https://en.wikipedia.org/wiki/Scalable_Vector_Graphics) (`.svg`): general\-purpose use
+ [EPS](https://en.wikipedia.org/wiki/Encapsulated_PostScript) (`.eps`): general\-purpose use (rarely used)
* *Open\-source software:* [Inkscape](https://inkscape.org/)
Raster and vector images have opposing advantages and disadvantages. A raster
image of a fixed width / height takes the same amount of space and time to load
regardless of what the image shows (the one caveat is that the compression algorithms may
shrink the image more or run faster for certain images). A vector image takes
space and time to load corresponding to how complex the image is, since the
computer has to draw all the elements each time it is displayed. For example,
if you have a scatter plot with 1 million points stored as an SVG file, it may
take your computer some time to open the image. On the other hand, you can zoom
into / scale up vector graphics as much as you like without the image looking
bad, while raster images eventually start to look “pixelated.”
> **Note:** The portable document format [PDF](https://en.wikipedia.org/wiki/PDF) (`.pdf`) is commonly used to
> store *both* raster and vector formats. If you try to open a PDF and it’s taking a long time
> to load, it may be because there is a complicated vector graphics image that your computer is rendering.
Let’s learn how to save plot images to these different file formats using a
scatter plot of
the [Old Faithful data set](https://www.stat.cmu.edu/~larry/all-of-statistics/=data/faithful.dat) ([Hardle 1991](#ref-faithfuldata)),
shown in Figure [4\.27](viz.html#fig:03-plot-line).
```
library(svglite) # we need this to save SVG files
faithful_plot <- ggplot(data = faithful, aes(x = waiting, y = eruptions)) +
geom_point() +
labs(x = "Waiting time to next eruption \n (minutes)",
y = "Eruption time \n (minutes)") +
theme(text = element_text(size = 12))
faithful_plot
```
Figure 4\.27: Scatter plot of waiting time and eruption time.
Now that we have a named `ggplot` plot object, we can use the `ggsave` function
to save a file containing this image.
`ggsave` takes the name of the file to create for the image
as its first argument;
this can include the path to the directory where you would like to save the file
(e.g., `img/viz/filename.png` to save a file named `filename` to the `img/viz/` directory).
The plot object to save is passed as the second argument.
The kind of image to save is specified by the file extension.
For example,
to create a PNG image file, we specify that the file extension is `.png`.
Below we demonstrate how to save PNG, JPG, BMP, TIFF and SVG file types
for the `faithful_plot`:
```
ggsave("img/viz/faithful_plot.png", faithful_plot)
ggsave("img/viz/faithful_plot.jpg", faithful_plot)
ggsave("img/viz/faithful_plot.bmp", faithful_plot)
ggsave("img/viz/faithful_plot.tiff", faithful_plot)
ggsave("img/viz/faithful_plot.svg", faithful_plot)
```
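By default, `ggsave` infers the image dimensions from the current graphics device. If you need a reproducible size, or a particular resolution for raster formats, you can also supply the `width`, `height`, `units`, and `dpi` arguments. A quick sketch (the file name here is hypothetical; the file sizes in Table 4\.1 below refer to the five files saved above, not this one):
```
# save a 5 x 4 inch, 300 dpi version of the plot
ggsave("img/viz/faithful_plot_small.png", faithful_plot,
       width = 5, height = 4, units = "in", dpi = 300)
```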
Table 4\.1: File sizes of the scatter plot of the Old Faithful data set when saved as different file formats.
| Image type | File type | Image size |
| --- | --- | --- |
| Raster | PNG | 0\.15 MB |
| Raster | JPG | 0\.42 MB |
| Raster | BMP | 3\.15 MB |
| Raster | TIFF | 9\.44 MB |
| Vector | SVG | 0\.03 MB |
Take a look at the file sizes in Table [4\.1](viz.html#tab:filesizes).
Wow, that’s quite a difference! Notice that for such a simple plot with few
graphical elements (points), the vector graphics format (SVG) is over 100 times
smaller than the uncompressed raster images (BMP, TIFF). Also, note that the
JPG file is nearly three times as large as the PNG file, since the JPG compression
algorithm is designed for natural images (not plots).
In Figure [4\.28](viz.html#fig:03-raster-image), we also show what
the images look like when we zoom in to a rectangle with only 2 data points.
You can see why vector graphics formats are so useful: because they’re just
based on mathematical formulas, vector graphics can be scaled up to arbitrary
sizes. This makes them great for presentation media of all sizes, from papers
to posters to billboards.
Figure 4\.28: Zoomed in `faithful`, raster (PNG, left) and vector (SVG, right) formats.
4\.8 Exercises
--------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Effective data visualization” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
4\.9 Additional resources
-------------------------
* The [`ggplot2` R package page](https://ggplot2.tidyverse.org) ([Wickham, Chang, et al. 2021](#ref-ggplot)) is
where you should look if you want to learn more about the functions in this
chapter, the full set of arguments you can use, and other related functions.
The site also provides a very nice cheat sheet that summarizes many of the data
wrangling functions from this chapter.
* The *Fundamentals of Data Visualization* ([Wilke 2019](#ref-wilkeviz)) has
a wealth of information on designing effective visualizations. It is not
specific to any particular programming language or library. If you want to
improve your visualization skills, this is the next place to look.
* *R for Data Science* ([Wickham and Grolemund 2016](#ref-wickham2016r)) has a [chapter on creating visualizations using
`ggplot2`](https://r4ds.had.co.nz/data-visualisation.html). This reference is
specific to R and `ggplot2`, but provides a much more detailed introduction to
the full set of tools that `ggplot2` provides. This chapter is where you should
look if you want to learn how to make more intricate visualizations in
`ggplot2` than what is included in this chapter.
* The [`theme` function documentation](https://ggplot2.tidyverse.org/reference/theme.html)
is an excellent reference to see how you can fine tune the non\-data aspects
of your visualization.
* *R for Data Science* ([Wickham and Grolemund 2016](#ref-wickham2016r)) has a chapter on [dates and
times](https://r4ds.had.co.nz/dates-and-times.html). This chapter is where
you should look if you want to learn about `date` vectors, including how to
create them, and how to use them to effectively handle durations, periods and
intervals using the `lubridate` package.
4\.1 Overview
-------------
This chapter will introduce concepts and tools relating to data visualization
beyond what we have seen and practiced so far. We will focus on guiding
principles for effective data visualization and explaining visualizations
independent of any particular tool or programming language. In the process, we
will cover some specifics of creating visualizations (scatter plots, bar
plots, line plots, and histograms) for data using R.
4\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Describe when to use the following kinds of visualizations to answer specific questions using a data set:
+ scatter plots
+ line plots
+ bar plots
+ histogram plots
* Given a data set and a question, select from the above plot types and use R to create a visualization that best answers the question.
* Evaluate the effectiveness of a visualization and suggest improvements to better answer a given question.
* Referring to the visualization, communicate the conclusions in non\-technical terms.
* Identify rules of thumb for creating effective visualizations.
* Use the `ggplot2` package in R to create and refine the above visualizations using:
+ geometric objects: `geom_point`, `geom_line`, `geom_histogram`, `geom_bar`, `geom_vline`, `geom_hline`
+ scales: `xlim`, `ylim`
+ aesthetic mappings: `x`, `y`, `fill`, `color`, `shape`
+ labeling: `xlab`, `ylab`, `labs`
+ font control and legend positioning: `theme`
+ subplots: `facet_grid`
* Define the three key aspects of `ggplot2` objects:
+ aesthetic mappings
+ geometric objects
+ scales
* Describe the difference in raster and vector output formats.
* Use `ggsave` to save visualizations in `.png` and `.svg` format.
4\.3 Choosing the visualization
-------------------------------
#### *Ask a question, and answer it*
The purpose of a visualization is to answer a question
about a data set of interest. So naturally, the
first thing to do **before** creating a visualization is to formulate the
question about the data you are trying to answer. A good visualization will
clearly answer your question without distraction; a *great* visualization will
suggest even what the question was itself without additional explanation.
Imagine your visualization as part of a poster presentation for a project; even
if you aren’t standing at the poster explaining things, an effective
visualization will convey your message to the audience.
Recall the different data analysis questions
from Chapter [1](intro.html#intro).
With the visualizations we will cover in this chapter,
we will be able to answer *only descriptive and exploratory* questions.
Be careful to not answer any *predictive, inferential, causal*
*or mechanistic* questions with the visualizations presented here,
as we have not learned the tools necessary to do that properly just yet.
As with most coding tasks, it is totally fine (and quite common) to make
mistakes and iterate a few times before you find the right visualization for
your data and question. There are many different kinds of plotting
graphics available to use (see Chapter 5 of *Fundamentals of Data Visualization* ([Wilke 2019](#ref-wilkeviz)) for a directory).
The types of plot that we introduce in this book are shown in Figure [4\.1](viz.html#fig:plot-sketches);
which one you should select depends on your data
and the question you want to answer.
In general, the guiding principles of when to use each type of plot
are as follows:
* **scatter plots** visualize the relationship between two quantitative variables
* **line plots** visualize trends with respect to an independent, ordered quantity (e.g., time)
* **bar plots** visualize comparisons of amounts
* **histograms** visualize the distribution of one quantitative variable (i.e., all its possible values and how often they occur)
Figure 4\.1: Examples of scatter, line and bar plots, as well as histograms.
All types of visualization have their (mis)uses, but three kinds are usually
hard to understand or are easily replaced with an oft\-better alternative. In
particular, you should avoid **pie charts**; it is generally better to use
bars, as it is easier to compare bar heights than pie slice sizes. You should
also not use **3\-D visualizations**, as they are typically hard to understand
when converted to a static 2\-D image format. Finally, do not use tables to make
numerical comparisons; humans are much better at quickly processing visual
information than text and math. Bar plots are again typically a better
alternative.
4\.4 Refining the visualization
-------------------------------
#### *Convey the message, minimize noise*
Just being able to make a visualization in R (or any other language,
for that matter) doesn’t mean that it effectively communicates your message to
others. Once you have selected a broad type of visualization to use, you will
have to refine it to suit your particular need. Some rules of thumb for doing
this are listed below. They generally fall into two classes: you want to
*make your visualization convey your message*, and you want to *reduce visual noise*
as much as possible. Humans have limited cognitive ability to process
information; both of these types of refinement aim to reduce the mental load on
your audience when viewing your visualization, making it easier for them to
understand and remember your message quickly.
**Convey the message**
* Make sure the visualization answers the question you have asked as simply and plainly as possible.
* Use legends and labels so that your visualization is understandable without reading the surrounding text.
* Ensure the text, symbols, lines, etc., on your visualization are big enough to be easily read.
* Ensure the data are clearly visible; don’t hide the shape/distribution of the data behind other objects (e.g., a bar).
* Make sure to use color schemes that are understandable by those with
colorblindness (a surprisingly large fraction of the overall
population—from about 1% to 10%, depending on sex and ancestry ([Deeb 2005](#ref-deebblind))).
For example, [ColorBrewer](https://colorbrewer2.org)
and [the `RColorBrewer` R package](https://cran.r-project.org/web/packages/RColorBrewer/index.html) ([Neuwirth 2014](#ref-RColorBrewer)) provide the
ability to pick such color schemes, and you can check your visualizations
after you have created them by uploading to online tools
such as a [color blindness simulator](https://www.color-blindness.com/coblis-color-blindness-simulator/).
* Redundancy can be helpful; sometimes conveying the same message in multiple ways reinforces it for the audience.
**Minimize noise**
* Use colors sparingly. Too many different colors can be distracting, create false patterns, and detract from the message.
* Be wary of overplotting. Overplotting is when marks that represent the data
overlap, and is problematic as it prevents you from seeing how many data
points are represented in areas of the visualization where this occurs. If your
plot has too many dots or lines and starts to look like a mess, you need to do
something different.
* Only make the plot area (where the dots, lines, bars are) as big as needed. Simple plots can be made small.
* Don’t adjust the axes to zoom in on small differences. If the difference is small, show that it’s small!
4\.5 Creating visualizations with `ggplot2`
-------------------------------------------
#### *Build the visualization iteratively*
This section will cover examples of how to choose and refine a visualization
given a data set and a question that you want to answer, and then how to create
the visualization in R using the `ggplot2` R package. Given that
the `ggplot2` package is loaded by the `tidyverse` metapackage, we
only need to load the `tidyverse`:
```
library(tidyverse)
```
### 4\.5\.1 Scatter plots and line plots: the Mauna Loa CO\\(\_{\\text{2}}\\) data set
The [Mauna Loa CO\\(\_{\\text{2}}\\) data set](https://www.esrl.noaa.gov/gmd/ccgg/trends/data.html),
curated by Dr. Pieter Tans, NOAA/GML
and Dr. Ralph Keeling, Scripps Institution of Oceanography,
records the atmospheric concentration of carbon dioxide
(CO\\(\_{\\text{2}}\\), in parts per million)
at the Mauna Loa research station in Hawaii
from 1959 onward ([Tans and Keeling 2020](#ref-maunadata)).
For this book, we are going to focus on the years 1980\-2020\.
**Question:**
Does the concentration of atmospheric CO\\(\_{\\text{2}}\\) change over time,
and are there any interesting patterns to note?
To get started, we will read and inspect the data:
```
# mauna loa carbon dioxide data
co2_df <- read_csv("data/mauna_loa_data.csv")
co2_df
```
```
## # A tibble: 484 × 2
## date_measured ppm
## <date> <dbl>
## 1 1980-02-01 338.
## 2 1980-03-01 340.
## 3 1980-04-01 341.
## 4 1980-05-01 341.
## 5 1980-06-01 341.
## 6 1980-07-01 339.
## 7 1980-08-01 338.
## 8 1980-09-01 336.
## 9 1980-10-01 336.
## 10 1980-11-01 337.
## # ℹ 474 more rows
```
We see that there are two columns in the `co2_df` data frame; `date_measured` and `ppm`.
The `date_measured` column holds the date the measurement was taken,
and is of type `date`.
The `ppm` column holds the value of CO\\(\_{\\text{2}}\\) in parts per million
that was measured on each date, and is type `double`.
> **Note:** `read_csv` was able to parse the `date_measured` column into the
> `date` vector type because it was entered
> in the international standard date format,
> called ISO 8601, which lists dates as `year-month-day`.
> `date` vectors are `double` vectors with special properties that allow
> them to handle dates correctly.
> For example, `date` type vectors allow functions like `ggplot`
> to treat them as numeric dates and not as character vectors,
> even though they contain non\-numeric characters
> (e.g., in the `date_measured` column in the `co2_df` data frame).
> This means R will not accidentally plot the dates in the wrong order
> (i.e., not alphanumerically as would happen if it was a character vector).
> An in\-depth study of dates and times is beyond the scope of the book,
> but interested readers
> may consult the Dates and Times chapter of *R for Data Science* ([Wickham and Grolemund 2016](#ref-wickham2016r));
> see the additional resources at the end of this chapter.
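To make the note above concrete, here is a tiny base R sketch of how an ISO 8601 string becomes a date value (the variable name `d` is ours):
```
# parse an ISO 8601 string into a Date; internally it is a number of days
d <- as.Date("1980-02-01")
d              # 1980-02-01
class(d)       # "Date"
as.numeric(d)  # 3683: days elapsed since 1970-01-01
```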
Since we are investigating a relationship between two variables
(CO\\(\_{\\text{2}}\\) concentration and date),
a scatter plot is a good place to start.
Scatter plots show the data as individual points with `x` (horizontal axis)
and `y` (vertical axis) coordinates.
Here, we will use the measurement date as the `x` coordinate
and the CO\\(\_{\\text{2}}\\) concentration as the `y` coordinate.
When using the `ggplot2` package,
we create a plot object with the `ggplot` function.
There are a few basic aspects of a plot that we need to specify:
* The name of the data frame object to visualize.
+ Here, we specify the `co2_df` data frame.
* The **aesthetic mapping**, which tells `ggplot` how the columns in the data frame map to properties of the visualization.
+ To create an aesthetic mapping, we use the `aes` function.
+ Here, we set the plot `x` axis to the `date_measured` variable, and the plot `y` axis to the `ppm` variable.
* The `+` operator, which tells `ggplot` that we would like to add another layer to the plot.
* The **geometric object**, which specifies how the mapped data should be displayed.
+ To create a geometric object, we use a `geom_*` function (see the [ggplot reference](https://ggplot2.tidyverse.org/reference/) for a list of geometric objects).
+ Here, we use the `geom_point` function to visualize our data as a scatter plot.
```
co2_scatter <- ggplot(co2_df, aes(x = date_measured, y = ppm)) +
geom_point()
co2_scatter
```
Figure 4\.2: Scatter plot of atmospheric concentration of CO\\(\_{2}\\) over time.
The visualization in Figure [4\.2](viz.html#fig:03-data-co2-scatter)
shows a clear upward trend
in the atmospheric concentration of CO\\(\_{\\text{2}}\\) over time.
This plot answers the first part of our question in the affirmative,
but that appears to be the only conclusion one can make
from the scatter visualization.
One important thing to note about this data is that one of the variables
we are exploring is time.
Time is a special kind of quantitative variable
because it forces additional structure on the data—the
data points have a natural order.
Specifically, each observation in the data set has a predecessor
and a successor, and the order of the observations matters; changing their order
alters their meaning.
In situations like this, we typically use a line plot to visualize
the data. Line plots connect the sequence of `x` and `y` coordinates
of the observations with line segments, thereby emphasizing their order.
We can create a line plot in `ggplot` using the `geom_line` function.
Let’s now try to visualize the `co2_df` as a line plot
with just the default arguments:
```
co2_line <- ggplot(co2_df, aes(x = date_measured, y = ppm)) +
geom_line()
co2_line
```
Figure 4\.3: Line plot of atmospheric concentration of CO\\(\_{2}\\) over time.
Aha! Figure [4\.3](viz.html#fig:03-data-co2-line) shows us there *is* another interesting
phenomenon in the data: in addition to increasing over time, the concentration
seems to oscillate as well. Given the visualization as it is now, it is still
hard to tell how fast the oscillation is, but nevertheless, the line seems to
be a better choice for answering the question than the scatter plot was. The
comparison between these two visualizations also illustrates a common issue with
scatter plots: often, the points are shown too close together or even on top of
one another, muddling information that would otherwise be clear
(*overplotting*).
Now that we have settled on the rough details of the visualization, it is time
to refine things. This plot is fairly straightforward, and there is not much
visual noise to remove. But there are a few things we must do to improve
clarity, such as adding informative axis labels and making the font a more
readable size. To add axis labels, we use the `xlab` and `ylab` functions. To
change the font size, we use the `theme` function with the `text` argument:
```
co2_line <- ggplot(co2_df, aes(x = date_measured, y = ppm)) +
geom_line() +
xlab("Year") +
ylab("Atmospheric CO2 (ppm)") +
theme(text = element_text(size = 12))
co2_line
```
Figure 4\.4: Line plot of atmospheric concentration of CO\\(\_{2}\\) over time with clearer axes and labels.
> **Note:** The `theme` function is quite complex and has many arguments
> that can be specified to control many non\-data aspects of a visualization.
> An in\-depth discussion of the `theme` function is beyond the scope of this book.
> Interested readers may consult the `theme` function documentation;
> see the additional resources section at the end of this chapter.
Finally, let’s see if we can better understand the oscillation by changing the
visualization slightly. Note that it is totally fine to use a small number of
visualizations to answer different aspects of the question you are trying to
answer. We will accomplish this by using *scales*,
another important feature of `ggplot2` that makes it easy to transform
variables and set axis limits. We scale the horizontal axis using the `xlim` function,
and the vertical axis with the `ylim` function.
In particular, here, we will use the `xlim` function to zoom in
on just five years of data (say, 1990\-1994\).
`xlim` takes a vector of length two
to specify the upper and lower bounds to limit the axis.
We can create that using the `c` function.
Note that it is important that the vector given to `xlim` must be of the same
type as the data that is mapped to that axis.
Here, we have mapped a date to the x\-axis,
and so we need to use the `date` function
(from the `tidyverse` [`lubridate` R package](https://lubridate.tidyverse.org/) ([Spinu, Grolemund, and Wickham 2021](#ref-lubridate); [Grolemund and Wickham 2011](#ref-lubridatepaper)))
to convert the character strings we provide to `c` to `date` vectors.
> **Note:** `lubridate` is a package that is installed by the `tidyverse` metapackage,
> but is not loaded by it.
> Hence we need to load it separately in the code below.
```
library(lubridate)
co2_line <- ggplot(co2_df, aes(x = date_measured, y = ppm)) +
geom_line() +
xlab("Year") +
ylab("Atmospheric CO2 (ppm)") +
xlim(c(date("1990-01-01"), date("1993-12-01"))) +
theme(text = element_text(size = 12))
co2_line
```
Figure 4\.5: Line plot of atmospheric concentration of CO\\(\_{2}\\) from 1990 to 1994\.
Interesting! It seems that each year, the atmospheric CO\\(\_{\\text{2}}\\) increases until it reaches its peak somewhere around April, decreases until around late September,
and finally increases again until the end of the year. In Hawaii, there are two seasons: summer from May through October, and winter from November through April.
Therefore, the oscillating pattern in CO\\(\_{\\text{2}}\\) matches up fairly closely with the two seasons.
As you might have noticed from the code used to create the final visualization
of the `co2_df` data frame,
we construct the visualizations in `ggplot` with layers.
New layers are added with the `+` operator,
and we can really add as many as we would like!
A useful analogy to constructing a data visualization is painting a picture.
We start with a blank canvas,
and the first thing we do is prepare the surface
for our painting by adding primer.
In our data visualization this is akin to calling `ggplot`
and specifying the data set we will be using.
Next, we sketch out the background of the painting.
In our data visualization,
this would be when we map data to the axes in the `aes` function.
Then we add our key visual subjects to the painting.
In our data visualization,
this would be the geometric objects (e.g., `geom_point`, `geom_line`, etc.).
And finally, we work on adding details and refinements to the painting.
In our data visualization this would be when we fine tune axis labels,
change the font, adjust the point size, and do other related things.
### 4\.5\.2 Scatter plots: the Old Faithful eruption time data set
The `faithful` data set contains measurements
of the waiting time between eruptions
and the subsequent eruption duration (in minutes) of the Old Faithful
geyser in Yellowstone National Park, Wyoming, United States.
The `faithful` data set is available in base R as a data frame,
so it does not need to be loaded.
We convert it to a tibble to take advantage of the nicer print output
these specialized data frames provide.
**Question:**
Is there a relationship between the waiting time before an eruption
and the duration of the eruption?
```
# old faithful eruption time / wait time data
faithful <- as_tibble(faithful)
faithful
```
```
## # A tibble: 272 × 2
## eruptions waiting
## <dbl> <dbl>
## 1 3.6 79
## 2 1.8 54
## 3 3.33 74
## 4 2.28 62
## 5 4.53 85
## 6 2.88 55
## 7 4.7 88
## 8 3.6 85
## 9 1.95 51
## 10 4.35 85
## # ℹ 262 more rows
```
Here again, we investigate the relationship between two quantitative variables
(waiting time and eruption time).
But if you look at the output of the data frame,
you’ll notice that unlike time in the Mauna Loa CO\\(\_{\\text{2}}\\) data set,
neither of the variables here have a natural order to them.
So a scatter plot is likely to be the most appropriate
visualization. Let’s create a scatter plot using the `ggplot`
function with the `waiting` variable on the horizontal axis, the `eruptions`
variable on the vertical axis, and the `geom_point` geometric object.
The result is shown in Figure [4\.6](viz.html#fig:03-data-faithful-scatter).
```
faithful_scatter <- ggplot(faithful, aes(x = waiting, y = eruptions)) +
geom_point()
faithful_scatter
```
Figure 4\.6: Scatter plot of waiting time and eruption time.
We can see in Figure [4\.6](viz.html#fig:03-data-faithful-scatter) that the data tend to fall
into two groups: one with short waiting and eruption times, and one with long
waiting and eruption times. Note that in this case, there is no overplotting:
the points are generally nicely visually separated, and the pattern they form
is clear. In order to refine the visualization, we need only to add axis
labels and make the font more readable:
```
faithful_scatter <- ggplot(faithful, aes(x = waiting, y = eruptions)) +
geom_point() +
xlab("Waiting Time (mins)") +
ylab("Eruption Duration (mins)") +
theme(text = element_text(size = 12))
faithful_scatter
```
Figure 4\.7: Scatter plot of waiting time and eruption time with clearer axes and labels.
### 4\.5\.3 Axis transformation and colored scatter plots: the Canadian languages data set
Recall the `can_lang` data set ([Timbers 2020](#ref-timbers2020canlang)) from Chapters [1](intro.html#intro), [2](reading.html#reading), and [3](wrangling.html#wrangling),
which contains counts of languages from the 2016
Canadian census.
**Question:** Is there a relationship between
the percentage of people who speak a language as their mother tongue and
the percentage for whom that is the primary language spoken at home?
And is there a pattern in the strength of this relationship in the
higher\-level language categories (Official languages, Aboriginal languages, or
non\-official and non\-Aboriginal languages)?
To get started, we will read and inspect the data:
```
can_lang <- read_csv("data/can_lang.csv")
can_lang
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
We will begin with a scatter plot of the `mother_tongue` and `most_at_home` columns from our data frame.
The resulting plot is shown in Figure [4\.8](viz.html#fig:03-mother-tongue-vs-most-at-home).
```
ggplot(can_lang, aes(x = most_at_home, y = mother_tongue)) +
geom_point()
```
Figure 4\.8: Scatter plot of number of Canadians reporting a language as their mother tongue vs the primary language at home.
To make an initial improvement in the interpretability
of Figure [4\.8](viz.html#fig:03-mother-tongue-vs-most-at-home), we should
replace the default axis
names with more informative labels. We can use `\n` to create a line break in
the axis names so that the words after `\n` are printed on a new line. This will
make the axis labels on the plots more readable.
We should also increase the font size to further
improve readability.
```
ggplot(can_lang, aes(x = most_at_home, y = mother_tongue)) +
geom_point() +
xlab("Language spoken most at home \n (number of Canadian residents)") +
ylab("Mother tongue \n (number of Canadian residents)") +
theme(text = element_text(size = 12))
```
Figure 4\.9: Scatter plot of number of Canadians reporting a language as their mother tongue vs the primary language at home with x and y labels.
Okay! The axes and labels in Figure [4\.9](viz.html#fig:03-mother-tongue-vs-most-at-home-labs) are
much more readable and interpretable now. However, the scatter points themselves could use
some work; most of the 214 data points are bunched
up in the lower left\-hand side of the visualization. The data is clumped because
many more people in Canada speak English or French (the two points in
the upper right corner) than other languages.
In particular, the most common mother tongue language
has 19,460,850 speakers,
while the least common has only 10\.
That’s a difference of about six orders of magnitude
between these two numbers!
We can confirm that the two points in the upper right\-hand corner correspond
to Canada’s two official languages by filtering the data:
```
can_lang |>
filter(language == "English" | language == "French")
```
```
## # A tibble: 2 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Official languages English 19460850 22162865 15265335 29748265
## 2 Official languages French 7166700 6943800 3825215 10242945
```
Recall that our question about this data pertains to *all* languages;
so to properly answer our question,
we will need to adjust the scale of the axes so that we can clearly
see all of the scatter points.
In particular, we will improve the plot by adjusting the horizontal
and vertical axes so that they are on a **logarithmic** (or **log**) scale.
Log scaling is useful when your data take both *very large* and *very small* values,
because it helps space out small values and squishes larger values together.
For example, \\(\\log\_{10}(1\) \= 0\\), \\(\\log\_{10}(10\) \= 1\\), \\(\\log\_{10}(100\) \= 2\\), and \\(\\log\_{10}(1000\) \= 3\\);
on the logarithmic scale,
the values 1, 10, 100, and 1000 are all the same distance apart!
So we see that applying this function is moving big values closer together
and moving small values farther apart.
Note that if your data can take the value 0, logarithmic scaling may not
be appropriate (since `log10(0)` is `-Inf` in R). There are other ways to transform
the data in such a case, but these are beyond the scope of the book.
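If you want to check these values for yourself, you can compute base\-10 logarithms directly in the R console with the base `log10` function (a quick check, not part of the plotting code):
```
# base-10 logarithms: 1, 10, 100, and 1000 map to the equally spaced 0, 1, 2, 3,
# while 0 maps to -Inf (which is why log scaling is inappropriate for zero values)
log10(c(0, 1, 10, 100, 1000))
```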
We can accomplish logarithmic scaling in a `ggplot` visualization
using the `scale_x_log10` and `scale_y_log10` functions.
Given that the x and y axes have large numbers, we should also format the axis labels
to put commas in these numbers to increase their readability.
We can do this in R by passing the `label_comma` function (from the `scales` package)
to the `labels` argument of the `scale_x_log10` and `scale_y_log10` functions.
```
library(scales)
ggplot(can_lang, aes(x = most_at_home, y = mother_tongue)) +
geom_point() +
xlab("Language spoken most at home \n (number of Canadian residents)") +
ylab("Mother tongue \n (number of Canadian residents)") +
theme(text = element_text(size = 12)) +
scale_x_log10(labels = label_comma()) +
scale_y_log10(labels = label_comma())
```
Figure 4\.10: Scatter plot of number of Canadians reporting a language as their mother tongue vs the primary language at home with log adjusted x and y axes.
Similar to some of the examples in Chapter [3](wrangling.html#wrangling),
we can convert the counts to percentages to give them context
and make them easier to understand.
We can do this by dividing the number of people reporting a given language
as their mother tongue or primary language at home
by the number of people who live in Canada and multiplying by 100%.
For example,
the percentage of people who reported that their mother tongue was English
in the 2016 Canadian census
was 19,460,850
/ 35,151,728 \\(\\times\\)
100 % \=
55\.36%.
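If you would like to verify this arithmetic yourself, you can compute it directly in R; this is just a quick sanity check, not part of the wrangling code below, and 35,151,728 is the 2016 census population figure used throughout this example:
```
# percentage of Canadians reporting English as their mother tongue (2016 census)
(19460850 / 35151728) * 100  # approximately 55.36
```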
Below we use `mutate` to calculate the percentage of people reporting a given
language as their mother tongue and primary language at home for all the
languages in the `can_lang` data set. Since the new columns are appended to the
end of the data frame, we select the new columns after the transformation so
you can clearly see the newly computed values in the output.
```
can_lang <- can_lang |>
mutate(
mother_tongue_percent = (mother_tongue / 35151728) * 100,
most_at_home_percent = (most_at_home / 35151728) * 100
)
can_lang |>
select(mother_tongue_percent, most_at_home_percent)
```
```
## # A tibble: 214 × 2
## mother_tongue_percent most_at_home_percent
## <dbl> <dbl>
## 1 0.00168 0.000669
## 2 0.0292 0.0136
## 3 0.00327 0.00127
## 4 0.0383 0.0170
## 5 0.0765 0.0374
## 6 0.000128 0.0000284
## 7 0.00358 0.00105
## 8 0.00764 0.00859
## 9 0.0639 0.0364
## 10 1.19 0.636
## # ℹ 204 more rows
```
Finally, we will edit the visualization to use the percentages we just computed
(and change our axis labels to reflect this change in
units). Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props) displays
the final result.
```
ggplot(can_lang, aes(x = most_at_home_percent, y = mother_tongue_percent)) +
geom_point() +
xlab("Language spoken most at home \n (percentage of Canadian residents)") +
ylab("Mother tongue \n (percentage of Canadian residents)") +
theme(text = element_text(size = 12)) +
scale_x_log10(labels = comma) +
scale_y_log10(labels = comma)
```
Figure 4\.11: Scatter plot of percentage of Canadians reporting a language as their mother tongue vs the primary language at home.
Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props) is the appropriate
visualization to use to answer the first question in this section, i.e.,
whether there is a relationship between the percentage of people who speak
a language as their mother tongue and the percentage for whom that
is the primary language spoken at home.
To fully answer the question, we need to use
Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props)
to assess a few key characteristics of the data:
* **Direction:** if the y variable tends to increase when the x variable increases, then y has a **positive** relationship with x. If
y tends to decrease when x increases, then y has a **negative** relationship with x. If y does not meaningfully increase or decrease
as x increases, then y has **little or no** relationship with x.
* **Strength:** if the y variable *reliably* increases, decreases, or stays flat as x increases,
then the relationship is **strong**. Otherwise, the relationship is **weak**. Intuitively,
the relationship is strong when the scatter points are close together and look more like a “line” or “curve” than a “cloud.”
* **Shape:** if you can draw a straight line roughly through the data points, the relationship is **linear**. Otherwise, it is **nonlinear**.
In Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props), we see that
as the percentage of people who have a language as their mother tongue increases,
so does the percentage of people who speak that language at home.
Therefore, there is a **positive** relationship between these two variables.
Furthermore, because the points in Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props)
are fairly close together, and the points look more like a “line” than a “cloud”,
we can say that this is a **strong** relationship.
And finally, because drawing a straight line through these points in
Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props)
would fit the pattern we observe quite well, we say that the relationship is **linear**.
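If you find it helpful to have a visual aid when judging these characteristics, one option (not used elsewhere in this section, so treat it as an optional sketch) is to overlay a straight trend line with `ggplot2`’s `geom_smooth` function using `method = "lm"`; because the log scales are applied first, the line is fit on the log\-transformed axes, and the `can_lang_trend` name below is purely illustrative:
```
# overlay a straight trend line to help judge linearity on the log-log scale
can_lang_trend <- ggplot(can_lang, aes(x = most_at_home_percent,
                                       y = mother_tongue_percent)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  xlab("Language spoken most at home \n (percentage of Canadian residents)") +
  ylab("Mother tongue \n (percentage of Canadian residents)") +
  theme(text = element_text(size = 12)) +
  scale_x_log10(labels = comma) +
  scale_y_log10(labels = comma)
can_lang_trend
```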
Onto the second part of our exploratory data analysis question!
Recall that we are interested in knowing whether the strength
of the relationship we uncovered
in Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props) depends
on the higher\-level language category (Official languages, Aboriginal languages,
and non\-official, non\-Aboriginal languages).
One common way to explore this
is to color the data points on the scatter plot we have already created by
group. For example, given that we have the higher\-level language category for
each language recorded in the 2016 Canadian census, we can color the points in
our previous
scatter plot to represent each language’s higher\-level language category.
Here we want to distinguish the values according to the `category` group to
which they belong. We can add an argument to the `aes` function, specifying
that the `category` column should color the points. Adding this argument will
color the points according to their group and add a legend at the side of the
plot.
```
ggplot(can_lang, aes(x = most_at_home_percent,
y = mother_tongue_percent,
color = category)) +
geom_point() +
xlab("Language spoken most at home \n (percentage of Canadian residents)") +
ylab("Mother tongue \n (percentage of Canadian residents)") +
theme(text = element_text(size = 12)) +
scale_x_log10(labels = comma) +
scale_y_log10(labels = comma)
```
Figure 4\.12: Scatter plot of percentage of Canadians reporting a language as their mother tongue vs the primary language at home colored by language category.
The legend in Figure [4\.12](viz.html#fig:03-scatter-color-by-category)
takes up valuable plot area.
We can improve this by moving the legend using the `legend.position`
and `legend.direction`
arguments of the `theme` function.
Here we set `legend.position` to `"top"` to put the legend above the plot
and `legend.direction` to `"vertical"` so that the legend items remain
vertically stacked on top of each other.
When the `legend.position` is set to either `"top"` or `"bottom"`
the default direction is to stack the legend items horizontally.
However, that will not work well for this particular visualization
because the legend labels are quite long
and would run off the page if displayed this way.
```
ggplot(can_lang, aes(x = most_at_home_percent,
y = mother_tongue_percent,
color = category)) +
geom_point() +
xlab("Language spoken most at home \n (percentage of Canadian residents)") +
ylab("Mother tongue \n (percentage of Canadian residents)") +
theme(text = element_text(size = 12),
legend.position = "top",
legend.direction = "vertical") +
scale_x_log10(labels = comma) +
scale_y_log10(labels = comma)
```
Figure 4\.13: Scatter plot of percentage of Canadians reporting a language as their mother tongue vs the primary language at home colored by language category with the legend edited.
In Figure [4\.13](viz.html#fig:03-scatter-color-by-category-legend-edit), the points are colored with
the default `ggplot2` color palette. But what if you want to use different
colors? In R, two packages that provide alternative color
palettes are `RColorBrewer` ([Neuwirth 2014](#ref-RColorBrewer))
and `ggthemes` ([Arnold 2019](#ref-ggthemes)); in this book we will cover how to use `RColorBrewer`.
You can visualize the list of color
palettes that `RColorBrewer` has to offer with the `display.brewer.all`
function. You can also print a list of color\-blind friendly palettes by adding
`colorblindFriendly = TRUE` to the function.
```
library(RColorBrewer)
display.brewer.all(colorblindFriendly = TRUE)
```
Figure 4\.14: Color palettes available from the `RColorBrewer` R package.
From Figure [4\.14](viz.html#fig:rcolorbrewer),
we can choose the color palette we want to use in our plot.
To change the color palette,
we add the `scale_color_brewer` layer indicating the palette we want to use.
You can use
this [color blindness simulator](https://www.color-blindness.com/coblis-color-blindness-simulator/) to check
if your visualizations
are color\-blind friendly.
Below we pick the `"Set2"` palette, with the result shown
in Figure [4\.15](viz.html#fig:scatter-color-by-category-palette).
We also set the `shape` aesthetic mapping to the `category` variable;
this makes the scatter point shapes different for each category. This kind of
visual redundancy—i.e., conveying the same information with both scatter point color and shape—can
further improve the clarity and accessibility of your visualization.
```
ggplot(can_lang, aes(x = most_at_home_percent,
y = mother_tongue_percent,
color = category,
shape = category)) +
geom_point() +
xlab("Language spoken most at home \n (percentage of Canadian residents)") +
ylab("Mother tongue \n (percentage of Canadian residents)") +
theme(text = element_text(size = 12),
legend.position = "top",
legend.direction = "vertical") +
scale_x_log10(labels = comma) +
scale_y_log10(labels = comma) +
scale_color_brewer(palette = "Set2")
```
Figure 4\.15: Scatter plot of percentage of Canadians reporting a language as their mother tongue vs the primary language at home colored by language category with color\-blind friendly colors.
From the visualization in Figure [4\.15](viz.html#fig:scatter-color-by-category-palette),
we can now clearly see that the vast majority of Canadians reported one of the official languages
as their mother tongue and as the language they speak most often at home.
What do we see when considering the second part of our exploratory question?
Do we see a difference in the relationship
between languages spoken as a mother tongue and as a primary language
at home across the higher\-level language categories?
Based on Figure [4\.15](viz.html#fig:scatter-color-by-category-palette), there does not
appear to be much of a difference.
For each higher\-level language category,
there appears to be a strong, positive, and linear relationship between
the percentage of people who speak a language as their mother tongue
and the percentage who speak it as their primary language at home.
The relationship looks similar regardless of the category.
Does this mean that this relationship is positive for all languages in the
world? And further, can we use this data visualization on its own to predict how many people
have a given language as their mother tongue if we know how many people speak
it as their primary language at home? The answer to both these questions is
“no!” However, with exploratory data analysis, we can create new hypotheses,
ideas, and questions (like the ones at the beginning of this paragraph).
Answering those questions often involves doing more complex analyses, and sometimes
even gathering additional data. We will see more of such complex analyses later on in
this book.
### 4\.5\.4 Bar plots: the island landmass data set
The `islands.csv` data set contains a list of Earth’s landmasses as well as their area (in thousands of square miles) ([McNeil 1977](#ref-islandsdata)).
**Question:** Are the continents (North / South America, Africa, Europe, Asia, Australia, Antarctica) Earth’s seven largest landmasses? If so, what are the next few largest landmasses after those?
To get started, we will read and inspect the data:
```
# islands data
islands_df <- read_csv("data/islands.csv")
islands_df
```
```
## # A tibble: 48 × 3
## landmass size landmass_type
## <chr> <dbl> <chr>
## 1 Africa 11506 Continent
## 2 Antarctica 5500 Continent
## 3 Asia 16988 Continent
## 4 Australia 2968 Continent
## 5 Axel Heiberg 16 Other
## 6 Baffin 184 Other
## 7 Banks 23 Other
## 8 Borneo 280 Other
## 9 Britain 84 Other
## 10 Celebes 73 Other
## # ℹ 38 more rows
```
Here, we have a data frame of Earth’s landmasses,
and are trying to compare their sizes.
The right type of visualization to answer this question is a bar plot.
In a bar plot, the height of each bar represents the value of an *amount*
(a size, count, proportion, percentage, etc.).
They are particularly useful for comparing counts or proportions across different
groups of a categorical variable. Note, however, that bar plots should generally not be
used to display mean or median values, as they hide important information about
the variation of the data. Instead it’s better to show the distribution of
all the individual data points, e.g., using a histogram, which we will discuss further in Section [4\.5\.5](viz.html#histogramsviz).
We specify that we would like to use a bar plot
via the `geom_bar` function in `ggplot2`.
However, by default, `geom_bar` sets the heights
of bars to the number of times a value appears in a data frame (its *count*); here, we want to plot exactly the values in the data frame, i.e.,
the landmass sizes. So we have to pass the `stat = "identity"` argument to `geom_bar`. The result is
shown in Figure [4\.16](viz.html#fig:03-data-islands-bar).
```
islands_bar <- ggplot(islands_df, aes(x = landmass, y = size)) +
geom_bar(stat = "identity")
islands_bar
```
Figure 4\.16: Bar plot of Earth’s landmass sizes with squished labels.
Alright, not bad! The plot in Figure [4\.16](viz.html#fig:03-data-islands-bar) is
definitely the right kind of visualization, as we can clearly see and compare
sizes of landmasses. The major issues are that the smaller landmasses’ sizes
are hard to distinguish, and the names of the landmasses are obscuring each
other as they have been squished into too little space. But remember that the
question we asked was only about the largest landmasses; let’s make the plot a
little bit clearer by keeping only the largest 12 landmasses. We do this using
the `slice_max` function: the `order_by` argument is the name of the column we
want to use for comparing which is largest, and the `n` argument specifies how many
rows to keep. Then to give the labels enough
space, we’ll use horizontal bars instead of vertical ones. We do this by
swapping the `x` and `y` variables.
> **Note:** Recall that in Chapter [1](intro.html#intro), we used `arrange` followed by `slice` to
> obtain the ten rows with the largest values of a variable. We could have instead used
> the `slice_max` function for this purpose. The `slice_max` and `slice_min` functions
> achieve the same goal as `arrange` followed by `slice`, but are slightly more efficient
> because they are specialized for this purpose. In general, it is good to use more specialized
> functions when they are available!
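For comparison, here is a minimal sketch of the `arrange`\-and\-`slice` approach from Chapter [1](intro.html#intro) applied to the landmass data; it should select the same twelve landmasses as the `slice_max` call used below (the `islands_top12_alt` name is purely illustrative):
```
# sort the landmasses in descending order of size, then keep the first 12 rows
islands_top12_alt <- islands_df |>
  arrange(desc(size)) |>
  slice(1:12)
```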
```
islands_top12 <- slice_max(islands_df, order_by = size, n = 12)
islands_bar <- ggplot(islands_top12, aes(x = size, y = landmass)) +
geom_bar(stat = "identity")
islands_bar
```
Figure 4\.17: Bar plot of size for Earth’s largest 12 landmasses.
The plot in Figure [4\.17](viz.html#fig:03-data-islands-bar-2) is definitely clearer now,
and allows us to answer our question
(“Are the top 7 largest landmasses continents?”) in the affirmative.
However, we could still improve this visualization by
coloring the bars based on whether they correspond to a continent,
and by organizing the bars by landmass size rather than by alphabetical order.
The data for coloring the bars is stored in the `landmass_type` column,
so we add the `fill` argument to the aesthetic mapping
and set it to `landmass_type`. We manually select two colors for the bars
using the `scale_fill_manual` function: `"darkorange"`
for orange and `"steelblue"` for blue.
To organize the landmasses by their `size` variable,
we will use the `tidyverse` `fct_reorder` function
in the aesthetic mapping.
The first argument passed to `fct_reorder` is the name of the factor column
whose levels we would like to reorder (here, `landmass`).
The second argument is the column name
that holds the values we would like to use to do the ordering (here, `size`).
The `fct_reorder` function uses ascending order by default,
but this can be changed to descending order
by setting `.desc = TRUE`.
We do this here so that the largest bar will be closest to the axis line,
which is more visually appealing.
To finalize this plot we will customize the axis and legend labels,
and add a title to the chart. Plot titles are not always required, especially when
a title would be redundant with an already\-existing
caption or surrounding context (e.g., in a slide presentation with annotations).
But if you decide to include one, a good plot title should provide the take home message
that you want readers to focus on, e.g., “Earth’s seven largest landmasses are continents,”
or a more general summary of the information displayed, e.g., “Earth’s twelve largest landmasses.”
To make these final adjustments we will use the `labs` function rather than the `xlab` and `ylab` functions
we have seen earlier in this chapter, as `labs` lets us modify the legend label and title in addition to axis labels.
We provide a label for each aesthetic mapping in the plot—in this case, `x`, `y`, and `fill`—as well as one for the `title` argument.
Finally, we again use the `theme` function
to change the font size.
```
islands_bar <- ggplot(islands_top12,
aes(x = size,
y = fct_reorder(landmass, size, .desc = TRUE),
fill = landmass_type)) +
geom_bar(stat = "identity") +
labs(x = "Size (1000 square mi)",
y = "Landmass",
fill = "Type",
title = "Earth's twelve largest landmasses") +
scale_fill_manual(values = c("steelblue", "darkorange")) +
theme(text = element_text(size = 10))
islands_bar
```
Figure 4\.18: Bar plot of size for Earth’s largest 12 landmasses, colored by landmass type, with clearer axes and labels.
The plot in Figure [4\.18](viz.html#fig:03-data-islands-bar-4) is now a very effective
visualization for answering our original questions. Landmasses are organized by
their size, and continents are colored differently than other landmasses,
making it quite clear that continents are the largest seven landmasses.
### 4\.5\.5 Histograms: the Michelson speed of light data set
The `morley` data set
contains measurements of the speed of light
collected in experiments performed in 1879\.
Five experiments were performed,
and in each experiment, 20 runs were performed—meaning that
20 measurements of the speed of light were collected
in each experiment ([Michelson 1882](#ref-lightdata)).
The `morley` data set is available in base R as a data frame,
so it does not need to be loaded.
Because the speed of light is a very large number
(the true value is 299,792\.458 km/sec), the data is coded
to be the measured speed of light minus 299,000\.
This coding allows us to focus on the variations in the measurements, which are generally
much smaller than 299,000\.
If we used the full large speed measurements, the variations in the measurements
would not be noticeable, making it difficult to study the differences between the experiments.
Note that we convert the `morley` data to a tibble to take advantage of the nicer print output
these specialized data frames provide.
**Question:** Given what we know now about the speed of
light (299,792\.458 kilometres per second), how accurate were each of the experiments?
```
# michelson morley experimental data
morley <- as_tibble(morley)
morley
```
```
## # A tibble: 100 × 3
## Expt Run Speed
## <int> <int> <int>
## 1 1 1 850
## 2 1 2 740
## 3 1 3 900
## 4 1 4 1070
## 5 1 5 930
## 6 1 6 850
## 7 1 7 950
## 8 1 8 980
## 9 1 9 980
## 10 1 10 880
## # ℹ 90 more rows
```
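As a quick aside, the `Speed` values shown above are the coded measurements; if you ever want the raw speeds back, you can add the 299,000 offset to the column with `mutate`. A minimal sketch (the `speed_km_sec` column name is just illustrative, and this decoded version is not used in the rest of the section):
```
# undo the coding by adding the 299,000 km/sec offset back to each measurement
morley_decoded <- mutate(morley, speed_km_sec = Speed + 299000)
```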
In this experimental data,
Michelson was trying to measure just a single quantitative number
(the speed of light).
The data set contains many measurements of this single quantity.
To tell how accurate the experiments were,
we need to visualize the distribution of the measurements
(i.e., all their possible values and how often each occurs).
We can do this using a *histogram*.
A histogram
helps us visualize how a particular variable is distributed in a data set
by separating the data into bins,
and then using vertical bars to show how many data points fell in each bin.
To create a histogram in `ggplot2` we will use the `geom_histogram` geometric
object, setting the `x` axis to the `Speed` measurement variable. As usual,
let’s use the default arguments just to see how things look.
```
morley_hist <- ggplot(morley, aes(x = Speed)) +
geom_histogram()
morley_hist
```
Figure 4\.19: Histogram of Michelson’s speed of light data.
Figure [4\.19](viz.html#fig:03-data-morley-hist) is a great start.
However,
we cannot tell how accurate the measurements are using this visualization
unless we can see the true value.
In order to visualize the true speed of light,
we will add a vertical line with the `geom_vline` function.
To draw a vertical line with `geom_vline`,
we need to specify where on the x\-axis the line should be drawn.
We can do this by setting the `xintercept` argument.
Here we set it to 792\.458, which is the true value of light speed
minus 299,000; this ensures it is coded the same way as the
measurements in the `morley` data frame.
We would also like to fine tune this vertical line,
styling it so that it is dashed by setting `linetype = "dashed"`.
There is a similar function, `geom_hline`,
that is used for plotting horizontal lines.
Note that
*vertical lines* are used to denote quantities on the *horizontal axis*,
while *horizontal lines* are used to denote quantities on the *vertical axis*.
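For reference, `geom_hline` takes a `yintercept` argument instead of `xintercept`; a purely illustrative sketch (the count of 5 is arbitrary and not meaningful for the Michelson data) would add a horizontal dashed line to the histogram object we just created:
```
# add an arbitrary horizontal reference line at a count of 5 (illustration only)
morley_hist + geom_hline(yintercept = 5, linetype = "dashed")
```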
```
morley_hist <- ggplot(morley, aes(x = Speed)) +
geom_histogram() +
geom_vline(xintercept = 792.458, linetype = "dashed")
morley_hist
```
Figure 4\.20: Histogram of Michelson’s speed of light data with vertical line indicating true speed of light.
In Figure [4\.20](viz.html#fig:03-data-morley-hist-2),
we still cannot tell which experiments (denoted in the `Expt` column)
led to which measurements;
perhaps some experiments were more accurate than others.
To fully answer our question,
we need to separate the measurements from each other visually.
We can try to do this using a *colored* histogram,
where counts from different experiments are stacked on top of each other
in different colors.
We can create a histogram colored by the `Expt` variable
by adding it to the `fill` aesthetic mapping.
We make sure the different colors can be seen
(despite them all sitting on top of each other)
by setting the `alpha` argument in `geom_histogram` to `0.5`
to make the bars slightly translucent.
We also specify `position = "identity"` in `geom_histogram` to ensure
the histograms for each experiment are overlaid on top of one another
rather than stacked
(stacking is the default for bar plots and histograms
when they are colored by another categorical variable).
```
morley_hist <- ggplot(morley, aes(x = Speed, fill = Expt)) +
geom_histogram(alpha = 0.5, position = "identity") +
geom_vline(xintercept = 792.458, linetype = "dashed")
morley_hist
```
Figure 4\.21: Histogram of Michelson’s speed of light data where an attempt is made to color the bars by experiment.
Alright great, Figure [4\.21](viz.html#fig:03-data-morley-hist-3) looks…wait a second! The
histogram is still all the same color! What is going on here? Well, if you
recall from Chapter [3](wrangling.html#wrangling), the *data type* you use for each variable
can influence how R and `tidyverse` treat it. Here, we indeed have an issue
with the data types in the `morley` data frame. In particular, the `Expt` column
is currently an *integer* (you can see the label `<int>` underneath the `Expt` column in the printed
data frame at the start of this section). But we want to treat it as a
*category*, i.e., there should be one category per type of experiment.
To fix this issue we can convert the `Expt` variable into a *factor* by
passing it to `as_factor` in the `fill` aesthetic mapping.
Recall that a factor is a data type in R that is often used to represent
categories. By writing
`as_factor(Expt)` we are ensuring that R will treat this variable as a factor,
and the color will be mapped discretely.
```
morley_hist <- ggplot(morley, aes(x = Speed, fill = as_factor(Expt))) +
geom_histogram(alpha = 0.5, position = "identity") +
geom_vline(xintercept = 792.458, linetype = "dashed")
morley_hist
```
Figure 4\.22: Histogram of Michelson’s speed of light data colored by experiment as factor.
> **Note:** Factors impact plots in two ways:
> (1\) ensuring that a color is mapped discretely where appropriate (as in this
> example) and (2\) determining the ordering of levels in a plot. `ggplot` takes into account
> the order of the factor levels as opposed to the order of data in
> your data frame. Learning how to reorder your factor levels will help you with
> reordering the labels of a factor on a plot.
Unfortunately, the attempt to separate out the experiment number visually has
created a bit of a mess. All of the colors in Figure
[4\.22](viz.html#fig:03-data-morley-hist-with-factor) are blending together, and although it is
possible to derive *some* insight from this (e.g., experiments 1 and 3 had some
of the most incorrect measurements), it isn’t the clearest way to convey our
message and answer the question. Let’s try a different strategy of creating
a grid of separate histogram plots.
We use the `facet_grid` function to create a plot
that has multiple subplots arranged in a grid.
The argument to `facet_grid` specifies the variable(s) used to split the plot
into subplots, and how to split them (i.e., into rows or columns).
If the plot is to be split horizontally, into rows,
then the `rows` argument is used.
If the plot is to be split vertically, into columns,
then the `cols` argument is used.
Both the `rows` and `cols` arguments take the column names on which to split the data when creating the subplots.
Note that the column names must be surrounded by the `vars` function.
This function allows the column names to be correctly evaluated
in the context of the data frame.
```
morley_hist <- ggplot(morley, aes(x = Speed, fill = as_factor(Expt))) +
geom_histogram() +
facet_grid(rows = vars(Expt)) +
geom_vline(xintercept = 792.458, linetype = "dashed")
morley_hist
```
Figure 4\.23: Histogram of Michelson’s speed of light data split vertically by experiment.
The visualization in Figure [4\.23](viz.html#fig:03-data-morley-hist-4)
now makes it quite clear how accurate the different experiments were
with respect to one another.
The most variable measurements came from Experiment 1\.
There the measurements ranged from about 650–1050 km/sec.
The least variable measurements came from Experiment 2\.
There, the measurements ranged from about 750–950 km/sec.
The most different experiments still obtained quite similar results!
There are two finishing touches to make this visualization even clearer. First and foremost, we need to add informative axis labels
using the `labs` function, and increase the font size to make it readable using the `theme` function. Second, and perhaps more subtly, even though it
is easy to compare the experiments on this plot to one another, it is hard to get a sense
of just how accurate all the experiments were overall. For example, how accurate is the value 800 on the plot, relative to the true speed of light?
To answer this question, we’ll use the `mutate` function to transform our data into a relative measure of accuracy rather than absolute measurements:
```
morley_rel <- mutate(morley,
relative_accuracy = 100 *
((299000 + Speed) - 299792.458) / (299792.458))
morley_hist <- ggplot(morley_rel,
aes(x = relative_accuracy,
fill = as_factor(Expt))) +
geom_histogram() +
facet_grid(rows = vars(Expt)) +
geom_vline(xintercept = 0, linetype = "dashed") +
labs(x = "Relative Accuracy (%)",
y = "# Measurements",
fill = "Experiment ID") +
theme(text = element_text(size = 12))
morley_hist
```
Figure 4\.24: Histogram of relative accuracy split vertically by experiment with clearer axes and labels.
Wow, impressive! These measurements of the speed of light from 1879 had errors around *0\.05%* of the true speed. Figure [4\.24](viz.html#fig:03-data-morley-hist-5) shows you that even though experiments 2 and 5 were perhaps the most accurate, all of the experiments did quite an
admirable job given the technology available at the time.
#### Choosing a binwidth for histograms
When you create a histogram in R, the default number of bins used is 30\.
Naturally, this is not always the right number to use.
You can set the number of bins yourself by using
the `bins` argument in the `geom_histogram` geometric object.
You can also set the *width* of the bins using the
`binwidth` argument in the `geom_histogram` geometric object.
But what number of bins, or bin width, is the right one to use?
Unfortunately there is no hard rule for what the right bin number
or width is. It depends entirely on your problem; the *right* number of bins
or bin width is
the one that *helps you answer the question* you asked.
Choosing the correct setting for your problem
is something that commonly takes iteration.
We recommend setting the *bin width* (not the *number of bins*) because
it often more directly corresponds to values in your problem of interest. For example,
if you are looking at a histogram of human heights,
a bin width of 1 inch would likely be reasonable, while the number of bins to use is
not immediately clear.
It’s usually a good idea to try out several bin widths to see which one
most clearly captures your data in the context of the question
you want to answer.
To get a sense for how different bin widths affect visualizations,
let’s experiment with the histogram that we have been working on in this section.
In Figure [4\.25](viz.html#fig:03-data-morley-hist-binwidth),
we compare the default setting with three other histograms where we set the
`binwidth` to 0\.001, 0\.01 and 0\.1\.
In this case, we can see that both the default number of bins
and the binwidth of 0\.01 are effective for helping answer our question.
On the other hand, the bin widths of 0\.001 and 0\.1 are too small and too big, respectively.
Figure 4\.25: Effect of varying bin width on histograms.
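The code used to produce Figure [4\.25](viz.html#fig:03-data-morley-hist-binwidth) is not shown here, but reproducing any one of its panels only requires adding the `binwidth` (or `bins`) argument to `geom_histogram`. A minimal sketch of the 0\.01 panel, reusing `morley_rel` from above (the other panels would differ only in the `binwidth` value, and the `morley_hist_binned` name is just illustrative):
```
# same faceted histogram as before, but with an explicit bin width of 0.01;
# alternatively, set the number of bins with the `bins` argument
morley_hist_binned <- ggplot(morley_rel,
                             aes(x = relative_accuracy,
                                 fill = as_factor(Expt))) +
  geom_histogram(binwidth = 0.01) +
  facet_grid(rows = vars(Expt)) +
  geom_vline(xintercept = 0, linetype = "dashed")
morley_hist_binned
```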
#### Adding layers to a `ggplot` plot object
One of the powerful features of `ggplot` is that you
can continue to iterate on a single plot object, adding and refining
one layer at a time. If you stored your plot as a named object
using the assignment symbol (`<-`), you can
add to it using the `+` operator.
For example, if we wanted to add a title to the last plot we created (`morley_hist`),
we can use the `+` operator to add a title layer with the `ggtitle` function.
The result is shown in Figure [4\.26](viz.html#fig:03-data-morley-hist-addlayer).
```
morley_hist_title <- morley_hist +
ggtitle("Speed of light experiments \n were accurate to about 0.05%")
morley_hist_title
```
Figure 4\.26: Histogram of relative accuracy split vertically by experiment with a descriptive title highlighting the take home message of the visualization.
> **Note:** Good visualization titles clearly communicate
> the take home message to the audience. Typically,
> that is the answer to the question you posed before making the visualization.
> to treat them as numeric dates and not as character vectors,
> even though they contain non\-numeric characters
> (e.g., in the `date_measured` column in the `co2_df` data frame).
> This means R will not accidentally plot the dates in the wrong order
> (i.e., not alphanumerically as would happen if it was a character vector).
> An in\-depth study of dates and times is beyond the scope of the book,
> but interested readers
> may consult the Dates and Times chapter of *R for Data Science* ([Wickham and Grolemund 2016](#ref-wickham2016r));
> see the additional resources at the end of this chapter.
Since we are investigating a relationship between two variables
(CO\\(\_{\\text{2}}\\) concentration and date),
a scatter plot is a good place to start.
Scatter plots show the data as individual points with `x` (horizontal axis)
and `y` (vertical axis) coordinates.
Here, we will use the measurement date as the `x` coordinate
and the CO\\(\_{\\text{2}}\\) concentration as the `y` coordinate.
When using the `ggplot2` package,
we create a plot object with the `ggplot` function.
There are a few basic aspects of a plot that we need to specify:
* The name of the data frame object to visualize.
+ Here, we specify the `co2_df` data frame.
* The **aesthetic mapping**, which tells `ggplot` how the columns in the data frame map to properties of the visualization.
+ To create an aesthetic mapping, we use the `aes` function.
+ Here, we set the plot `x` axis to the `date_measured` variable, and the plot `y` axis to the `ppm` variable.
* The `+` operator, which tells `ggplot` that we would like to add another layer to the plot.
* The **geometric object**, which specifies how the mapped data should be displayed.
+ To create a geometric object, we use a `geom_*` function (see the [ggplot reference](https://ggplot2.tidyverse.org/reference/) for a list of geometric objects).
+ Here, we use the `geom_point` function to visualize our data as a scatter plot.
```
co2_scatter <- ggplot(co2_df, aes(x = date_measured, y = ppm)) +
geom_point()
co2_scatter
```
Figure 4\.2: Scatter plot of atmospheric concentration of CO\\(\_{2}\\) over time.
The visualization in Figure [4\.2](viz.html#fig:03-data-co2-scatter)
shows a clear upward trend
in the atmospheric concentration of CO\\(\_{\\text{2}}\\) over time.
This plot answers the first part of our question in the affirmative,
but that appears to be the only conclusion one can make
from the scatter visualization.
One important thing to note about this data is that one of the variables
we are exploring is time.
Time is a special kind of quantitative variable
because it forces additional structure on the data—the
data points have a natural order.
Specifically, each observation in the data set has a predecessor
and a successor, and the order of the observations matters; changing their order
alters their meaning.
In situations like this, we typically use a line plot to visualize
the data. Line plots connect the sequence of `x` and `y` coordinates
of the observations with line segments, thereby emphasizing their order.
We can create a line plot in `ggplot` using the `geom_line` function.
Let’s now try to visualize the `co2_df` as a line plot
with just the default arguments:
```
co2_line <- ggplot(co2_df, aes(x = date_measured, y = ppm)) +
geom_line()
co2_line
```
Figure 4\.3: Line plot of atmospheric concentration of CO\\(\_{2}\\) over time.
Aha! Figure [4\.3](viz.html#fig:03-data-co2-line) shows us there *is* another interesting
phenomenon in the data: in addition to increasing over time, the concentration
seems to oscillate as well. Given the visualization as it is now, it is still
hard to tell how fast the oscillation is, but nevertheless, the line seems to
be a better choice for answering the question than the scatter plot was. The
comparison between these two visualizations also illustrates a common issue with
scatter plots: often, the points are shown too close together or even on top of
one another, muddling information that would otherwise be clear
(*overplotting*).
Now that we have settled on the rough details of the visualization, it is time
to refine things. This plot is fairly straightforward, and there is not much
visual noise to remove. But there are a few things we must do to improve
clarity, such as adding informative axis labels and making the font a more
readable size. To add axis labels, we use the `xlab` and `ylab` functions. To
change the font size, we use the `theme` function with the `text` argument:
```
co2_line <- ggplot(co2_df, aes(x = date_measured, y = ppm)) +
geom_line() +
xlab("Year") +
ylab("Atmospheric CO2 (ppm)") +
theme(text = element_text(size = 12))
co2_line
```
Figure 4\.4: Line plot of atmospheric concentration of CO\\(\_{2}\\) over time with clearer axes and labels.
> **Note:** The `theme` function is quite complex and has many arguments
> that can be specified to control many non\-data aspects of a visualization.
> An in\-depth discussion of the `theme` function is beyond the scope of this book.
> Interested readers may consult the `theme` function documentation;
> see the additional resources section at the end of this chapter.
Finally, let’s see if we can better understand the oscillation by changing the
visualization slightly. Note that it is totally fine to use a small number of
visualizations to answer different aspects of the question you are trying to
answer. We will accomplish this by using *scales*,
another important feature of `ggplot2` that easily transforms the different
variables and set limits. We scale the horizontal axis using the `xlim` function,
and the vertical axis with the `ylim` function.
In particular, here, we will use the `xlim` function to zoom in
on just five years of data (say, 1990\-1994\).
`xlim` takes a vector of length two
to specify the upper and lower bounds to limit the axis.
We can create that using the `c` function.
Note that it is important that the vector given to `xlim` must be of the same
type as the data that is mapped to that axis.
Here, we have mapped a date to the x\-axis,
and so we need to use the `date` function
(from the `tidyverse` [`lubridate` R package](https://lubridate.tidyverse.org/) ([Spinu, Grolemund, and Wickham 2021](#ref-lubridate); [Grolemund and Wickham 2011](#ref-lubridatepaper)))
to convert the character strings we provide to `c` to `date` vectors.
> **Note:** `lubridate` is a package that is installed by the `tidyverse` metapackage,
> but is not loaded by it.
> Hence we need to load it separately in the code below.
```
library(lubridate)
co2_line <- ggplot(co2_df, aes(x = date_measured, y = ppm)) +
geom_line() +
xlab("Year") +
ylab("Atmospheric CO2 (ppm)") +
xlim(c(date("1990-01-01"), date("1993-12-01"))) +
theme(text = element_text(size = 12))
co2_line
```
Figure 4\.5: Line plot of atmospheric concentration of CO\\(\_{2}\\) from 1990 to 1994\.
Interesting! It seems that each year, the atmospheric CO\\(\_{\\text{2}}\\) increases until it reaches its peak somewhere around April, decreases until around late September,
and finally increases again until the end of the year. In Hawaii, there are two seasons: summer from May through October, and winter from November through April.
Therefore, the oscillating pattern in CO\\(\_{\\text{2}}\\) matches up fairly closely with the two seasons.
As you might have noticed from the code used to create the final visualization
of the `co2_df` data frame,
we construct the visualizations in `ggplot` with layers.
New layers are added with the `+` operator,
and we can really add as many as we would like!
A useful analogy to constructing a data visualization is painting a picture.
We start with a blank canvas,
and the first thing we do is prepare the surface
for our painting by adding primer.
In our data visualization this is akin to calling `ggplot`
and specifying the data set we will be using.
Next, we sketch out the background of the painting.
In our data visualization,
this would be when we map data to the axes in the `aes` function.
Then we add our key visual subjects to the painting.
In our data visualization,
this would be the geometric objects (e.g., `geom_point`, `geom_line`, etc.).
And finally, we work on adding details and refinements to the painting.
In our data visualization this would be when we fine tune axis labels,
change the font, adjust the point size, and do other related things.
### 4\.5\.2 Scatter plots: the Old Faithful eruption time data set
The `faithful` data set contains measurements
of the waiting time between eruptions
and the subsequent eruption duration (in minutes) of the Old Faithful
geyser in Yellowstone National Park, Wyoming, United States.
The `faithful` data set is available in base R as a data frame,
so it does not need to be loaded.
We convert it to a tibble to take advantage of the nicer print output
these specialized data frames provide.
**Question:**
Is there a relationship between the waiting time before an eruption
and the duration of the eruption?
```
# old faithful eruption time / wait time data
faithful <- as_tibble(faithful)
faithful
```
```
## # A tibble: 272 × 2
## eruptions waiting
## <dbl> <dbl>
## 1 3.6 79
## 2 1.8 54
## 3 3.33 74
## 4 2.28 62
## 5 4.53 85
## 6 2.88 55
## 7 4.7 88
## 8 3.6 85
## 9 1.95 51
## 10 4.35 85
## # ℹ 262 more rows
```
Here again, we investigate the relationship between two quantitative variables
(waiting time and eruption time).
But if you look at the output of the data frame,
you’ll notice that unlike time in the Mauna Loa CO\\(\_{\\text{2}}\\) data set,
neither of the variables here have a natural order to them.
So a scatter plot is likely to be the most appropriate
visualization. Let’s create a scatter plot using the `ggplot`
function with the `waiting` variable on the horizontal axis, the `eruptions`
variable on the vertical axis, and the `geom_point` geometric object.
The result is shown in Figure [4\.6](viz.html#fig:03-data-faithful-scatter).
```
faithful_scatter <- ggplot(faithful, aes(x = waiting, y = eruptions)) +
geom_point()
faithful_scatter
```
Figure 4\.6: Scatter plot of waiting time and eruption time.
We can see in Figure [4\.6](viz.html#fig:03-data-faithful-scatter) that the data tend to fall
into two groups: one with short waiting and eruption times, and one with long
waiting and eruption times. Note that in this case, there is no overplotting:
the points are generally nicely visually separated, and the pattern they form
is clear. In order to refine the visualization, we need only to add axis
labels and make the font more readable:
```
faithful_scatter <- ggplot(faithful, aes(x = waiting, y = eruptions)) +
geom_point() +
xlab("Waiting Time (mins)") +
ylab("Eruption Duration (mins)") +
theme(text = element_text(size = 12))
faithful_scatter
```
Figure 4\.7: Scatter plot of waiting time and eruption time with clearer axes and labels.
### 4\.5\.3 Axis transformation and colored scatter plots: the Canadian languages data set
Recall the `can_lang` data set ([Timbers 2020](#ref-timbers2020canlang)) from Chapters [1](intro.html#intro), [2](reading.html#reading), and [3](wrangling.html#wrangling),
which contains counts of languages from the 2016
Canadian census.
**Question:** Is there a relationship between
the percentage of people who speak a language as their mother tongue and
the percentage for whom that is the primary language spoken at home?
And is there a pattern in the strength of this relationship in the
higher\-level language categories (Official languages, Aboriginal languages, or
non\-official and non\-Aboriginal languages)?
To get started, we will read and inspect the data:
```
can_lang <- read_csv("data/can_lang.csv")
can_lang
```
```
## # A tibble: 214 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Aboriginal langu… Aborigi… 590 235 30 665
## 2 Non-Official & N… Afrikaa… 10260 4785 85 23415
## 3 Non-Official & N… Afro-As… 1150 445 10 2775
## 4 Non-Official & N… Akan (T… 13460 5985 25 22150
## 5 Non-Official & N… Albanian 26895 13135 345 31930
## 6 Aboriginal langu… Algonqu… 45 10 0 120
## 7 Aboriginal langu… Algonqu… 1260 370 40 2480
## 8 Non-Official & N… America… 2685 3020 1145 21930
## 9 Non-Official & N… Amharic 22465 12785 200 33670
## 10 Non-Official & N… Arabic 419890 223535 5585 629055
## # ℹ 204 more rows
```
We will begin with a scatter plot of the `mother_tongue` and `most_at_home` columns from our data frame.
The resulting plot is shown in Figure [4\.8](viz.html#fig:03-mother-tongue-vs-most-at-home).
```
ggplot(can_lang, aes(x = most_at_home, y = mother_tongue)) +
geom_point()
```
Figure 4\.8: Scatter plot of number of Canadians reporting a language as their mother tongue vs the primary language at home.
To make an initial improvement in the interpretability
of Figure [4\.8](viz.html#fig:03-mother-tongue-vs-most-at-home), we should
replace the default axis
names with more informative labels. We can use `\n` to create a line break in
the axis names so that the words after `\n` are printed on a new line. This will
make the axis labels on the plots more readable.
We should also increase the font size to further
improve readability.
```
ggplot(can_lang, aes(x = most_at_home, y = mother_tongue)) +
geom_point() +
xlab("Language spoken most at home \n (number of Canadian residents)") +
ylab("Mother tongue \n (number of Canadian residents)") +
theme(text = element_text(size = 12))
```
Figure 4\.9: Scatter plot of number of Canadians reporting a language as their mother tongue vs the primary language at home with x and y labels.
Okay! The axes and labels in Figure [4\.9](viz.html#fig:03-mother-tongue-vs-most-at-home-labs) are
much more readable and interpretable now. However, the scatter points themselves could use
some work; most of the 214 data points are bunched
up in the lower left\-hand side of the visualization. The data is clumped because
many more people in Canada speak English or French (the two points in
the upper right corner) than other languages.
In particular, the most common mother tongue language
has 19,460,850 speakers,
while the least common has only 10\.
That’s a difference of more than six orders of magnitude
between these two numbers!
We can confirm that the two points in the upper right\-hand corner correspond
to Canada’s two official languages by filtering the data:
```
can_lang |>
filter(language == "English" | language == "French")
```
```
## # A tibble: 2 × 6
## category language mother_tongue most_at_home most_at_work lang_known
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Official languages English 19460850 22162865 15265335 29748265
## 2 Official languages French 7166700 6943800 3825215 10242945
```
Recall that our question about this data pertains to *all* languages;
so to properly answer our question,
we will need to adjust the scale of the axes so that we can clearly
see all of the scatter points.
In particular, we will improve the plot by adjusting the horizontal
and vertical axes so that they are on a **logarithmic** (or **log**) scale.
Log scaling is useful when your data take both *very large* and *very small* values,
because it helps space out small values and squishes larger values together.
For example, \\(\\log\_{10}(1\) \= 0\\), \\(\\log\_{10}(10\) \= 1\\), \\(\\log\_{10}(100\) \= 2\\), and \\(\\log\_{10}(1000\) \= 3\\);
on the logarithmic scale,
the values 1, 10, 100, and 1000 are all the same distance apart!
So applying this transformation pulls big values closer together
and spreads small values farther apart.
Note that if your data can take the value 0, logarithmic scaling may not
be appropriate (since `log10(0)` is `-Inf` in R). There are other ways to transform
the data in such a case, but these are beyond the scope of the book.
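As a quick check of the claims above, you can evaluate the logarithm directly in the R console; the values below are just the ones mentioned in the text.
```
# 1, 10, 100, and 1000 are equally spaced on the log scale:
# this returns 0, 1, 2, and 3
log10(c(1, 10, 100, 1000))

# zero is outside the domain of the logarithm: this returns -Inf
log10(0)
```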
We can accomplish logarithmic scaling in a `ggplot` visualization
using the `scale_x_log10` and `scale_y_log10` functions.
Given that the x and y axes have large numbers, we should also format the axis labels
to put commas in these numbers to increase their readability.
We can do this in R by passing the `label_comma` function (from the `scales` package)
to the `labels` argument of the `scale_x_log10` and `scale_y_log10` functions.
```
library(scales)
ggplot(can_lang, aes(x = most_at_home, y = mother_tongue)) +
geom_point() +
xlab("Language spoken most at home \n (number of Canadian residents)") +
ylab("Mother tongue \n (number of Canadian residents)") +
theme(text = element_text(size = 12)) +
scale_x_log10(labels = label_comma()) +
scale_y_log10(labels = label_comma())
```
Figure 4\.10: Scatter plot of number of Canadians reporting a language as their mother tongue vs the primary language at home with log adjusted x and y axes.
Similar to some of the examples in Chapter [3](wrangling.html#wrangling),
we can convert the counts to percentages to give them context
and make them easier to understand.
We can do this by dividing the number of people reporting a given language
as their mother tongue or primary language at home
by the number of people who live in Canada and multiplying by 100%.
For example,
the percentage of people who reported that their mother tongue was English
in the 2016 Canadian census
was 19,460,850 / 35,151,728 \\(\\times\\) 100 % \= 55\.36%.
Below we use `mutate` to calculate the percentage of people reporting a given
language as their mother tongue and primary language at home for all the
languages in the `can_lang` data set. Since the new columns are appended to the
end of the data frame, we select just the new columns after the transformation so
you can clearly see the mutated output.
```
can_lang <- can_lang |>
mutate(
mother_tongue_percent = (mother_tongue / 35151728) * 100,
most_at_home_percent = (most_at_home / 35151728) * 100
)
can_lang |>
select(mother_tongue_percent, most_at_home_percent)
```
```
## # A tibble: 214 × 2
## mother_tongue_percent most_at_home_percent
## <dbl> <dbl>
## 1 0.00168 0.000669
## 2 0.0292 0.0136
## 3 0.00327 0.00127
## 4 0.0383 0.0170
## 5 0.0765 0.0374
## 6 0.000128 0.0000284
## 7 0.00358 0.00105
## 8 0.00764 0.00859
## 9 0.0639 0.0364
## 10 1.19 0.636
## # ℹ 204 more rows
```
Finally, we will edit the visualization to use the percentages we just computed
(and change our axis labels to reflect this change in
units). Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props) displays
the final result.
```
ggplot(can_lang, aes(x = most_at_home_percent, y = mother_tongue_percent)) +
geom_point() +
xlab("Language spoken most at home \n (percentage of Canadian residents)") +
ylab("Mother tongue \n (percentage of Canadian residents)") +
theme(text = element_text(size = 12)) +
scale_x_log10(labels = comma) +
scale_y_log10(labels = comma)
```
Figure 4\.11: Scatter plot of percentage of Canadians reporting a language as their mother tongue vs the primary language at home.
Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props) is the appropriate
visualization to use to answer the first question in this section, i.e.,
whether there is a relationship between the percentage of people who speak
a language as their mother tongue and the percentage for whom that
is the primary language spoken at home.
To fully answer the question, we need to use
Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props)
to assess a few key characteristics of the data:
* **Direction:** if the y variable tends to increase when the x variable increases, then y has a **positive** relationship with x. If
y tends to decrease when x increases, then y has a **negative** relationship with x. If y does not meaningfully increase or decrease
as x increases, then y has **little or no** relationship with x.
* **Strength:** if the y variable *reliably* increases, decreases, or stays flat as x increases,
then the relationship is **strong**. Otherwise, the relationship is **weak**. Intuitively,
the relationship is strong when the scatter points are close together and look more like a “line” or “curve” than a “cloud.”
* **Shape:** if you can draw a straight line roughly through the data points, the relationship is **linear**. Otherwise, it is **nonlinear**.
In Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props), we see that
as the percentage of people who have a language as their mother tongue increases,
so does the percentage of people who speak that language at home.
Therefore, there is a **positive** relationship between these two variables.
Furthermore, because the points in Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props)
are fairly close together, and the points look more like a “line” than a “cloud”,
we can say that this is a **strong** relationship.
And finally, because drawing a straight line through these points in
Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props)
would fit the pattern we observe quite well, we say that the relationship is **linear**.
Onto the second part of our exploratory data analysis question!
Recall that we are interested in knowing whether the strength
of the relationship we uncovered
in Figure [4\.11](viz.html#fig:03-mother-tongue-vs-most-at-home-scale-props) depends
on the higher\-level language category (Official languages, Aboriginal languages,
and non\-official, non\-Aboriginal languages).
One common way to explore this
is to color the data points on the scatter plot we have already created by
group. For example, given that we have the higher\-level language category for
each language recorded in the 2016 Canadian census, we can color the points in
our previous
scatter plot to represent each language’s higher\-level language category.
Here we want to distinguish the values according to the `category` group to
which they belong. We can add an argument to the `aes` function, specifying
that the `category` column should color the points. Adding this argument will
color the points according to their group and add a legend at the side of the
plot.
```
ggplot(can_lang, aes(x = most_at_home_percent,
y = mother_tongue_percent,
color = category)) +
geom_point() +
xlab("Language spoken most at home \n (percentage of Canadian residents)") +
ylab("Mother tongue \n (percentage of Canadian residents)") +
theme(text = element_text(size = 12)) +
scale_x_log10(labels = comma) +
scale_y_log10(labels = comma)
```
Figure 4\.12: Scatter plot of percentage of Canadians reporting a language as their mother tongue vs the primary language at home colored by language category.
The legend in Figure [4\.12](viz.html#fig:03-scatter-color-by-category)
takes up valuable plot area.
We can improve this by repositioning the legend using the `legend.position`
and `legend.direction`
arguments of the `theme` function.
Here we set `legend.position` to `"top"` to put the legend above the plot
and `legend.direction` to `"vertical"` so that the legend items remain
vertically stacked on top of each other.
When the `legend.position` is set to either `"top"` or `"bottom"`
the default direction is to stack the legend items horizontally.
However, that will not work well for this particular visualization
because the legend labels are quite long
and would run off the page if displayed this way.
```
ggplot(can_lang, aes(x = most_at_home_percent,
y = mother_tongue_percent,
color = category)) +
geom_point() +
xlab("Language spoken most at home \n (percentage of Canadian residents)") +
ylab("Mother tongue \n (percentage of Canadian residents)") +
theme(text = element_text(size = 12),
legend.position = "top",
legend.direction = "vertical") +
scale_x_log10(labels = comma) +
scale_y_log10(labels = comma)
```
Figure 4\.13: Scatter plot of percentage of Canadians reporting a language as their mother tongue vs the primary language at home colored by language category with the legend edited.
In Figure [4\.13](viz.html#fig:03-scatter-color-by-category-legend-edit), the points are colored with
the default `ggplot2` color palette. But what if you want to use different
colors? In R, two packages that provide alternative color
palettes are `RColorBrewer` ([Neuwirth 2014](#ref-RColorBrewer))
and `ggthemes` ([Arnold 2019](#ref-ggthemes)); in this book we will cover how to use `RColorBrewer`.
You can visualize the list of color
palettes that `RColorBrewer` has to offer with the `display.brewer.all`
function. You can also print a list of color\-blind friendly palettes by adding
`colorblindFriendly = TRUE` to the function.
```
library(RColorBrewer)
display.brewer.all(colorblindFriendly = TRUE)
```
Figure 4\.14: Color palettes available from the `RColorBrewer` R package.
From Figure [4\.14](viz.html#fig:rcolorbrewer),
we can choose the color palette we want to use in our plot.
To change the color palette,
we add the `scale_color_brewer` layer indicating the palette we want to use.
You can use
this [color blindness simulator](https://www.color-blindness.com/coblis-color-blindness-simulator/) to check
if your visualizations
are color\-blind friendly.
Below we pick the `"Set2"` palette, with the result shown
in Figure [4\.15](viz.html#fig:scatter-color-by-category-palette).
We also map the `shape` aesthetic to the `category` variable;
this makes the scatter point shapes different for each category. This kind of
visual redundancy—i.e., conveying the same information with both scatter point color and shape—can
further improve the clarity and accessibility of your visualization.
```
ggplot(can_lang, aes(x = most_at_home_percent,
y = mother_tongue_percent,
color = category,
shape = category)) +
geom_point() +
xlab("Language spoken most at home \n (percentage of Canadian residents)") +
ylab("Mother tongue \n (percentage of Canadian residents)") +
theme(text = element_text(size = 12),
legend.position = "top",
legend.direction = "vertical") +
scale_x_log10(labels = comma) +
scale_y_log10(labels = comma) +
scale_color_brewer(palette = "Set2")
```
Figure 4\.15: Scatter plot of percentage of Canadians reporting a language as their mother tongue vs the primary language at home colored by language category with color\-blind friendly colors.
From the visualization in Figure [4\.15](viz.html#fig:scatter-color-by-category-palette),
we can now clearly see that the vast majority of Canadians reported one of the official languages
as their mother tongue and as the language they speak most often at home.
What do we see when considering the second part of our exploratory question?
Do we see a difference in the relationship
between languages spoken as a mother tongue and as a primary language
at home across the higher\-level language categories?
Based on Figure [4\.15](viz.html#fig:scatter-color-by-category-palette), there does not
appear to be much of a difference.
For each higher\-level language category,
there appears to be a strong, positive, and linear relationship between
the percentage of people who speak a language as their mother tongue
and the percentage who speak it as their primary language at home.
The relationship looks similar regardless of the category.
Does this mean that this relationship is positive for all languages in the
world? And further, can we use this data visualization on its own to predict how many people
have a given language as their mother tongue if we know how many people speak
it as their primary language at home? The answer to both these questions is
“no!” However, with exploratory data analysis, we can create new hypotheses,
ideas, and questions (like the ones at the beginning of this paragraph).
Answering those questions often involves doing more complex analyses, and sometimes
even gathering additional data. We will see more of such complex analyses later on in
this book.
### 4\.5\.4 Bar plots: the island landmass data set
The `islands.csv` data set contains a list of Earth’s landmasses as well as their area (in thousands of square miles) ([McNeil 1977](#ref-islandsdata)).
**Question:** Are the continents (North / South America, Africa, Europe, Asia, Australia, Antarctica) Earth’s seven largest landmasses? If so, what are the next few largest landmasses after those?
To get started, we will read and inspect the data:
```
# islands data
islands_df <- read_csv("data/islands.csv")
islands_df
```
```
## # A tibble: 48 × 3
## landmass size landmass_type
## <chr> <dbl> <chr>
## 1 Africa 11506 Continent
## 2 Antarctica 5500 Continent
## 3 Asia 16988 Continent
## 4 Australia 2968 Continent
## 5 Axel Heiberg 16 Other
## 6 Baffin 184 Other
## 7 Banks 23 Other
## 8 Borneo 280 Other
## 9 Britain 84 Other
## 10 Celebes 73 Other
## # ℹ 38 more rows
```
Here, we have a data frame of Earth’s landmasses,
and are trying to compare their sizes.
The right type of visualization to answer this question is a bar plot.
In a bar plot, the height of each bar represents the value of an *amount*
(a size, count, proportion, percentage, etc).
They are particularly useful for comparing counts or proportions across different
groups of a categorical variable. Note, however, that bar plots should generally not be
used to display mean or median values, as they hide important information about
the variation of the data. Instead it’s better to show the distribution of
all the individual data points, e.g., using a histogram, which we will discuss further in Section [4\.5\.5](viz.html#histogramsviz).
We specify that we would like to use a bar plot
via the `geom_bar` function in `ggplot2`.
However, by default, `geom_bar` sets the heights
of bars to the number of times a value appears in a data frame (its *count*); here, we want to plot exactly the values in the data frame, i.e.,
the landmass sizes. So we have to pass the `stat = "identity"` argument to `geom_bar`. The result is
shown in Figure [4\.16](viz.html#fig:03-data-islands-bar).
```
islands_bar <- ggplot(islands_df, aes(x = landmass, y = size)) +
geom_bar(stat = "identity")
islands_bar
```
Figure 4\.16: Bar plot of Earth’s landmass sizes with squished labels.
Alright, not bad! The plot in Figure [4\.16](viz.html#fig:03-data-islands-bar) is
definitely the right kind of visualization, as we can clearly see and compare
sizes of landmasses. The major issues are that the smaller landmasses’ sizes
are hard to distinguish, and the names of the landmasses are obscuring each
other as they have been squished into too little space. But remember that the
question we asked was only about the largest landmasses; let’s make the plot a
little bit clearer by keeping only the largest 12 landmasses. We do this using
the `slice_max` function: the `order_by` argument is the name of the column we
want to use for comparing which is largest, and the `n` argument specifies how many
rows to keep. Then to give the labels enough
space, we’ll use horizontal bars instead of vertical ones. We do this by
swapping the `x` and `y` variables.
> **Note:** Recall that in Chapter [1](intro.html#intro), we used `arrange` followed by `slice` to
> obtain the ten rows with the largest values of a variable. We could have instead used
> the `slice_max` function for this purpose. The `slice_max` and `slice_min` functions
> achieve the same goal as `arrange` followed by `slice`, but are slightly more efficient
> because they are specialized for this purpose. In general, it is good to use more specialized
> functions when they are available!
```
islands_top12 <- slice_max(islands_df, order_by = size, n = 12)
islands_bar <- ggplot(islands_top12, aes(x = size, y = landmass)) +
geom_bar(stat = "identity")
islands_bar
```
Figure 4\.17: Bar plot of size for Earth’s largest 12 landmasses.
The plot in Figure [4\.17](viz.html#fig:03-data-islands-bar-2) is definitely clearer now,
and allows us to answer our question
(“Are the top 7 largest landmasses continents?”) in the affirmative.
However, we could still improve this visualization by
coloring the bars based on whether they correspond to a continent,
and by organizing the bars by landmass size rather than by alphabetical order.
The data for coloring the bars is stored in the `landmass_type` column,
so we add the `fill` argument to the aesthetic mapping
and set it to `landmass_type`. We manually select two colors for the bars
using the `scale_fill_manual` function: `"steelblue"` and `"darkorange"`.
To organize the landmasses by their `size` variable,
we will use the `tidyverse` `fct_reorder` function
in the aesthetic mapping.
The first argument passed to `fct_reorder` is the name of the factor column
whose levels we would like to reorder (here, `landmass`).
The second argument is the column name
that holds the values we would like to use to do the ordering (here, `size`).
The `fct_reorder` function uses ascending order by default,
but this can be changed to descending order
by setting `.desc = TRUE`.
We do this here so that the largest bar will be closest to the axis line,
which is more visually appealing.
To finalize this plot we will customize the axis and legend labels,
and add a title to the chart. Plot titles are not always required, especially when
it would be redundant with an already\-existing
caption or surrounding context (e.g., in a slide presentation with annotations).
But if you decide to include one, a good plot title should provide the take home message
that you want readers to focus on, e.g., “Earth’s seven largest landmasses are continents,”
or a more general summary of the information displayed, e.g., “Earth’s twelve largest landmasses.”
To make these final adjustments we will use the `labs` function rather than the `xlab` and `ylab` functions
we have seen earlier in this chapter, as `labs` lets us modify the legend label and title in addition to axis labels.
We provide a label for each aesthetic mapping in the plot—in this case, `x`, `y`, and `fill`—as well as one for the `title` argument.
Finally, we again use the `theme` function
to change the font size.
```
islands_bar <- ggplot(islands_top12,
aes(x = size,
y = fct_reorder(landmass, size, .desc = TRUE),
fill = landmass_type)) +
geom_bar(stat = "identity") +
labs(x = "Size (1000 square mi)",
y = "Landmass",
fill = "Type",
title = "Earth's twelve largest landmasses") +
scale_fill_manual(values = c("steelblue", "darkorange")) +
theme(text = element_text(size = 10))
islands_bar
```
Figure 4\.18: Bar plot of size for Earth’s largest 12 landmasses, colored by landmass type, with clearer axes and labels.
The plot in Figure [4\.18](viz.html#fig:03-data-islands-bar-4) is now a very effective
visualization for answering our original questions. Landmasses are organized by
their size, and continents are colored differently than other landmasses,
making it quite clear that continents are the largest seven landmasses.
### 4\.5\.5 Histograms: the Michelson speed of light data set
The `morley` data set
contains measurements of the speed of light
collected in experiments performed in 1879\.
Five experiments were performed,
and in each experiment, 20 runs were performed—meaning that
20 measurements of the speed of light were collected
in each experiment ([Michelson 1882](#ref-lightdata)).
The `morley` data set is available in base R as a data frame,
so it does not need to be loaded.
Because the speed of light is a very large number
(the true value is 299,792\.458 km/sec), the data is coded
to be the measured speed of light minus 299,000\.
This coding allows us to focus on the variations in the measurements, which are generally
much smaller than 299,000\.
If we used the full large speed measurements, the variations in the measurements
would not be noticeable, making it difficult to study the differences between the experiments.
Note that we convert the `morley` data to a tibble to take advantage of the nicer print output
these specialized data frames provide.
**Question:** Given what we know now about the speed of
light (299,792\.458 kilometres per second), how accurate were each of the experiments?
```
# michelson morley experimental data
morley <- as_tibble(morley)
morley
```
```
## # A tibble: 100 × 3
## Expt Run Speed
## <int> <int> <int>
## 1 1 1 850
## 2 1 2 740
## 3 1 3 900
## 4 1 4 1070
## 5 1 5 930
## 6 1 6 850
## 7 1 7 950
## 8 1 8 980
## 9 1 9 980
## 10 1 10 880
## # ℹ 90 more rows
```
In this experimental data,
Michelson was trying to measure just a single quantitative number
(the speed of light).
The data set contains many measurements of this single quantity.
To tell how accurate the experiments were,
we need to visualize the distribution of the measurements
(i.e., all their possible values and how often each occurs).
We can do this using a *histogram*.
A histogram
helps us visualize how a particular variable is distributed in a data set
by separating the data into bins,
and then using vertical bars to show how many data points fell in each bin.
To create a histogram in `ggplot2` we will use the `geom_histogram` geometric
object, setting the `x` axis to the `Speed` measurement variable. As usual,
let’s use the default arguments just to see how things look.
```
morley_hist <- ggplot(morley, aes(x = Speed)) +
geom_histogram()
morley_hist
```
Figure 4\.19: Histogram of Michelson’s speed of light data.
Figure [4\.19](viz.html#fig:03-data-morley-hist) is a great start.
However,
we cannot tell how accurate the measurements are using this visualization
unless we can see the true value.
In order to visualize the true speed of light,
we will add a vertical line with the `geom_vline` function.
To draw a vertical line with `geom_vline`,
we need to specify where on the x\-axis the line should be drawn.
We can do this by setting the `xintercept` argument.
Here we set it to 792\.458, which is the true value of light speed
minus 299,000; this ensures it is coded the same way as the
measurements in the `morley` data frame.
We would also like to fine tune this vertical line,
styling it so that it is dashed by setting `linetype = "dashed"`.
There is a similar function, `geom_hline`,
that is used for plotting horizontal lines.
Note that
*vertical lines* are used to denote quantities on the *horizontal axis*,
while *horizontal lines* are used to denote quantities on the *vertical axis*.
```
morley_hist <- ggplot(morley, aes(x = Speed)) +
geom_histogram() +
geom_vline(xintercept = 792.458, linetype = "dashed")
morley_hist
```
Figure 4\.20: Histogram of Michelson’s speed of light data with vertical line indicating true speed of light.
In Figure [4\.20](viz.html#fig:03-data-morley-hist-2),
we still cannot tell which experiments (denoted in the `Expt` column)
led to which measurements;
perhaps some experiments were more accurate than others.
To fully answer our question,
we need to separate the measurements from each other visually.
We can try to do this using a *colored* histogram,
where counts from different experiments are stacked on top of each other
in different colors.
We can create a histogram colored by the `Expt` variable
by adding it to the `fill` aesthetic mapping.
We make sure the different colors can be seen
(despite them all sitting on top of each other)
by setting the `alpha` argument in `geom_histogram` to `0.5`
to make the bars slightly translucent.
We also specify `position = "identity"` in `geom_histogram` to ensure
the histograms for each experiment will be overlaid on top of one another,
instead of being stacked
(which is the default for bar plots or histograms
when they are colored by another categorical variable).
```
morley_hist <- ggplot(morley, aes(x = Speed, fill = Expt)) +
geom_histogram(alpha = 0.5, position = "identity") +
geom_vline(xintercept = 792.458, linetype = "dashed")
morley_hist
```
Figure 4\.21: Histogram of Michelson’s speed of light data where an attempt is made to color the bars by experiment.
Alright great, Figure [4\.21](viz.html#fig:03-data-morley-hist-3) looks…wait a second! The
histogram is still all the same color! What is going on here? Well, if you
recall from Chapter [3](wrangling.html#wrangling), the *data type* you use for each variable
can influence how R and `tidyverse` treats it. Here, we indeed have an issue
with the data types in the `morley` data frame. In particular, the `Expt` column
is currently an *integer* (you can see the label `<int>` underneath the `Expt` column in the printed
data frame at the start of this section). But we want to treat it as a
*category*, i.e., there should be one category per type of experiment.
To fix this issue we can convert the `Expt` variable into a *factor* by
passing it to `as_factor` in the `fill` aesthetic mapping.
Recall that factor is a data type in R that is often used to represent
categories. By writing
`as_factor(Expt)` we are ensuring that R will treat this variable as a factor,
and the color will be mapped discretely.
```
morley_hist <- ggplot(morley, aes(x = Speed, fill = as_factor(Expt))) +
geom_histogram(alpha = 0.5, position = "identity") +
geom_vline(xintercept = 792.458, linetype = "dashed")
morley_hist
```
Figure 4\.22: Histogram of Michelson’s speed of light data colored by experiment as factor.
> **Note:** Factors impact plots in two ways:
> (1\) ensuring a color is mapped as discretely where appropriate (as in this
> example) and (2\) the ordering of levels in a plot. `ggplot` takes into account
> the order of the factor levels as opposed to the order of data in
> your data frame. Learning how to reorder your factor levels will help you with
> reordering the labels of a factor on a plot.
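As a small illustration of reordering factor levels (not a step we need for this particular plot), the sketch below uses the `fct_relevel` function, which—like `fct_reorder`—comes with the `tidyverse`. Moving the level `"5"` to the front changes the order in which the experiments are listed in the legend.
```
# a minimal sketch: move experiment 5 to the front of the factor levels;
# ggplot will then list it first in the legend
morley_hist_releveled <- ggplot(morley,
                                aes(x = Speed,
                                    fill = fct_relevel(as_factor(Expt), "5"))) +
  geom_histogram(alpha = 0.5, position = "identity") +
  labs(fill = "Experiment")
morley_hist_releveled
```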
Unfortunately, the attempt to separate out the experiment number visually has
created a bit of a mess. All of the colors in Figure
[4\.22](viz.html#fig:03-data-morley-hist-with-factor) are blending together, and although it is
possible to derive *some* insight from this (e.g., experiments 1 and 3 had some
of the most incorrect measurements), it isn’t the clearest way to convey our
message and answer the question. Let’s try a different strategy: creating a
grid of separate histogram plots.
We use the `facet_grid` function to create a plot
that has multiple subplots arranged in a grid.
The argument to `facet_grid` specifies the variable(s) used to split the plot
into subplots, and how to split them (i.e., into rows or columns).
If the plot is to be split horizontally, into rows,
then the `rows` argument is used.
If the plot is to be split vertically, into columns,
then the `cols` argument is used.
Both the `rows` and `cols` arguments take the column names on which to split the data when creating the subplots.
Note that the column names must be surrounded by the `vars` function.
This function allows the column names to be correctly evaluated
in the context of the data frame.
```
morley_hist <- ggplot(morley, aes(x = Speed, fill = as_factor(Expt))) +
geom_histogram() +
facet_grid(rows = vars(Expt)) +
geom_vline(xintercept = 792.458, linetype = "dashed")
morley_hist
```
Figure 4\.23: Histogram of Michelson’s speed of light data split vertically by experiment.
The visualization in Figure [4\.23](viz.html#fig:03-data-morley-hist-4)
now makes it quite clear how accurate the different experiments were
with respect to one another.
The most variable measurements came from Experiment 1\.
There the measurements ranged from about 650–1050 km/sec.
The least variable measurements came from Experiment 2\.
There, the measurements ranged from about 750–950 km/sec.
Even the most divergent experiments still obtained quite similar results!
There are two finishing touches to make this visualization even clearer. First and foremost, we need to add informative axis labels
using the `labs` function, and increase the font size to make it readable using the `theme` function. Second, and perhaps more subtly, even though it
is easy to compare the experiments on this plot to one another, it is hard to get a sense
of just how accurate all the experiments were overall. For example, how accurate is the value 800 on the plot, relative to the true speed of light?
To answer this question, we’ll use the `mutate` function to transform our data into a relative measure of accuracy rather than absolute measurements:
```
morley_rel <- mutate(morley,
relative_accuracy = 100 *
((299000 + Speed) - 299792.458) / (299792.458))
morley_hist <- ggplot(morley_rel,
aes(x = relative_accuracy,
fill = as_factor(Expt))) +
geom_histogram() +
facet_grid(rows = vars(Expt)) +
geom_vline(xintercept = 0, linetype = "dashed") +
labs(x = "Relative Accuracy (%)",
y = "# Measurements",
fill = "Experiment ID") +
theme(text = element_text(size = 12))
morley_hist
```
Figure 4\.24: Histogram of relative accuracy split vertically by experiment with clearer axes and labels.
Wow, impressive! These measurements of the speed of light from 1879 had errors around *0\.05%* of the true speed. Figure [4\.24](viz.html#fig:03-data-morley-hist-5) shows you that even though experiments 2 and 5 were perhaps the most accurate, all of the experiments did quite an
admirable job given the technology available at the time.
#### Choosing a binwidth for histograms
When you create a histogram in R, the default number of bins used is 30\.
Naturally, this is not always the right number to use.
You can set the number of bins yourself by using
the `bins` argument in the `geom_histogram` geometric object.
You can also set the *width* of the bins using the
`binwidth` argument in the `geom_histogram` geometric object.
But what number of bins, or bin width, is the right one to use?
Unfortunately there is no hard rule for what the right bin number
or width is. It depends entirely on your problem; the *right* number of bins
or bin width is
the one that *helps you answer the question* you asked.
Choosing the correct setting for your problem
is something that commonly takes iteration.
We recommend setting the *bin width* (not the *number of bins*) because
it often more directly corresponds to values in your problem of interest. For example,
if you are looking at a histogram of human heights,
a bin width of 1 inch would likely be reasonable, while the number of bins to use is
not immediately clear.
It’s usually a good idea to try out several bin widths to see which one
most clearly captures your data in the context of the question
you want to answer.
To get a sense for how different bin widths affect visualizations,
let’s experiment with the histogram that we have been working on in this section.
In Figure [4\.25](viz.html#fig:03-data-morley-hist-binwidth),
we compare the default setting with three other histograms where we set the
`binwidth` to 0\.001, 0\.01 and 0\.1\.
In this case, we can see that both the default number of bins
and the binwidth of 0\.01 are effective for helping answer our question.
On the other hand, the bin widths of 0\.001 and 0\.1 are too small and too big, respectively.
Figure 4\.25: Effect of varying bin width on histograms.
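The code for Figure [4\.25](viz.html#fig:03-data-morley-hist-binwidth) is not shown, but as a rough sketch, one of the four histograms it compares could be produced by adding a `binwidth` argument to `geom_histogram` in the faceted histogram we built earlier (the `morley_rel` data frame and the value 0\.01 both come from the discussion above):
```
morley_hist_binwidth <- ggplot(morley_rel,
                               aes(x = relative_accuracy,
                                   fill = as_factor(Expt))) +
  geom_histogram(binwidth = 0.01) +
  facet_grid(rows = vars(Expt)) +
  geom_vline(xintercept = 0, linetype = "dashed") +
  labs(x = "Relative Accuracy (%)",
       y = "# Measurements",
       fill = "Experiment ID")
morley_hist_binwidth
```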
#### Adding layers to a `ggplot` plot object
One of the powerful features of `ggplot` is that you
can continue to iterate on a single plot object, adding and refining
one layer at a time. If you stored your plot as a named object
using the assignment symbol (`<-`), you can
add to it using the `+` operator.
For example, if we wanted to add a title to the last plot we created (`morley_hist`),
we can use the `+` operator to add a title layer with the `ggtitle` function.
The result is shown in Figure [4\.26](viz.html#fig:03-data-morley-hist-addlayer).
```
morley_hist_title <- morley_hist +
ggtitle("Speed of light experiments \n were accurate to about 0.05%")
morley_hist_title
```
Figure 4\.26: Histogram of relative accuracy split vertically by experiment with a descriptive title highlighting the take home message of the visualization.
> **Note:** Good visualization titles clearly communicate
> the take home message to the audience. Typically,
> that is the answer to the question you posed before making the visualization.
4\.6 Explaining the visualization
---------------------------------
#### *Tell a story*
Typically, your visualization will not be shown entirely on its own, but rather
it will be part of a larger presentation. Further, visualizations can provide
supporting information for any aspect of a presentation, from opening to
conclusion. For example, you could use an exploratory visualization in the
opening of the presentation to motivate your choice of a more detailed data
analysis / model, a visualization of the results of your analysis to show what
your analysis has uncovered, or even one at the end of a presentation to help
suggest directions for future work.
Regardless of where it appears, a good way to discuss your visualization is as
a story:
1. Establish the setting and scope, and describe why you did what you did.
2. Pose the question that your visualization answers. Justify why the question is important to answer.
3. Answer the question using your visualization. Make sure you describe *all* aspects of the visualization (including describing the axes). But you
can emphasize different aspects based on what is important to answer your question:
* **trends (lines):** Does a line describe the trend well? If so, the trend is *linear*, and if not, the trend is *nonlinear*. Is the trend increasing, decreasing, or neither?
Is there a periodic oscillation (wiggle) in the trend? Is the trend noisy (does the line “jump around” a lot) or smooth?
* **distributions (scatters, histograms):** How spread out are the data? Where are they centered, roughly? Are there any obvious “clusters” or “subgroups”, which would be visible as multiple bumps in the histogram?
* **distributions of two variables (scatters):** Is there a clear / strong relationship between the variables (points fall in a distinct pattern), a weak one (points fall in a pattern but there is some noise), or no discernible
relationship (the data are too noisy to make any conclusion)?
* **amounts (bars):** How large are the bars relative to one another? Are there patterns in different groups of bars?
4. Summarize your findings, and use them to motivate whatever you will discuss next.
Below are two examples of how one might take these four steps in describing the example visualizations that appeared earlier in this chapter.
Each of the steps is denoted by its numeral in parentheses, e.g. (3\).
**Mauna Loa Atmospheric CO\\(\_{\\text{2}}\\) Measurements:** (1\) Many
current forms of energy generation and conversion—from automotive
engines to natural gas power plants—rely on burning fossil fuels and produce
greenhouse gases, typically primarily carbon dioxide (CO\\(\_{\\text{2}}\\)), as a
byproduct. Too much of these gases in the Earth’s atmosphere will cause it to
trap more heat from the sun, leading to global warming. (2\) In order to assess
how quickly the atmospheric concentration of CO\\(\_{\\text{2}}\\) is increasing over
time, we (3\) used a data set from the Mauna Loa observatory in Hawaii,
consisting of CO\\(\_{\\text{2}}\\) measurements from 1980 to 2020\. We plotted the
measured concentration of CO\\(\_{\\text{2}}\\) (on the vertical axis) over time (on
the horizontal axis). From this plot, you can see a clear, increasing, and
generally linear trend over time. There is also a periodic oscillation that
occurs once per year and aligns with Hawaii’s seasons, with an amplitude that
is small relative to the growth in the overall trend. This shows that
atmospheric CO\\(\_{\\text{2}}\\) is clearly increasing over time, and (4\) it is
perhaps worth investigating more into the causes.
**Michelson Light Speed Experiments:** (1\) Our
modern understanding of the physics of light has advanced significantly from
the late 1800s when Michelson and Morley’s experiments first demonstrated that
it had a finite speed. We now know, based on modern experiments, that it moves at
roughly 299,792\.458 kilometers per second. (2\) But how accurately were we first
able to measure this fundamental physical constant, and did certain experiments
produce more accurate results than others? (3\) To better understand this, we
plotted data from 5 experiments by Michelson in 1879, each with 20 trials, as
histograms stacked on top of one another. The horizontal axis shows the
accuracy of the measurements relative to the true speed of light as we know it
today, expressed as a percentage. From this visualization, you can see that
most results had relative errors of at most 0\.05%. You can also see that
experiments 1 and 3 had measurements that were the farthest from the true
value, and experiment 5 tended to provide the most consistently accurate
result. (4\) It would be worth further investigating the differences between
these experiments to see why they produced different results.
4\.7 Saving the visualization
-----------------------------
#### *Choose the right output format for your needs*
Just as there are many ways to store data sets, there are many ways to store
visualizations and images. Which one you choose can depend on several factors,
such as file size/type limitations (e.g., if you are submitting your
visualization as part of a conference paper or to a poster printing shop) and
where it will be displayed (e.g., online, in a paper, on a poster, on a
billboard, in talk slides). Generally speaking, images come in two flavors:
*raster* formats
and *vector* formats.
**Raster** images are represented as a 2\-D grid of square pixels, each
with its own color. Raster images are often *compressed* before storing so they
take up less space. A compressed format is *lossy* if the image cannot be
perfectly re\-created when loading and displaying, with the hope that the change
is not noticeable. *Lossless* formats, on the other hand, allow a perfect
display of the original image.
* *Common file types:*
+ [JPEG](https://en.wikipedia.org/wiki/JPEG) (`.jpg`, `.jpeg`): lossy, usually used for photographs
+ [PNG](https://en.wikipedia.org/wiki/Portable_Network_Graphics) (`.png`): lossless, usually used for plots / line drawings
+ [BMP](https://en.wikipedia.org/wiki/BMP_file_format) (`.bmp`): lossless, raw image data, no compression (rarely used)
+ [TIFF](https://en.wikipedia.org/wiki/TIFF) (`.tif`, `.tiff`): typically lossless, no compression, used mostly in graphic arts, publishing
* *Open\-source software:* [GIMP](https://www.gimp.org/)
**Vector** images are represented as a collection of mathematical
objects (lines, surfaces, shapes, curves). When the computer displays the image, it
redraws all of the elements using their mathematical formulas.
* *Common file types:*
+ [SVG](https://en.wikipedia.org/wiki/Scalable_Vector_Graphics) (`.svg`): general\-purpose use
+ [EPS](https://en.wikipedia.org/wiki/Encapsulated_PostScript) (`.eps`), general\-purpose use (rarely used)
* *Open\-source software:* [Inkscape](https://inkscape.org/)
Raster and vector images have opposing advantages and disadvantages. A raster
image of a fixed width / height takes the same amount of space and time to load
regardless of what the image shows (the one caveat is that the compression algorithms may
shrink the image more or run faster for certain images). A vector image takes
space and time to load corresponding to how complex the image is, since the
computer has to draw all the elements each time it is displayed. For example,
if you have a scatter plot with 1 million points stored as an SVG file, it may
take your computer some time to open the image. On the other hand, you can zoom
into / scale up vector graphics as much as you like without the image looking
bad, while raster images eventually start to look “pixelated.”
> **Note:** The portable document format [PDF](https://en.wikipedia.org/wiki/PDF) (`.pdf`) is commonly used to
> store *both* raster and vector formats. If you try to open a PDF and it’s taking a long time
> to load, it may be because there is a complicated vector graphics image that your computer is rendering.
Let’s learn how to save plot images to these different file formats using a
scatter plot of
the [Old Faithful data set](https://www.stat.cmu.edu/~larry/all-of-statistics/=data/faithful.dat) ([Hardle 1991](#ref-faithfuldata)),
shown in Figure [4\.27](viz.html#fig:03-plot-line).
```
library(svglite) # we need this to save SVG files
faithful_plot <- ggplot(data = faithful, aes(x = waiting, y = eruptions)) +
geom_point() +
labs(x = "Waiting time to next eruption \n (minutes)",
y = "Eruption time \n (minutes)") +
theme(text = element_text(size = 12))
faithful_plot
```
Figure 4\.27: Scatter plot of waiting time and eruption time.
Now that we have a named `ggplot` plot object, we can use the `ggsave` function
to save a file containing this image.
`ggsave` works by taking a file name to create for the image
as its first argument.
This can include the path to the directory where you would like to save the file
(e.g., `img/viz/filename.png` to save a file named `filename` to the `img/viz/` directory),
and the name of the plot object to save as its second argument.
The kind of image to save is specified by the file extension.
For example,
to create a PNG image file, we specify that the file extension is `.png`.
Below we demonstrate how to save PNG, JPG, BMP, TIFF and SVG file types
for the `faithful_plot`:
```
ggsave("img/viz/faithful_plot.png", faithful_plot)
ggsave("img/viz/faithful_plot.jpg", faithful_plot)
ggsave("img/viz/faithful_plot.bmp", faithful_plot)
ggsave("img/viz/faithful_plot.tiff", faithful_plot)
ggsave("img/viz/faithful_plot.svg", faithful_plot)
```
Table 4\.1: File sizes of the scatter plot of the Old Faithful data set when saved as different file formats.
| Image type | File type | Image size |
| --- | --- | --- |
| Raster | PNG | 0\.15 MB |
| Raster | JPG | 0\.42 MB |
| Raster | BMP | 3\.15 MB |
| Raster | TIFF | 9\.44 MB |
| Vector | SVG | 0\.03 MB |
Take a look at the file sizes in Table [4\.1](viz.html#tab:filesizes).
Wow, that’s quite a difference! Notice that for such a simple plot with few
graphical elements (points), the vector graphics format (SVG) is over 100 times
smaller than the uncompressed raster images (BMP, TIFF). Also, note that the
JPG format is almost three times as large as the PNG format since the JPG compression
algorithm is designed for natural images (not plots).
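If you have saved the files as above, you can verify their sizes yourself with base R’s `file.size` function, which returns sizes in bytes (your exact numbers may differ slightly depending on your system and package versions):
```
# file sizes in megabytes (file.size returns bytes)
file.size("img/viz/faithful_plot.png") / 1e6
file.size("img/viz/faithful_plot.svg") / 1e6
```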
In Figure [4\.28](viz.html#fig:03-raster-image), we also show what
the images look like when we zoom in to a rectangle with only 2 data points.
You can see why vector graphics formats are so useful: because they’re just
based on mathematical formulas, vector graphics can be scaled up to arbitrary
sizes. This makes them great for presentation media of all sizes, from papers
to posters to billboards.
Figure 4\.28: Zoomed in `faithful`, raster (PNG, left) and vector (SVG, right) formats.
#### *Choose the right output format for your needs*
Just as there are many ways to store data sets, there are many ways to store
visualizations and images. Which one you choose can depend on several factors,
such as file size/type limitations (e.g., if you are submitting your
visualization as part of a conference paper or to a poster printing shop) and
where it will be displayed (e.g., online, in a paper, on a poster, on a
billboard, in talk slides). Generally speaking, images come in two flavors:
*raster* formats
and *vector* formats.
**Raster** images are represented as a 2\-D grid of square pixels, each
with its own color. Raster images are often *compressed* before storing so they
take up less space. A compressed format is *lossy* if the image cannot be
perfectly re\-created when loading and displaying, with the hope that the change
is not noticeable. *Lossless* formats, on the other hand, allow a perfect
display of the original image.
* *Common file types:*
+ [JPEG](https://en.wikipedia.org/wiki/JPEG) (`.jpg`, `.jpeg`): lossy, usually used for photographs
+ [PNG](https://en.wikipedia.org/wiki/Portable_Network_Graphics) (`.png`): lossless, usually used for plots / line drawings
+ [BMP](https://en.wikipedia.org/wiki/BMP_file_format) (`.bmp`): lossless, raw image data, no compression (rarely used)
+ [TIFF](https://en.wikipedia.org/wiki/TIFF) (`.tif`, `.tiff`): typically lossless, no compression, used mostly in graphic arts, publishing
* *Open\-source software:* [GIMP](https://www.gimp.org/)
**Vector** images are represented as a collection of mathematical
objects (lines, surfaces, shapes, curves). When the computer displays the image, it
redraws all of the elements using their mathematical formulas.
* *Common file types:*
+ [SVG](https://en.wikipedia.org/wiki/Scalable_Vector_Graphics) (`.svg`): general\-purpose use
+ [EPS](https://en.wikipedia.org/wiki/Encapsulated_PostScript) (`.eps`): general\-purpose use (rarely used)
* *Open\-source software:* [Inkscape](https://inkscape.org/)
Raster and vector images have opposing advantages and disadvantages. A raster
image of a fixed width / height takes the same amount of space and time to load
regardless of what the image shows (the one caveat is that the compression algorithms may
shrink the image more or run faster for certain images). A vector image takes
space and time to load corresponding to how complex the image is, since the
computer has to draw all the elements each time it is displayed. For example,
if you have a scatter plot with 1 million points stored as an SVG file, it may
take your computer some time to open the image. On the other hand, you can zoom
into / scale up vector graphics as much as you like without the image looking
bad, while raster images eventually start to look “pixelated.”
> **Note:** The portable document format [PDF](https://en.wikipedia.org/wiki/PDF) (`.pdf`) is commonly used to
> store *both* raster and vector formats. If you try to open a PDF and it’s taking a long time
> to load, it may be because there is a complicated vector graphics image that your computer is rendering.
Let’s learn how to save plot images to these different file formats using a
scatter plot of
the [Old Faithful data set](https://www.stat.cmu.edu/~larry/all-of-statistics/=data/faithful.dat) ([Hardle 1991](#ref-faithfuldata)),
shown in Figure [4\.27](viz.html#fig:03-plot-line).
```
library(svglite) # we need this to save SVG files
faithful_plot <- ggplot(data = faithful, aes(x = waiting, y = eruptions)) +
geom_point() +
labs(x = "Waiting time to next eruption \n (minutes)",
y = "Eruption time \n (minutes)") +
theme(text = element_text(size = 12))
faithful_plot
```
Figure 4\.27: Scatter plot of waiting time and eruption time.
Now that we have a named `ggplot` plot object, we can use the `ggsave` function
to save a file containing this image.
`ggsave` works by taking a file name to create for the image
as its first argument.
This can include the path to the directory where you would like to save the file
(e.g., `img/viz/filename.png` to save a file named `filename` to the `img/viz/` directory),
and the name of the plot object to save as its second argument.
The kind of image to save is specified by the file extension.
For example,
to create a PNG image file, we specify that the file extension is `.png`.
Below we demonstrate how to save PNG, JPG, BMP, TIFF and SVG file types
for the `faithful_plot`:
```
ggsave("img/viz/faithful_plot.png", faithful_plot)
ggsave("img/viz/faithful_plot.jpg", faithful_plot)
ggsave("img/viz/faithful_plot.bmp", faithful_plot)
ggsave("img/viz/faithful_plot.tiff", faithful_plot)
ggsave("img/viz/faithful_plot.svg", faithful_plot)
```
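If you need more control over the output, for example when a conference paper
or poster printing shop specifies exact figure dimensions, `ggsave` also accepts
`width`, `height`, `units`, and `dpi` arguments. The call below is a hypothetical
variation on the example above (the file name is ours), shown only as a sketch:

```
# save a 6 x 4 inch PNG at 300 dpi, e.g., to meet a submission requirement
ggsave("img/viz/faithful_plot_print.png", faithful_plot,
       width = 6, height = 4, units = "in", dpi = 300)
```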
Table 4\.1: File sizes of the scatter plot of the Old Faithful data set when saved as different file formats.
| Image type | File type | Image size |
| --- | --- | --- |
| Raster | PNG | 0\.15 MB |
| Raster | JPG | 0\.42 MB |
| Raster | BMP | 3\.15 MB |
| Raster | TIFF | 9\.44 MB |
| Vector | SVG | 0\.03 MB |
Take a look at the file sizes in Table [4\.1](viz.html#tab:filesizes).
Wow, that’s quite a difference! Notice that for such a simple plot with few
graphical elements (points), the vector graphics format (SVG) is over 100 times
smaller than the uncompressed raster images (BMP, TIFF). Also, note that the
JPG format is nearly three times as large as the PNG format since the JPG compression
algorithm is designed for natural images (not plots).
In Figure [4\.28](viz.html#fig:03-raster-image), we also show what
the images look like when we zoom in to a rectangle with only 2 data points.
You can see why vector graphics formats are so useful: because they’re just
based on mathematical formulas, vector graphics can be scaled up to arbitrary
sizes. This makes them great for presentation media of all sizes, from papers
to posters to billboards.
Figure 4\.28: Zoomed in `faithful`, raster (PNG, left) and vector (SVG, right) formats.
4\.8 Exercises
--------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Effective data visualization” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
4\.9 Additional resources
-------------------------
* The [`ggplot2` R package page](https://ggplot2.tidyverse.org) ([Wickham, Chang, et al. 2021](#ref-ggplot)) is
where you should look if you want to learn more about the functions in this
chapter, the full set of arguments you can use, and other related functions.
The site also provides a very nice cheat sheet that summarizes many of the data
wrangling functions from this chapter.
* The *Fundamentals of Data Visualization* ([Wilke 2019](#ref-wilkeviz)) has
a wealth of information on designing effective visualizations. It is not
specific to any particular programming language or library. If you want to
improve your visualization skills, this is the next place to look.
* *R for Data Science* ([Wickham and Grolemund 2016](#ref-wickham2016r)) has a [chapter on creating visualizations using
`ggplot2`](https://r4ds.had.co.nz/data-visualisation.html). This reference is
specific to R and `ggplot2`, but provides a much more detailed introduction to
the full set of tools that `ggplot2` provides. This chapter is where you should
look if you want to learn how to make more intricate visualizations in
`ggplot2` than what is included in this chapter.
* The [`theme` function documentation](https://ggplot2.tidyverse.org/reference/theme.html)
is an excellent reference to see how you can fine tune the non\-data aspects
of your visualization.
* *R for Data Science* ([Wickham and Grolemund 2016](#ref-wickham2016r)) has a chapter on [dates and
times](https://r4ds.had.co.nz/dates-and-times.html). This chapter is where
you should look if you want to learn about `date` vectors, including how to
create them, and how to use them to effectively handle durations, periods and
intervals using the `lubridate` package.
| Data Science |
ubc-dsci.github.io | https://ubc-dsci.github.io/introduction-to-datascience/classification1.html |
Chapter 5 Classification I: training \& predicting
==================================================
5\.1 Overview
-------------
In previous chapters, we focused solely on descriptive and exploratory
data analysis questions.
This chapter and the next together serve as our first
foray into answering *predictive* questions about data. In particular, we will
focus on *classification*, i.e., using one or more
variables to predict the value of a categorical variable of interest. This chapter
will cover the basics of classification, how to preprocess data to make it
suitable for use in a classifier, and how to use our observed data to make
predictions. The next chapter will focus on how to evaluate how accurate the
predictions from our classifier are, as well as how to improve our classifier
(where possible) to maximize its accuracy.
5\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Recognize situations where a classifier would be appropriate for making predictions.
* Describe what a training data set is and how it is used in classification.
* Interpret the output of a classifier.
* Compute, by hand, the straight\-line (Euclidean) distance between points on a graph when there are two predictor variables.
* Explain the K\-nearest neighbors classification algorithm.
* Perform K\-nearest neighbors classification in R using `tidymodels`.
* Use a `recipe` to center, scale, balance, and impute data as a preprocessing step.
* Combine preprocessing and model training using a `workflow`.
5\.3 The classification problem
-------------------------------
In many situations, we want to make predictions based on the current situation
as well as past experiences. For instance, a doctor may want to diagnose a
patient as either diseased or healthy based on their symptoms and the doctor’s
past experience with patients; an email provider might want to tag a given
email as “spam” or “not spam” based on the email’s text and past email text data;
or a credit card company may want to predict whether a purchase is fraudulent based
on the current purchase item, amount, and location as well as past purchases.
These tasks are all examples of **classification**, i.e., predicting a
categorical class (sometimes called a *label*) for an observation given its
other variables (sometimes called *features*).
Generally, a classifier assigns an observation without a known class (e.g., a new patient)
to a class (e.g., diseased or healthy) on the basis of how similar it is to other observations
for which we do know the class (e.g., previous patients with known diseases and
symptoms). These observations with known classes that we use as a basis for
prediction are called a **training set**; this name comes from the fact that
we use these data to train, or teach, our classifier. Once taught, we can use
the classifier to make predictions on new data for which we do not know the class.
There are many possible methods that we could use to predict
a categorical class/label for an observation. In this book, we will
focus on the widely used **K\-nearest neighbors** algorithm ([Fix and Hodges 1951](#ref-knnfix); [Cover and Hart 1967](#ref-knncover)).
In your future studies, you might encounter decision trees, support vector machines (SVMs),
logistic regression, neural networks, and more; see the additional resources
section at the end of the next chapter for where to begin learning more about
these other methods. It is also worth mentioning that there are many
variations on the basic classification problem. For example,
we focus on the setting of **binary classification** where only two
classes are involved (e.g., a diagnosis of either healthy or diseased), but you may
also run into multiclass classification problems with more than two
categories (e.g., a diagnosis of healthy, bronchitis, pneumonia, or a common cold).
5\.4 Exploring a data set
-------------------------
In this chapter and the next, we will study a data set of
[digitized breast cancer image features](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29),
created by Dr. William H. Wolberg, W. Nick Street, and Olvi L. Mangasarian ([Street, Wolberg, and Mangasarian 1993](#ref-streetbreastcancer)).
Each row in the data set represents an
image of a tumor sample, including the diagnosis (benign or malignant) and
several other measurements (nucleus texture, perimeter, area, and more).
Diagnosis for each image was conducted by physicians.
As with all data analyses, we first need to formulate a precise question that
we want to answer. Here, the question is *predictive*: can
we use the tumor
image measurements available to us to predict whether a future tumor image
(with unknown diagnosis) shows a benign or malignant tumor? Answering this
question is important because traditional, non\-data\-driven methods for tumor
diagnosis are quite subjective and dependent upon how skilled and experienced
the diagnosing physician is. Furthermore, benign tumors are not normally
dangerous; the cells stay in the same place, and the tumor stops growing before
it gets very large. By contrast, in malignant tumors, the cells invade the
surrounding tissue and spread into nearby organs, where they can cause serious
damage ([Stanford Health Care 2021](#ref-stanfordhealthcare)).
Thus, it is important to quickly and accurately diagnose the tumor type to
guide patient treatment.
### 5\.4\.1 Loading the cancer data
Our first step is to load, wrangle, and explore the data using visualizations
in order to better understand the data we are working with. We start by
loading the `tidyverse` package needed for our analysis.
```
library(tidyverse)
```
In this case, the file containing the breast cancer data set is a `.csv`
file with headers. We’ll use the `read_csv` function with no additional
arguments, and then inspect its contents:
```
cancer <- read_csv("data/wdbc.csv")
cancer
```
```
## # A tibble: 569 × 12
## ID Class Radius Texture Perimeter Area Smoothness Compactness Concavity
## <dbl> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 8.42e5 M 1.10 -2.07 1.27 0.984 1.57 3.28 2.65
## 2 8.43e5 M 1.83 -0.353 1.68 1.91 -0.826 -0.487 -0.0238
## 3 8.43e7 M 1.58 0.456 1.57 1.56 0.941 1.05 1.36
## 4 8.43e7 M -0.768 0.254 -0.592 -0.764 3.28 3.40 1.91
## 5 8.44e7 M 1.75 -1.15 1.78 1.82 0.280 0.539 1.37
## 6 8.44e5 M -0.476 -0.835 -0.387 -0.505 2.24 1.24 0.866
## 7 8.44e5 M 1.17 0.161 1.14 1.09 -0.123 0.0882 0.300
## 8 8.45e7 M -0.118 0.358 -0.0728 -0.219 1.60 1.14 0.0610
## 9 8.45e5 M -0.320 0.588 -0.184 -0.384 2.20 1.68 1.22
## 10 8.45e7 M -0.473 1.10 -0.329 -0.509 1.58 2.56 1.74
## # ℹ 559 more rows
## # ℹ 3 more variables: Concave_Points <dbl>, Symmetry <dbl>,
## # Fractal_Dimension <dbl>
```
### 5\.4\.2 Describing the variables in the cancer data set
Breast tumors can be diagnosed by performing a *biopsy*, a process where
tissue is removed from the body and examined for the presence of disease.
Traditionally these procedures were quite invasive; modern methods such as fine
needle aspiration, used to collect the present data set, extract only a small
amount of tissue and are less invasive. Based on a digital image of each breast
tissue sample collected for this data set, ten different variables were measured
for each cell nucleus in the image (items 3–12 of the list of variables below), and then the mean
for each variable across the nuclei was recorded. As part of the
data preparation, these values have been *standardized (centered and scaled)*; we will discuss what this
means and why we do it later in this chapter. Each image additionally was given
a unique ID and a diagnosis by a physician. Therefore, the
total set of variables per image in this data set is:
1. ID: identification number
2. Class: the diagnosis (M \= malignant or B \= benign)
3. Radius: the mean of distances from center to points on the perimeter
4. Texture: the standard deviation of gray\-scale values
5. Perimeter: the length of the surrounding contour
6. Area: the area inside the contour
7. Smoothness: the local variation in radius lengths
8. Compactness: the ratio of squared perimeter and area
9. Concavity: severity of concave portions of the contour
10. Concave Points: the number of concave portions of the contour
11. Symmetry: how similar the nucleus is when mirrored
12. Fractal Dimension: a measurement of how “rough” the perimeter is
Below we use `glimpse` to preview the data frame. This function can
make it easier to inspect the data when we have a lot of columns,
as it prints the data such that the columns go down
the page (instead of across).
```
glimpse(cancer)
```
```
## Rows: 569
## Columns: 12
## $ ID <dbl> 842302, 842517, 84300903, 84348301, 84358402, 843786…
## $ Class <chr> "M", "M", "M", "M", "M", "M", "M", "M", "M", "M", "M…
## $ Radius <dbl> 1.0960995, 1.8282120, 1.5784992, -0.7682333, 1.74875…
## $ Texture <dbl> -2.0715123, -0.3533215, 0.4557859, 0.2535091, -1.150…
## $ Perimeter <dbl> 1.26881726, 1.68447255, 1.56512598, -0.59216612, 1.7…
## $ Area <dbl> 0.98350952, 1.90703027, 1.55751319, -0.76379174, 1.8…
## $ Smoothness <dbl> 1.56708746, -0.82623545, 0.94138212, 3.28066684, 0.2…
## $ Compactness <dbl> 3.28062806, -0.48664348, 1.05199990, 3.39991742, 0.5…
## $ Concavity <dbl> 2.65054179, -0.02382489, 1.36227979, 1.91421287, 1.3…
## $ Concave_Points <dbl> 2.53024886, 0.54766227, 2.03543978, 1.45043113, 1.42…
## $ Symmetry <dbl> 2.215565542, 0.001391139, 0.938858720, 2.864862154, …
## $ Fractal_Dimension <dbl> 2.25376381, -0.86788881, -0.39765801, 4.90660199, -0…
```
From the summary of the data above, we can see that `Class` is of type character
(denoted by `<chr>`). We can use the `distinct` function to see all the unique
values present in that column. We see that there are two diagnoses: benign, represented by “B”,
and malignant, represented by “M”.
```
cancer |>
distinct(Class)
```
```
## # A tibble: 2 × 1
## Class
## <chr>
## 1 M
## 2 B
```
Since we will be working with `Class` as a categorical
variable, it is a good idea to convert it to a factor type using the `as_factor` function.
We will also improve the readability of our analysis by renaming “M” to
“Malignant” and “B” to “Benign” using the `fct_recode` function. The `fct_recode` function
is used to replace the names of factor values with other names. The arguments of `fct_recode` are the column that you
want to modify, followed by any number of arguments of the form `"new name" = "old name"` to specify the renaming scheme.
```
cancer <- cancer |>
mutate(Class = as_factor(Class)) |>
mutate(Class = fct_recode(Class, "Malignant" = "M", "Benign" = "B"))
glimpse(cancer)
```
```
## Rows: 569
## Columns: 12
## $ ID <dbl> 842302, 842517, 84300903, 84348301, 84358402, 843786…
## $ Class <fct> Malignant, Malignant, Malignant, Malignant, Malignan…
## $ Radius <dbl> 1.0960995, 1.8282120, 1.5784992, -0.7682333, 1.74875…
## $ Texture <dbl> -2.0715123, -0.3533215, 0.4557859, 0.2535091, -1.150…
## $ Perimeter <dbl> 1.26881726, 1.68447255, 1.56512598, -0.59216612, 1.7…
## $ Area <dbl> 0.98350952, 1.90703027, 1.55751319, -0.76379174, 1.8…
## $ Smoothness <dbl> 1.56708746, -0.82623545, 0.94138212, 3.28066684, 0.2…
## $ Compactness <dbl> 3.28062806, -0.48664348, 1.05199990, 3.39991742, 0.5…
## $ Concavity <dbl> 2.65054179, -0.02382489, 1.36227979, 1.91421287, 1.3…
## $ Concave_Points <dbl> 2.53024886, 0.54766227, 2.03543978, 1.45043113, 1.42…
## $ Symmetry <dbl> 2.215565542, 0.001391139, 0.938858720, 2.864862154, …
## $ Fractal_Dimension <dbl> 2.25376381, -0.86788881, -0.39765801, 4.90660199, -0…
```
Let’s verify that we have successfully converted the `Class` column to a factor variable
and renamed its values to “Benign” and “Malignant” using the `distinct` function once more.
```
cancer |>
distinct(Class)
```
```
## # A tibble: 2 × 1
## Class
## <fct>
## 1 Malignant
## 2 Benign
```
### 5\.4\.3 Exploring the cancer data
Before we start doing any modeling, let’s explore our data set. Below we use
the `group_by`, `summarize` and `n` functions to find the number and percentage
of benign and malignant tumor observations in our data set. The `n` function within
`summarize`, when paired with `group_by`, counts the number of observations in each `Class` group.
Then we calculate the percentage in each group by dividing by the total number of observations
and multiplying by 100\. We have 357 (63%) benign and 212 (37%) malignant tumor observations.
```
num_obs <- nrow(cancer)
cancer |>
group_by(Class) |>
summarize(
count = n(),
percentage = n() / num_obs * 100
)
```
```
## # A tibble: 2 × 3
## Class count percentage
## <fct> <int> <dbl>
## 1 Malignant 212 37.3
## 2 Benign 357 62.7
```
Next, let’s draw a scatter plot to visualize the relationship between the
perimeter and concavity variables. Rather than use `ggplot`'s default palette,
we select our own colorblind\-friendly colors—`"darkorange"`
for orange and `"steelblue"` for blue—and
pass them as the `values` argument to the `scale_color_manual` function.
```
perim_concav <- cancer |>
ggplot(aes(x = Perimeter, y = Concavity, color = Class)) +
geom_point(alpha = 0.6) +
labs(x = "Perimeter (standardized)",
y = "Concavity (standardized)",
color = "Diagnosis") +
scale_color_manual(values = c("darkorange", "steelblue")) +
theme(text = element_text(size = 12))
perim_concav
```
Figure 5\.1: Scatter plot of concavity versus perimeter colored by diagnosis label.
In Figure [5\.1](classification1.html#fig:05-scatter), we can see that malignant observations typically fall in
the upper right\-hand corner of the plot area. By contrast, benign
observations typically fall in the lower left\-hand corner of the plot. In other words,
benign observations tend to have lower concavity and perimeter values, and malignant
ones tend to have larger values. Suppose we
obtain a new observation not in the current data set that has all the variables
measured *except* the label (i.e., an image without the physician’s diagnosis
for the tumor class). We could compute the standardized perimeter and concavity values,
resulting in values of, say, 1 and 1\. Could we use this information to classify
that observation as benign or malignant? Based on the scatter plot, how might
you classify that new observation? If the standardized concavity and perimeter
values are 1 and 1 respectively, the point would lie in the middle of the
orange cloud of malignant points and thus we could probably classify it as
malignant. Based on our visualization, it seems like it may be possible
to make accurate predictions of the `Class` variable (i.e., a diagnosis) for
tumor images with unknown diagnoses.
5\.5 Classification with K\-nearest neighbors
---------------------------------------------
In order to actually make predictions for new observations in practice, we
will need a classification algorithm.
In this book, we will use the K\-nearest neighbors classification algorithm.
To predict the label of a new observation (here, classify it as either benign
or malignant), the K\-nearest neighbors classifier generally finds the \\(K\\)
“nearest” or “most similar” observations in our training set, and then uses
their diagnoses to make a prediction for the new observation’s diagnosis. \\(K\\)
is a number that we must choose in advance; for now, we will assume that someone has chosen
\\(K\\) for us. We will cover how to choose \\(K\\) ourselves in the next chapter.
To illustrate the concept of K\-nearest neighbors classification, we
will walk through an example. Suppose we have a
new observation, with standardized perimeter of 2 and standardized concavity of 4, whose
diagnosis “Class” is unknown. This new observation is depicted by the red, diamond point in
Figure [5\.2](classification1.html#fig:05-knn-1).
Figure 5\.2: Scatter plot of concavity versus perimeter with new observation represented as a red diamond.
Figure [5\.3](classification1.html#fig:05-knn-2) shows that the nearest point to this new observation is **malignant** and
located at the coordinates (2\.1, 3\.6\). The idea here is that if a point is close to another in the scatter plot,
then the perimeter and concavity values are similar, and so we may expect that
they would have the same diagnosis.
Figure 5\.3: Scatter plot of concavity versus perimeter. The new observation is represented as a red diamond with a line to the one nearest neighbor, which has a malignant label.
Suppose we have another new observation with standardized perimeter 0\.2 and
concavity of 3\.3\. Looking at the scatter plot in Figure [5\.4](classification1.html#fig:05-knn-4), how would you
classify this red, diamond observation? The nearest neighbor to this new point is a
**benign** observation at (0\.2, 2\.7\).
Does this seem like the right prediction to make for this observation? Probably
not, if you consider the other nearby points.
Figure 5\.4: Scatter plot of concavity versus perimeter. The new observation is represented as a red diamond with a line to the one nearest neighbor, which has a benign label.
To improve the prediction we can consider several
neighboring points, say \\(K \= 3\\), that are closest to the new observation
to predict its diagnosis class. Among those 3 closest points, we use the
*majority class* as our prediction for the new observation. As shown in Figure [5\.5](classification1.html#fig:05-knn-5), we
see that the diagnoses of 2 of the 3 nearest neighbors to our new observation
are malignant. Therefore we take majority vote and classify our new red, diamond
observation as malignant.
Figure 5\.5: Scatter plot of concavity versus perimeter with three nearest neighbors.
Here we chose the \\(K\=3\\) nearest observations, but there is nothing special
about \\(K\=3\\). We could have used \\(K\=4, 5\\) or more (though we may want to choose
an odd number to avoid ties). We will discuss more about choosing \\(K\\) in the
next chapter.
### 5\.5\.1 Distance between points
We decide which points are the \\(K\\) “nearest” to our new observation
using the *straight\-line distance* (we will often just refer to this as *distance*).
Suppose we have two observations \\(a\\) and \\(b\\), each having two predictor variables, \\(x\\) and \\(y\\).
Denote \\(a\_x\\) and \\(a\_y\\) to be the values of variables \\(x\\) and \\(y\\) for observation \\(a\\);
\\(b\_x\\) and \\(b\_y\\) have similar definitions for observation \\(b\\).
Then the straight\-line distance between observation \\(a\\) and \\(b\\) on the x\-y plane can
be computed using the following formula:
\\\[\\mathrm{Distance} \= \\sqrt{(a\_x \-b\_x)^2 \+ (a\_y \- b\_y)^2}\\]
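As a quick illustration (not part of the original analysis), we can write this
formula as a small R function; `two_var_distance` and its argument names are
ours, chosen just for this sketch:

```
# straight-line (Euclidean) distance between two observations
# that each have two predictor variables
two_var_distance <- function(a_x, a_y, b_x, b_y) {
  sqrt((a_x - b_x)^2 + (a_y - b_y)^2)
}

# e.g., the distance between (0, 3.5) and (0.24, 2.65) is roughly 0.88
two_var_distance(0, 3.5, 0.24, 2.65)
```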
To find the \\(K\\) nearest neighbors to our new observation, we compute the distance
from that new observation to each observation in our training data, and select the \\(K\\) observations corresponding to the
\\(K\\) *smallest* distance values. For example, suppose we want to use \\(K\=5\\) neighbors to classify a new
observation with perimeter of 0 and
concavity of 3\.5, shown as a red diamond in Figure [5\.6](classification1.html#fig:05-multiknn-1). Let’s calculate the distances
between our new point and each of the observations in the training set to find
the \\(K\=5\\) neighbors that are nearest to our new point.
You will see in the `mutate` step below, we compute the straight\-line
distance using the formula above: we square the differences between the two observations’ perimeter
and concavity coordinates, add the squared differences, and then take the square root.
In order to find the \\(K\=5\\) nearest neighbors, we will use the `slice_min` function.
Figure 5\.6: Scatter plot of concavity versus perimeter with new observation represented as a red diamond.
```
new_obs_Perimeter <- 0
new_obs_Concavity <- 3.5
cancer |>
select(ID, Perimeter, Concavity, Class) |>
mutate(dist_from_new = sqrt((Perimeter - new_obs_Perimeter)^2 +
(Concavity - new_obs_Concavity)^2)) |>
slice_min(dist_from_new, n = 5) # take the 5 rows of minimum distance
```
```
## # A tibble: 5 × 5
## ID Perimeter Concavity Class dist_from_new
## <dbl> <dbl> <dbl> <fct> <dbl>
## 1 86409 0.241 2.65 Benign 0.881
## 2 887181 0.750 2.87 Malignant 0.980
## 3 899667 0.623 2.54 Malignant 1.14
## 4 907914 0.417 2.31 Malignant 1.26
## 5 8710441 -1.16 4.04 Benign 1.28
```
In Table [5\.1](classification1.html#tab:05-multiknn-mathtable) we show in mathematical detail how
the `mutate` step was used to compute the `dist_from_new` variable (the
distance to the new observation) for each of the 5 nearest neighbors in the
training data.
Table 5\.1: Evaluating the distances from the new observation to each of its 5 nearest neighbors
| Perimeter | Concavity | Distance | Class |
| --- | --- | --- | --- |
| 0\.24 | 2\.65 | \\(\\sqrt{(0 \- 0\.24\)^2 \+ (3\.5 \- 2\.65\)^2} \= 0\.88\\) | Benign |
| 0\.75 | 2\.87 | \\(\\sqrt{(0 \- 0\.75\)^2 \+ (3\.5 \- 2\.87\)^2} \= 0\.98\\) | Malignant |
| 0\.62 | 2\.54 | \\(\\sqrt{(0 \- 0\.62\)^2 \+ (3\.5 \- 2\.54\)^2} \= 1\.14\\) | Malignant |
| 0\.42 | 2\.31 | \\(\\sqrt{(0 \- 0\.42\)^2 \+ (3\.5 \- 2\.31\)^2} \= 1\.26\\) | Malignant |
| \-1\.16 | 4\.04 | \\(\\sqrt{(0 \- (\-1\.16\))^2 \+ (3\.5 \- 4\.04\)^2} \= 1\.28\\) | Benign |
The result of this computation shows that 3 of the 5 nearest neighbors to our new observation are
malignant; since this is the majority, we classify our new observation as malignant.
These 5 neighbors are circled in Figure [5\.7](classification1.html#fig:05-multiknn-3).
Figure 5\.7: Scatter plot of concavity versus perimeter with 5 nearest neighbors circled.
### 5\.5\.2 More than two explanatory variables
Although the above description is directed toward two predictor variables,
exactly the same K\-nearest neighbors algorithm applies when you
have a higher number of predictor variables. Each predictor variable may give us new
information to help create our classifier. The only difference is the formula
for the distance between points. Suppose we have \\(m\\) predictor
variables for two observations \\(a\\) and \\(b\\), i.e.,
\\(a \= (a\_{1}, a\_{2}, \\dots, a\_{m})\\) and
\\(b \= (b\_{1}, b\_{2}, \\dots, b\_{m})\\).
The distance formula becomes
\\\[\\mathrm{Distance} \= \\sqrt{(a\_{1} \-b\_{1})^2 \+ (a\_{2} \- b\_{2})^2 \+ \\dots \+ (a\_{m} \- b\_{m})^2}.\\]
This formula still corresponds to a straight\-line distance, just in a space
with more dimensions. Suppose we want to calculate the distance between a new
observation with a perimeter of 0, concavity of 3\.5, and symmetry of 1, and
another observation with a perimeter, concavity, and symmetry of 0\.417, 2\.31, and
0\.837 respectively. We have two observations with three predictor variables:
perimeter, concavity, and symmetry. Previously, when we had two variables, we
added up the squared difference between each of our (two) variables, and then
took the square root. Now we will do the same, except for our three variables.
We calculate the distance as follows
\\\[\\mathrm{Distance} \=\\sqrt{(0 \- 0\.417\)^2 \+ (3\.5 \- 2\.31\)^2 \+ (1 \- 0\.837\)^2} \= 1\.27\.\\]
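As a quick sanity check (nothing more), we can verify this arithmetic directly in R:

```
# verify the three-variable distance computed above; roughly 1.27
sqrt((0 - 0.417)^2 + (3.5 - 2.31)^2 + (1 - 0.837)^2)
```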
Let’s calculate the distances between our new observation and each of the
observations in the training set to find the \\(K\=5\\) neighbors when we have these
three predictors.
```
new_obs_Perimeter <- 0
new_obs_Concavity <- 3.5
new_obs_Symmetry <- 1
cancer |>
select(ID, Perimeter, Concavity, Symmetry, Class) |>
mutate(dist_from_new = sqrt((Perimeter - new_obs_Perimeter)^2 +
(Concavity - new_obs_Concavity)^2 +
(Symmetry - new_obs_Symmetry)^2)) |>
slice_min(dist_from_new, n = 5) # take the 5 rows of minimum distance
```
```
## # A tibble: 5 × 6
## ID Perimeter Concavity Symmetry Class dist_from_new
## <dbl> <dbl> <dbl> <dbl> <fct> <dbl>
## 1 907914 0.417 2.31 0.837 Malignant 1.27
## 2 90439701 1.33 2.89 1.10 Malignant 1.47
## 3 925622 0.470 2.08 1.15 Malignant 1.50
## 4 859471 -1.37 2.81 1.09 Benign 1.53
## 5 899667 0.623 2.54 2.06 Malignant 1.56
```
Based on \\(K\=5\\) nearest neighbors with these three predictors, we would classify
the new observation as malignant since 4 out of 5 of the nearest neighbors are from the malignant class.
Figure [5\.8](classification1.html#fig:05-more) shows what the data look like when we visualize them
as a 3\-dimensional scatter with lines from the new observation to its five nearest neighbors.
Figure 5\.8: 3D scatter plot of the standardized symmetry, concavity, and perimeter variables. Note that in general we recommend against using 3D visualizations; here we show the data in 3D only to illustrate what higher dimensions and nearest neighbors look like, for learning purposes.
### 5\.5\.3 Summary of K\-nearest neighbors algorithm
In order to classify a new observation using a K\-nearest neighbors classifier, we have to do the following:
1. Compute the distance between the new observation and each observation in the training set.
2. Sort the data table in ascending order according to the distances.
3. Choose the top \\(K\\) rows of the sorted table.
4. Classify the new observation based on a majority vote of the neighbor classes.
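To make these steps concrete, here is a rough `dplyr` sketch that follows them
directly for the earlier two-predictor example (perimeter 0, concavity 3.5, and
K = 5). The object name `neighbors` is ours, and this is simply a restatement of
the `mutate`/`slice_min` approach shown above rather than a new method:

```
# Step 1: compute the distance from the new observation to every training point
# Steps 2 & 3: sort by distance and keep the K = 5 smallest
neighbors <- cancer |>
  mutate(dist_from_new = sqrt((Perimeter - 0)^2 + (Concavity - 3.5)^2)) |>
  arrange(dist_from_new) |>
  slice_head(n = 5)

# Step 4: take a majority vote among the neighbors' classes
neighbors |>
  count(Class) |>
  slice_max(n, n = 1)
```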
5\.6 K\-nearest neighbors with `tidymodels`
-------------------------------------------
Coding the K\-nearest neighbors algorithm in R ourselves can get complicated,
especially if we want to handle multiple classes, more than two variables,
or predict the class for multiple new observations. Thankfully, in R,
the K\-nearest neighbors algorithm is
implemented in [the `parsnip` R package](https://parsnip.tidymodels.org/) ([Kuhn and Vaughan 2021](#ref-parsnip))
included in `tidymodels`, along with
many [other models](https://www.tidymodels.org/find/parsnip/)
that you will encounter in this and future chapters of the book. The `tidymodels` collection
provides tools to help make and use models, such as classifiers. Using the packages
in this collection will help keep our code simple, readable and accurate; the
less we have to code ourselves, the fewer mistakes we will likely make. We
start by loading `tidymodels`.
```
library(tidymodels)
```
Let’s walk through how to use `tidymodels` to perform K\-nearest neighbors classification.
We will use the `cancer` data set from above, with
perimeter and concavity as predictors and \\(K \= 5\\) neighbors to build our classifier. Then
we will use the classifier to predict the diagnosis label for a new observation with
perimeter 0, concavity 3\.5, and an unknown diagnosis label. Let’s pick out our two desired
predictor variables and class label and store them as a new data set named `cancer_train`:
```
cancer_train <- cancer |>
select(Class, Perimeter, Concavity)
cancer_train
```
```
## # A tibble: 569 × 3
## Class Perimeter Concavity
## <fct> <dbl> <dbl>
## 1 Malignant 1.27 2.65
## 2 Malignant 1.68 -0.0238
## 3 Malignant 1.57 1.36
## 4 Malignant -0.592 1.91
## 5 Malignant 1.78 1.37
## 6 Malignant -0.387 0.866
## 7 Malignant 1.14 0.300
## 8 Malignant -0.0728 0.0610
## 9 Malignant -0.184 1.22
## 10 Malignant -0.329 1.74
## # ℹ 559 more rows
```
Next, we create a *model specification* for K\-nearest neighbors classification
by calling the `nearest_neighbor` function, specifying that we want to use \\(K \= 5\\) neighbors
(we will discuss how to choose \\(K\\) in the next chapter) and that each neighboring point should have the same weight when voting
(`weight_func = "rectangular"`). The `weight_func` argument controls
how neighbors vote when classifying a new observation; by setting it to `"rectangular"`,
each of the \\(K\\) nearest neighbors gets exactly 1 vote as described above. Other choices,
which weigh each neighbor’s vote differently, can be found on
[the `parsnip` website](https://parsnip.tidymodels.org/reference/nearest_neighbor.html).
In the `set_engine` argument, we specify which package or system will be used for training
the model. Here `kknn` is the R package we will use for performing K\-nearest neighbors classification.
Finally, we specify that this is a classification problem with the `set_mode` function.
```
knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = 5) |>
set_engine("kknn") |>
set_mode("classification")
knn_spec
```
```
## K-Nearest Neighbor Model Specification (classification)
##
## Main Arguments:
## neighbors = 5
## weight_func = rectangular
##
## Computational engine: kknn
```
In order to fit the model on the breast cancer data, we need to pass the model specification
and the data set to the `fit` function. We also need to specify what variables to use as predictors
and what variable to use as the response. Below, the `Class ~ Perimeter + Concavity` argument specifies
that `Class` is the response variable (the one we want to predict),
and both `Perimeter` and `Concavity` are to be used as the predictors.
```
knn_fit <- knn_spec |>
fit(Class ~ Perimeter + Concavity, data = cancer_train)
```
We can also use a convenient shorthand syntax using a period, `Class ~ .`, to indicate
that we want to use every variable *except* `Class` as a predictor in the model.
In this particular setup, since `Concavity` and `Perimeter` are the only two predictors in the `cancer_train`
data frame, `Class ~ Perimeter + Concavity` and `Class ~ .` are equivalent.
In general, you can choose individual predictors using the `+` symbol, or you can specify to
use *all* predictors using the `.` symbol.
```
knn_fit <- knn_spec |>
fit(Class ~ ., data = cancer_train)
knn_fit
```
```
## parsnip model object
##
##
## Call:
## kknn::train.kknn(formula = Class ~ ., data = data, ks = min_rows(5, data, 5)
## , kernel = ~"rectangular")
##
## Type of response variable: nominal
## Minimal misclassification: 0.07557118
## Best kernel: rectangular
## Best k: 5
```
Here you can see the final trained model summary. It confirms that the computational engine used
to train the model was `kknn::train.kknn`. It also shows the fraction of errors made by
the K\-nearest neighbors model, but we will ignore this for now and discuss it in more detail
in the next chapter.
Finally, it shows (somewhat confusingly) that the “best” weight function
was “rectangular” and “best” setting of \\(K\\) was 5; but since we specified these earlier,
R is just repeating those settings to us here. In the next chapter, we will actually
let R find the value of \\(K\\) for us.
Finally, we make the prediction on the new observation by calling the `predict` function,
passing both the fit object we just created and the new observation itself. As above,
when we ran the K\-nearest neighbors
classification algorithm manually, the `knn_fit` object classifies the new observation as
malignant. Note that the `predict` function outputs a data frame with a single
variable named `.pred_class`.
```
new_obs <- tibble(Perimeter = 0, Concavity = 3.5)
predict(knn_fit, new_obs)
```
```
## # A tibble: 1 × 1
## .pred_class
## <fct>
## 1 Malignant
```
Is this predicted malignant label the actual class for this observation?
Well, we don’t know because we do not have this
observation’s diagnosis—that is what we were trying to predict! The
classifier’s prediction is not necessarily correct, but in the next chapter, we will
learn ways to quantify how accurate we think our predictions are.
5\.7 Data preprocessing with `tidymodels`
-----------------------------------------
### 5\.7\.1 Centering and scaling
When using K\-nearest neighbors classification, the *scale* of each variable
(i.e., its size and range of values) matters. Since the classifier predicts
classes by identifying observations nearest to it, any variables with
a large scale will have a much larger effect than variables with a small
scale. But just because a variable has a large scale *doesn’t mean* that it is
more important for making accurate predictions. For example, suppose you have a
data set with two features, salary (in dollars) and years of education, and
you want to predict the corresponding type of job. When we compute the
neighbor distances, a difference of $1000 is huge compared to a difference of
10 years of education. But for our conceptual understanding and answering of
the problem, it’s the opposite; 10 years of education is huge compared to a
difference of $1000 in yearly salary!
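To see this numerically, consider a tiny made-up example (the numbers below are
ours, purely for illustration): two job observations that differ by $1000 in
salary and by 10 years of education.

```
# hypothetical salaries (in dollars) and years of education for two observations
salary_a <- 60000; education_a <- 15
salary_b <- 61000; education_b <- 5

# the unscaled distance is dominated almost entirely by the salary difference
sqrt((salary_a - salary_b)^2 + (education_a - education_b)^2) # about 1000.05
```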
In many other predictive models, the *center* of each variable (e.g., its mean)
matters as well. For example, if we had a data set with a temperature variable
measured in degrees Kelvin, and the same data set with temperature measured in
degrees Celsius, the two variables would differ by a constant shift of 273
(even though they contain exactly the same information). Likewise, in our
hypothetical job classification example, we would likely see that the center of
the salary variable is in the tens of thousands, while the center of the years
of education variable is in the single digits. Although this doesn’t affect the
K\-nearest neighbors classification algorithm, this large shift can change the
outcome of using many other predictive models.
To scale and center our data, we need to find
our variables’ *mean* (the average, which quantifies the “central” value of a
set of numbers) and *standard deviation* (a number quantifying how spread out values are).
For each observed value of the variable, we subtract the mean (i.e., center the variable)
and divide by the standard deviation (i.e., scale the variable). When we do this, the data
is said to be *standardized*, and all variables in a data set will have a mean of 0
and a standard deviation of 1\. To illustrate the effect that standardization can have on the K\-nearest
neighbors algorithm, we will read in the original, unstandardized Wisconsin breast
cancer data set; we have been using a standardized version of the data set up
until now. As before, we will convert the `Class` variable to the factor type
and rename the values to “Malignant” and “Benign.”
To keep things simple, we will just use the `Area`, `Smoothness`, and `Class`
variables:
```
unscaled_cancer <- read_csv("data/wdbc_unscaled.csv") |>
mutate(Class = as_factor(Class)) |>
mutate(Class = fct_recode(Class, "Benign" = "B", "Malignant" = "M")) |>
select(Class, Area, Smoothness)
unscaled_cancer
```
```
## # A tibble: 569 × 3
## Class Area Smoothness
## <fct> <dbl> <dbl>
## 1 Malignant 1001 0.118
## 2 Malignant 1326 0.0847
## 3 Malignant 1203 0.110
## 4 Malignant 386. 0.142
## 5 Malignant 1297 0.100
## 6 Malignant 477. 0.128
## 7 Malignant 1040 0.0946
## 8 Malignant 578. 0.119
## 9 Malignant 520. 0.127
## 10 Malignant 476. 0.119
## # ℹ 559 more rows
```
Looking at the unscaled and uncentered data above, you can see that the differences
between the values for area measurements are much larger than those for
smoothness. Will this affect
predictions? In order to find out, we will create a scatter plot of these two
predictors (colored by diagnosis) for both the unstandardized data we just
loaded, and the standardized version of that same data. But first, we need to
standardize the `unscaled_cancer` data set with `tidymodels`.
In the `tidymodels` framework, all data preprocessing happens
using a `recipe` from [the `recipes` R package](https://recipes.tidymodels.org/) ([Kuhn and Wickham 2021](#ref-recipes)).
Here we will initialize a recipe for
the `unscaled_cancer` data above, specifying
that the `Class` variable is the response, and all other variables are predictors:
```
uc_recipe <- recipe(Class ~ ., data = unscaled_cancer)
uc_recipe
```
```
##
## ── Recipe ──────────
##
## ── Inputs
## Number of variables by role
## outcome: 1
## predictor: 2
```
So far, there is not much in the recipe; just a statement about the number of response variables
and predictors. Let’s add
scaling (`step_scale`) and
centering (`step_center`) steps for
all of the predictors so that they each have a mean of 0 and standard deviation of 1\.
Note that `tidymodels` actually provides `step_normalize`, which does both centering and scaling in
a single recipe step; in this book we will keep `step_scale` and `step_center` separate
to emphasize conceptually that there are two steps happening.
The `prep` function finalizes the recipe by using the data (here, `unscaled_cancer`)
to compute anything necessary to run the recipe (in this case, the column means and standard
deviations):
```
uc_recipe <- uc_recipe |>
step_scale(all_predictors()) |>
step_center(all_predictors()) |>
prep()
uc_recipe
```
```
##
## ── Recipe ──────────
##
## ── Inputs
## Number of variables by role
## outcome: 1
## predictor: 2
##
## ── Training information
## Training data contained 569 data points and no incomplete rows.
##
## ── Operations
## • Scaling for: Area, Smoothness | Trained
## • Centering for: Area, Smoothness | Trained
```
You can now see that the recipe includes a scaling and centering step for all predictor variables.
Note that when you add a step to a recipe, you must specify what columns to apply the step to.
Here we used the `all_predictors()` function to specify that each step should be applied to
all predictor variables. However, there are a number of different arguments one could use here,
as well as naming particular columns with the same syntax as the `select` function.
For example:
* `all_nominal()` and `all_numeric()`: specify all categorical or all numeric variables
* `all_predictors()` and `all_outcomes()`: specify all predictor or all response variables
* `Area, Smoothness`: specify both the `Area` and `Smoothness` variable
* `-Class`: specify everything except the `Class` variable
You can find a full set of all the steps and variable selection functions
on the [`recipes` reference page](https://recipes.tidymodels.org/reference/index.html).
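For instance, the recipe above could have named the predictor columns explicitly
instead of using `all_predictors()`. The sketch below is equivalent to what we
did (the object name `uc_recipe_named` is ours):

```
# an equivalent recipe that names the predictor columns directly
uc_recipe_named <- recipe(Class ~ ., data = unscaled_cancer) |>
  step_scale(Area, Smoothness) |>
  step_center(Area, Smoothness) |>
  prep()
```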
At this point, we have calculated the required statistics based on the data input into the
recipe, but the data are not yet scaled and centered. To actually scale and center
the data, we need to apply the `bake` function to the unscaled data.
```
scaled_cancer <- bake(uc_recipe, unscaled_cancer)
scaled_cancer
```
```
## # A tibble: 569 × 3
## Area Smoothness Class
## <dbl> <dbl> <fct>
## 1 0.984 1.57 Malignant
## 2 1.91 -0.826 Malignant
## 3 1.56 0.941 Malignant
## 4 -0.764 3.28 Malignant
## 5 1.82 0.280 Malignant
## 6 -0.505 2.24 Malignant
## 7 1.09 -0.123 Malignant
## 8 -0.219 1.60 Malignant
## 9 -0.384 2.20 Malignant
## 10 -0.509 1.58 Malignant
## # ℹ 559 more rows
```
It may seem redundant that we had to both `bake` *and* `prep` to scale and center the data.
However, we do this in two steps so we can specify a different data set in the `bake` step if we want.
For example, we may want to specify new data that were not part of the training set.
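As a sketch of this idea, we could apply the already-trained recipe to a few
rows that stand in for “new” data; the centering and scaling would reuse the
means and standard deviations computed from `unscaled_cancer` when the recipe
was prepped, not statistics computed from the new rows themselves:

```
# reuse a few rows of unscaled_cancer purely as a stand-in for new data
bake(uc_recipe, slice_head(unscaled_cancer, n = 3))
```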
You may wonder why we are doing so much work just to center and
scale our variables. Can’t we just manually scale and center the `Area` and
`Smoothness` variables ourselves before building our K\-nearest neighbors model? Well,
technically *yes*; but doing so is error\-prone. In particular, we might
accidentally forget to apply the same centering / scaling when making
predictions, or accidentally apply a *different* centering / scaling than what
we used while training. Proper use of a `recipe` helps keep our code simple,
readable, and error\-free. Furthermore, note that using `prep` and `bake` is
required only when you want to inspect the result of the preprocessing steps
yourself. You will see further on in Section
[5\.8](classification1.html#puttingittogetherworkflow) that `tidymodels` provides tools to
automatically apply `prep` and `bake` as necessary without additional coding effort.
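For completeness, the manual version might look like the sketch below (we
recommend the `recipe` approach above instead; `manually_scaled_cancer` is a
name of our own choosing). Notice how easy it would be to forget to reuse these
same training means and standard deviations when standardizing future
observations:

```
# manual centering and scaling; this works, but is easy to misuse later
manually_scaled_cancer <- unscaled_cancer |>
  mutate(Area = (Area - mean(Area)) / sd(Area),
         Smoothness = (Smoothness - mean(Smoothness)) / sd(Smoothness))
```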
Figure [5\.9](classification1.html#fig:05-scaling-plt) shows the two scatter plots side\-by\-side—one for `unscaled_cancer` and one for
`scaled_cancer`. Each has the same new observation annotated with its \\(K\=3\\) nearest neighbors.
In the original unstandardized data plot, you can see some odd choices
for the three nearest neighbors. In particular, the “neighbors” are visually
well within the cloud of benign observations, and the neighbors are all nearly
vertically aligned with the new observation (which is why it looks like there
is only one black line on this plot). Figure [5\.10](classification1.html#fig:05-scaling-plt-zoomed)
shows a close\-up of that region on the unstandardized plot. Here the computation of nearest
neighbors is dominated by the much larger\-scale area variable. The plot for standardized data
on the right in Figure [5\.9](classification1.html#fig:05-scaling-plt) shows a much more intuitively reasonable
selection of nearest neighbors. Thus, standardizing the data can change things
in an important way when we are using predictive algorithms.
Standardizing your data should be a part of the preprocessing you do
before predictive modeling; always think carefully about your problem domain
and whether your data need to be standardized.
Figure 5\.9: Comparison of K \= 3 nearest neighbors with unstandardized and standardized data.
Figure 5\.10: Close\-up of three nearest neighbors for unstandardized data.
### 5\.7\.2 Balancing
Another potential issue in a data set for a classifier is *class imbalance*,
i.e., when one label is much more common than another. Since classifiers like
the K\-nearest neighbors algorithm use the labels of nearby points to predict
the label of a new point, if there are many more data points with one label
overall, the algorithm is more likely to pick that label in general (even if
the “pattern” of data suggests otherwise). Class imbalance is actually quite a
common and important problem: from rare disease diagnosis to malicious email
detection, there are many cases in which the “important” class to identify
(presence of disease, malicious email) is much rarer than the “unimportant”
class (no disease, normal email).
To better illustrate the problem, let’s revisit the scaled breast cancer data,
`cancer`; except now we will remove many of the observations of malignant tumors, simulating
what the data would look like if the cancer was rare. We will do this by
picking only 3 observations from the malignant group, and keeping all
of the benign observations.
We choose these 3 observations using the `slice_head`
function, which takes two arguments: a data frame\-like object,
and the number of rows to select from the top (`n`).
We will use the `bind_rows` function to glue the two resulting filtered
data frames back together, and name the result `rare_cancer`.
The new imbalanced data is shown in Figure [5\.11](classification1.html#fig:05-unbalanced).
```
rare_cancer <- bind_rows(
filter(cancer, Class == "Benign"),
cancer |> filter(Class == "Malignant") |> slice_head(n = 3)
) |>
select(Class, Perimeter, Concavity)
rare_plot <- rare_cancer |>
ggplot(aes(x = Perimeter, y = Concavity, color = Class)) +
geom_point(alpha = 0.5) +
labs(x = "Perimeter (standardized)",
y = "Concavity (standardized)",
color = "Diagnosis") +
scale_color_manual(values = c("darkorange", "steelblue")) +
theme(text = element_text(size = 12))
rare_plot
```
Figure 5\.11: Imbalanced data.
Suppose we now decided to use \\(K \= 7\\) in K\-nearest neighbors classification.
With only 3 observations of malignant tumors, the classifier
will *always predict that the tumor is benign, no matter what its concavity and perimeter
are!* This is because in a majority vote of 7 observations, at most 3 will be
malignant (we only have 3 total malignant observations), so at least 4 must be
benign, and the benign vote will always win. For example, Figure [5\.12](classification1.html#fig:05-upsample)
shows what happens for a new tumor observation that is quite close to three observations
in the training data that were tagged as malignant.
Figure 5\.12: Imbalanced data with 7 nearest neighbors to a new observation highlighted.
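To reproduce this behavior in code, here is a sketch (assuming `tidymodels` is
loaded and `rare_cancer` is defined as above; the new observation’s coordinates
are chosen by us for illustration). Because only 3 malignant observations exist,
the majority vote among 7 neighbors returns benign, as described above:

```
rare_knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = 7) |>
  set_engine("kknn") |>
  set_mode("classification")

rare_knn_fit <- rare_knn_spec |>
  fit(Class ~ Perimeter + Concavity, data = rare_cancer)

# a new observation that sits among the malignant points
predict(rare_knn_fit, tibble(Perimeter = 2, Concavity = 2))
```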
Figure [5\.13](classification1.html#fig:05-upsample-2) shows what happens if we set the background color of
each area of the plot to the prediction the K\-nearest neighbors
classifier would make for a new observation at that location. We can see that the decision is
always “benign,” corresponding to the blue color.
Figure 5\.13: Imbalanced data with background color indicating the decision of the classifier and the points represent the labeled data.
Despite the simplicity of the problem, solving it in a statistically sound manner is actually
fairly nuanced, and a careful treatment would require a lot more detail and mathematics than we will cover in this textbook.
For the present purposes, it will suffice to rebalance the data by *oversampling* the rare class.
In other words, we will replicate rare observations multiple times in our data set to give them more
voting power in the K\-nearest neighbors algorithm. In order to do this, we will add an oversampling
step to the earlier `uc_recipe` recipe with the `step_upsample` function from the `themis` R package.
We show below how to do this, and also
use the `group_by` and `summarize` functions to see that our classes are now balanced:
```
library(themis)
ups_recipe <- recipe(Class ~ ., data = rare_cancer) |>
step_upsample(Class, over_ratio = 1, skip = FALSE) |>
prep()
ups_recipe
```
```
##
## ── Recipe ──────────
##
## ── Inputs
## Number of variables by role
## outcome: 1
## predictor: 2
##
## ── Training information
## Training data contained 360 data points and no incomplete rows.
##
## ── Operations
## • Up-sampling based on: Class | Trained
```
```
upsampled_cancer <- bake(ups_recipe, rare_cancer)
upsampled_cancer |>
group_by(Class) |>
summarize(n = n())
```
```
## # A tibble: 2 × 2
## Class n
## <fct> <int>
## 1 Malignant 357
## 2 Benign 357
```
Now suppose we train our K\-nearest neighbors classifier with \\(K\=7\\) on this *balanced* data.
Figure [5\.14](classification1.html#fig:05-upsample-plot) shows what happens now when we set the background color
of each area of our scatter plot to the decision the K\-nearest neighbors
classifier would make. We can see that the decision is more reasonable; when the points are close
to those labeled malignant, the classifier predicts a malignant tumor, and vice versa when they are
closer to the benign tumor observations.
Figure 5\.14: Upsampled data with background color indicating the decision of the classifier.
### 5\.7\.3 Missing data
One of the most common issues in real data sets in the wild is *missing data*,
i.e., observations where the values of some of the variables were not recorded.
Unfortunately, as common as it is, handling missing data properly is very
challenging and generally relies on expert knowledge about the data, setting,
and how the data were collected. One typical challenge with missing data is
that missing entries can be *informative*: the very fact that an entry is
missing may itself be related to the values of other variables. For example, survey
participants from a marginalized group of people may be less likely to respond
to certain kinds of questions if they fear that answering honestly will come
with negative consequences. In that case, if we were to simply throw away data
with missing entries, we would bias the conclusions of the survey by
inadvertently removing many members of that group of respondents. So ignoring
this issue in real problems can easily lead to misleading analyses, with
detrimental impacts. In this book, we will cover only those techniques for
dealing with missing entries in situations where missing entries are just
“randomly missing”, i.e., where the fact that certain entries are missing
*isn’t related to anything else* about the observation.
Let’s load and examine a modified subset of the tumor image data
that has a few missing entries:
```
missing_cancer <- read_csv("data/wdbc_missing.csv") |>
select(Class, Radius, Texture, Perimeter) |>
mutate(Class = as_factor(Class)) |>
mutate(Class = fct_recode(Class, "Malignant" = "M", "Benign" = "B"))
missing_cancer
```
```
## # A tibble: 7 × 4
## Class Radius Texture Perimeter
## <fct> <dbl> <dbl> <dbl>
## 1 Malignant NA NA 1.27
## 2 Malignant 1.83 -0.353 1.68
## 3 Malignant 1.58 NA 1.57
## 4 Malignant -0.768 0.254 -0.592
## 5 Malignant 1.75 -1.15 1.78
## 6 Malignant -0.476 -0.835 -0.387
## 7 Malignant 1.17 0.161 1.14
```
Recall that K\-nearest neighbors classification makes predictions by computing
the straight\-line distance to nearby training observations, and hence requires
access to the values of *all* variables for *all* observations in the training
data. So how can we perform K\-nearest neighbors classification in the presence
of missing data? Well, since there are not too many observations with missing
entries, one option is to simply remove those observations prior to building
the K\-nearest neighbors classifier. We can accomplish this by using the
`drop_na` function from `tidyverse` prior to working with the data.
```
no_missing_cancer <- missing_cancer |> drop_na()
no_missing_cancer
```
```
## # A tibble: 5 × 4
## Class Radius Texture Perimeter
## <fct> <dbl> <dbl> <dbl>
## 1 Malignant 1.83 -0.353 1.68
## 2 Malignant -0.768 0.254 -0.592
## 3 Malignant 1.75 -1.15 1.78
## 4 Malignant -0.476 -0.835 -0.387
## 5 Malignant 1.17 0.161 1.14
```
However, this strategy will not work when many of the rows have missing
entries, as we may end up throwing away too much data. In this case, another
possible approach is to *impute* the missing entries, i.e., fill in synthetic
values based on the other observations in the data set. One reasonable choice
is to perform *mean imputation*, where missing entries are filled in using the
mean of the present entries in each variable. To perform mean imputation, we
add the `step_impute_mean`
step to the `tidymodels` preprocessing recipe.
```
impute_missing_recipe <- recipe(Class ~ ., data = missing_cancer) |>
step_impute_mean(all_predictors()) |>
prep()
impute_missing_recipe
```
```
##
## ── Recipe ──────────
##
## ── Inputs
## Number of variables by role
## outcome: 1
## predictor: 3
##
## ── Training information
## Training data contained 7 data points and 2 incomplete rows.
##
## ── Operations
## • Mean imputation for: Radius, Texture, Perimeter | Trained
```
To visualize what mean imputation does, let’s just apply the recipe directly to the `missing_cancer`
data frame using the `bake` function. The imputation step fills in the missing
entries with the mean values of their corresponding variables.
```
imputed_cancer <- bake(impute_missing_recipe, missing_cancer)
imputed_cancer
```
```
## # A tibble: 7 × 4
## Radius Texture Perimeter Class
## <dbl> <dbl> <dbl> <fct>
## 1 0.847 -0.385 1.27 Malignant
## 2 1.83 -0.353 1.68 Malignant
## 3 1.58 -0.385 1.57 Malignant
## 4 -0.768 0.254 -0.592 Malignant
## 5 1.75 -1.15 1.78 Malignant
## 6 -0.476 -0.835 -0.387 Malignant
## 7 1.17 0.161 1.14 Malignant
```
Many other options for missing data imputation can be found in
[the `recipes` documentation](https://recipes.tidymodels.org/reference/index.html). However
you decide to handle missing data in your data analysis, it is always crucial
to think critically about the setting, how the data were collected, and the
question you are answering.
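As one example of those other options, the `recipes` package also provides `step_impute_median`, which can be preferable when a variable contains outliers that would distort the mean. A minimal sketch, simply swapping it into the recipe from above (we do not use this variant elsewhere in the chapter):
```
impute_median_recipe <- recipe(Class ~ ., data = missing_cancer) |>
  step_impute_median(all_predictors()) |>
  prep()
bake(impute_median_recipe, missing_cancer)
```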
5\.8 Putting it together in a `workflow`
----------------------------------------
The `tidymodels` package collection also provides the `workflow`, a way to
chain together
multiple data analysis steps without a lot of otherwise necessary code for
intermediate steps. To illustrate the whole pipeline, let’s start from scratch
with the `wdbc_unscaled.csv` data. First we will load the data, create a
model, and specify a recipe for how the data should be preprocessed:
```
# load the unscaled cancer data
# and make sure the response variable, Class, is a factor
unscaled_cancer <- read_csv("data/wdbc_unscaled.csv") |>
mutate(Class = as_factor(Class)) |>
mutate(Class = fct_recode(Class, "Malignant" = "M", "Benign" = "B"))
# create the K-NN model
knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = 7) |>
set_engine("kknn") |>
set_mode("classification")
# create the centering / scaling recipe
uc_recipe <- recipe(Class ~ Area + Smoothness, data = unscaled_cancer) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
```
Note that each of these steps is exactly the same as earlier, except for one major difference:
we did not use the `select` function to extract the relevant variables from the data frame,
and instead simply specified the relevant variables to use via the
formula `Class ~ Area + Smoothness` (instead of `Class ~ .`) in the recipe.
You will also notice that we did not call `prep()` on the recipe; this is unnecessary when it is
placed in a workflow.
We will now place these steps in a `workflow` using the `add_recipe` and `add_model` functions,
and finally we will use the `fit` function to run the whole workflow on the `unscaled_cancer` data.
Note another difference from earlier here: we do not include a formula in the `fit` function. This
is again because we included the formula in the recipe, so there is no need to respecify it:
```
knn_fit <- workflow() |>
add_recipe(uc_recipe) |>
add_model(knn_spec) |>
fit(data = unscaled_cancer)
knn_fit
```
```
## ══ Workflow [trained] ══════════
## Preprocessor: Recipe
## Model: nearest_neighbor()
##
## ── Preprocessor ──────────
## 2 Recipe Steps
##
## • step_scale()
## • step_center()
##
## ── Model ──────────
##
## Call:
## kknn::train.kknn(formula = ..y ~ ., data = data, ks = min_rows(7, data, 5),
## kernel = ~"rectangular")
##
## Type of response variable: nominal
## Minimal misclassification: 0.112478
## Best kernel: rectangular
## Best k: 7
```
As before, the fit object lists the function that trains the model as well as the “best” settings
for the number of neighbors and weight function (for now, these are just the values we chose
manually when we created `knn_spec` above). But now the fit object also includes information about
the overall workflow, including the centering and scaling preprocessing steps.
In other words, when we use the `predict` function with the `knn_fit` object to make a prediction for a new
observation, it will first apply the same recipe steps to the new observation.
As an example, we will predict the class label of two new observations:
one with `Area = 500` and `Smoothness = 0.075`, and one with `Area = 1500` and `Smoothness = 0.1`.
```
new_observation <- tibble(Area = c(500, 1500), Smoothness = c(0.075, 0.1))
prediction <- predict(knn_fit, new_observation)
prediction
```
```
## # A tibble: 2 × 1
## .pred_class
## <fct>
## 1 Benign
## 2 Malignant
```
The classifier predicts that the first observation is benign, while the second is
malignant. Figure [5\.15](classification1.html#fig:05-workflow-plot-show) visualizes the predictions that this
trained K\-nearest neighbors model will make on a large range of new observations.
Although you have seen colored prediction map visualizations like this a few times now,
we have not included the code to generate them, as it is a little bit complicated.
For the interested reader who wants a learning challenge, we now include it below.
The basic idea is to create a grid of synthetic new observations using the `expand.grid` function,
predict the label of each, and visualize the predictions with a colored scatter having a very high transparency
(low `alpha` value) and large point radius. See if you can figure out what each line is doing!
> **Note:** Understanding this code is not required for the remainder of the
> textbook. It is included for those readers who would like to use similar
> visualizations in their own data analyses.
```
# create the grid of area/smoothness vals, and arrange in a data frame
are_grid <- seq(min(unscaled_cancer$Area),
max(unscaled_cancer$Area),
length.out = 100)
smo_grid <- seq(min(unscaled_cancer$Smoothness),
max(unscaled_cancer$Smoothness),
length.out = 100)
asgrid <- as_tibble(expand.grid(Area = are_grid,
Smoothness = smo_grid))
# use the fit workflow to make predictions at the grid points
knnPredGrid <- predict(knn_fit, asgrid)
# bind the predictions as a new column with the grid points
prediction_table <- bind_cols(knnPredGrid, asgrid) |>
rename(Class = .pred_class)
# plot:
# 1. the colored scatter of the original data
# 2. the faded colored scatter for the grid points
wkflw_plot <-
ggplot() +
geom_point(data = unscaled_cancer,
mapping = aes(x = Area,
y = Smoothness,
color = Class),
alpha = 0.75) +
geom_point(data = prediction_table,
mapping = aes(x = Area,
y = Smoothness,
color = Class),
alpha = 0.02,
size = 5) +
labs(color = "Diagnosis",
x = "Area",
y = "Smoothness") +
scale_color_manual(values = c("darkorange", "steelblue")) +
theme(text = element_text(size = 12))
wkflw_plot
```
Figure 5\.15: Scatter plot of smoothness versus area where background color indicates the decision of the classifier.
5\.9 Exercises
--------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Classification I: training and predicting” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
5\.1 Overview
-------------
In previous chapters, we focused solely on descriptive and exploratory
data analysis questions.
This chapter and the next together serve as our first
foray into answering *predictive* questions about data. In particular, we will
focus on *classification*, i.e., using one or more
variables to predict the value of a categorical variable of interest. This chapter
will cover the basics of classification, how to preprocess data to make it
suitable for use in a classifier, and how to use our observed data to make
predictions. The next chapter will focus on how to evaluate how accurate the
predictions from our classifier are, as well as how to improve our classifier
(where possible) to maximize its accuracy.
5\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Recognize situations where a classifier would be appropriate for making predictions.
* Describe what a training data set is and how it is used in classification.
* Interpret the output of a classifier.
* Compute, by hand, the straight\-line (Euclidean) distance between points on a graph when there are two predictor variables.
* Explain the K\-nearest neighbors classification algorithm.
* Perform K\-nearest neighbors classification in R using `tidymodels`.
* Use a `recipe` to center, scale, balance, and impute data as a preprocessing step.
* Combine preprocessing and model training using a `workflow`.
5\.3 The classification problem
-------------------------------
In many situations, we want to make predictions based on the current situation
as well as past experiences. For instance, a doctor may want to diagnose a
patient as either diseased or healthy based on their symptoms and the doctor’s
past experience with patients; an email provider might want to tag a given
email as “spam” or “not spam” based on the email’s text and past email text data;
or a credit card company may want to predict whether a purchase is fraudulent based
on the current purchase item, amount, and location as well as past purchases.
These tasks are all examples of **classification**, i.e., predicting a
categorical class (sometimes called a *label*) for an observation given its
other variables (sometimes called *features*).
Generally, a classifier assigns an observation without a known class (e.g., a new patient)
to a class (e.g., diseased or healthy) on the basis of how similar it is to other observations
for which we do know the class (e.g., previous patients with known diseases and
symptoms). These observations with known classes that we use as a basis for
prediction are called a **training set**; this name comes from the fact that
we use these data to train, or teach, our classifier. Once taught, we can use
the classifier to make predictions on new data for which we do not know the class.
There are many possible methods that we could use to predict
a categorical class/label for an observation. In this book, we will
focus on the widely used **K\-nearest neighbors** algorithm ([Fix and Hodges 1951](#ref-knnfix); [Cover and Hart 1967](#ref-knncover)).
In your future studies, you might encounter decision trees, support vector machines (SVMs),
logistic regression, neural networks, and more; see the additional resources
section at the end of the next chapter for where to begin learning more about
these other methods. It is also worth mentioning that there are many
variations on the basic classification problem. For example,
we focus on the setting of **binary classification** where only two
classes are involved (e.g., a diagnosis of either healthy or diseased), but you may
also run into multiclass classification problems with more than two
categories (e.g., a diagnosis of healthy, bronchitis, pneumonia, or a common cold).
5\.4 Exploring a data set
-------------------------
In this chapter and the next, we will study a data set of
[digitized breast cancer image features](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29),
created by Dr. William H. Wolberg, W. Nick Street, and Olvi L. Mangasarian ([Street, Wolberg, and Mangasarian 1993](#ref-streetbreastcancer)).
Each row in the data set represents an
image of a tumor sample, including the diagnosis (benign or malignant) and
several other measurements (nucleus texture, perimeter, area, and more).
Diagnosis for each image was conducted by physicians.
As with all data analyses, we first need to formulate a precise question that
we want to answer. Here, the question is *predictive*: can
we use the tumor
image measurements available to us to predict whether a future tumor image
(with unknown diagnosis) shows a benign or malignant tumor? Answering this
question is important because traditional, non\-data\-driven methods for tumor
diagnosis are quite subjective and dependent upon how skilled and experienced
the diagnosing physician is. Furthermore, benign tumors are not normally
dangerous; the cells stay in the same place, and the tumor stops growing before
it gets very large. By contrast, in malignant tumors, the cells invade the
surrounding tissue and spread into nearby organs, where they can cause serious
damage ([Stanford Health Care 2021](#ref-stanfordhealthcare)).
Thus, it is important to quickly and accurately diagnose the tumor type to
guide patient treatment.
### 5\.4\.1 Loading the cancer data
Our first step is to load, wrangle, and explore the data using visualizations
in order to better understand the data we are working with. We start by
loading the `tidyverse` package needed for our analysis.
```
library(tidyverse)
```
In this case, the file containing the breast cancer data set is a `.csv`
file with headers. We’ll use the `read_csv` function with no additional
arguments, and then inspect its contents:
```
cancer <- read_csv("data/wdbc.csv")
cancer
```
```
## # A tibble: 569 × 12
## ID Class Radius Texture Perimeter Area Smoothness Compactness Concavity
## <dbl> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 8.42e5 M 1.10 -2.07 1.27 0.984 1.57 3.28 2.65
## 2 8.43e5 M 1.83 -0.353 1.68 1.91 -0.826 -0.487 -0.0238
## 3 8.43e7 M 1.58 0.456 1.57 1.56 0.941 1.05 1.36
## 4 8.43e7 M -0.768 0.254 -0.592 -0.764 3.28 3.40 1.91
## 5 8.44e7 M 1.75 -1.15 1.78 1.82 0.280 0.539 1.37
## 6 8.44e5 M -0.476 -0.835 -0.387 -0.505 2.24 1.24 0.866
## 7 8.44e5 M 1.17 0.161 1.14 1.09 -0.123 0.0882 0.300
## 8 8.45e7 M -0.118 0.358 -0.0728 -0.219 1.60 1.14 0.0610
## 9 8.45e5 M -0.320 0.588 -0.184 -0.384 2.20 1.68 1.22
## 10 8.45e7 M -0.473 1.10 -0.329 -0.509 1.58 2.56 1.74
## # ℹ 559 more rows
## # ℹ 3 more variables: Concave_Points <dbl>, Symmetry <dbl>,
## # Fractal_Dimension <dbl>
```
### 5\.4\.2 Describing the variables in the cancer data set
Breast tumors can be diagnosed by performing a *biopsy*, a process where
tissue is removed from the body and examined for the presence of disease.
Traditionally these procedures were quite invasive; modern methods such as fine
needle aspiration, used to collect the present data set, extract only a small
amount of tissue and are less invasive. Based on a digital image of each breast
tissue sample collected for this data set, ten different variables were measured
for each cell nucleus in the image (items 3–12 of the list of variables below), and then the mean
for each variable across the nuclei was recorded. As part of the
data preparation, these values have been *standardized (centered and scaled)*; we will discuss what this
means and why we do it later in this chapter. Each image additionally was given
a unique ID and a diagnosis by a physician. Therefore, the
total set of variables per image in this data set is:
1. ID: identification number
2. Class: the diagnosis (M \= malignant or B \= benign)
3. Radius: the mean of distances from center to points on the perimeter
4. Texture: the standard deviation of gray\-scale values
5. Perimeter: the length of the surrounding contour
6. Area: the area inside the contour
7. Smoothness: the local variation in radius lengths
8. Compactness: the ratio of squared perimeter and area
9. Concavity: severity of concave portions of the contour
10. Concave Points: the number of concave portions of the contour
11. Symmetry: how similar the nucleus is when mirrored
12. Fractal Dimension: a measurement of how “rough” the perimeter is
Below we use `glimpse` to preview the data frame. This function can
make it easier to inspect the data when we have a lot of columns,
as it prints the data such that the columns go down
the page (instead of across).
```
glimpse(cancer)
```
```
## Rows: 569
## Columns: 12
## $ ID <dbl> 842302, 842517, 84300903, 84348301, 84358402, 843786…
## $ Class <chr> "M", "M", "M", "M", "M", "M", "M", "M", "M", "M", "M…
## $ Radius <dbl> 1.0960995, 1.8282120, 1.5784992, -0.7682333, 1.74875…
## $ Texture <dbl> -2.0715123, -0.3533215, 0.4557859, 0.2535091, -1.150…
## $ Perimeter <dbl> 1.26881726, 1.68447255, 1.56512598, -0.59216612, 1.7…
## $ Area <dbl> 0.98350952, 1.90703027, 1.55751319, -0.76379174, 1.8…
## $ Smoothness <dbl> 1.56708746, -0.82623545, 0.94138212, 3.28066684, 0.2…
## $ Compactness <dbl> 3.28062806, -0.48664348, 1.05199990, 3.39991742, 0.5…
## $ Concavity <dbl> 2.65054179, -0.02382489, 1.36227979, 1.91421287, 1.3…
## $ Concave_Points <dbl> 2.53024886, 0.54766227, 2.03543978, 1.45043113, 1.42…
## $ Symmetry <dbl> 2.215565542, 0.001391139, 0.938858720, 2.864862154, …
## $ Fractal_Dimension <dbl> 2.25376381, -0.86788881, -0.39765801, 4.90660199, -0…
```
From the summary of the data above, we can see that `Class` is of type character
(denoted by `<chr>`). We can use the `distinct` function to see all the unique
values present in that column. We see that there are two diagnoses: benign, represented by “B”,
and malignant, represented by “M”.
```
cancer |>
distinct(Class)
```
```
## # A tibble: 2 × 1
## Class
## <chr>
## 1 M
## 2 B
```
Since we will be working with `Class` as a categorical
variable, it is a good idea to convert it to a factor type using the `as_factor` function.
We will also improve the readability of our analysis by renaming “M” to
“Malignant” and “B” to “Benign” using the `fct_recode` method. The `fct_recode` method
is used to replace the names of factor values with other names. The arguments of `fct_recode` are the column that you
want to modify, followed by any number of arguments of the form `"new name" = "old name"` to specify the renaming scheme.
```
cancer <- cancer |>
mutate(Class = as_factor(Class)) |>
mutate(Class = fct_recode(Class, "Malignant" = "M", "Benign" = "B"))
glimpse(cancer)
```
```
## Rows: 569
## Columns: 12
## $ ID <dbl> 842302, 842517, 84300903, 84348301, 84358402, 843786…
## $ Class <fct> Malignant, Malignant, Malignant, Malignant, Malignan…
## $ Radius <dbl> 1.0960995, 1.8282120, 1.5784992, -0.7682333, 1.74875…
## $ Texture <dbl> -2.0715123, -0.3533215, 0.4557859, 0.2535091, -1.150…
## $ Perimeter <dbl> 1.26881726, 1.68447255, 1.56512598, -0.59216612, 1.7…
## $ Area <dbl> 0.98350952, 1.90703027, 1.55751319, -0.76379174, 1.8…
## $ Smoothness <dbl> 1.56708746, -0.82623545, 0.94138212, 3.28066684, 0.2…
## $ Compactness <dbl> 3.28062806, -0.48664348, 1.05199990, 3.39991742, 0.5…
## $ Concavity <dbl> 2.65054179, -0.02382489, 1.36227979, 1.91421287, 1.3…
## $ Concave_Points <dbl> 2.53024886, 0.54766227, 2.03543978, 1.45043113, 1.42…
## $ Symmetry <dbl> 2.215565542, 0.001391139, 0.938858720, 2.864862154, …
## $ Fractal_Dimension <dbl> 2.25376381, -0.86788881, -0.39765801, 4.90660199, -0…
```
Let’s verify that we have successfully converted the `Class` column to a factor variable
and renamed its values to “Benign” and “Malignant” using the `distinct` function once more.
```
cancer |>
distinct(Class)
```
```
## # A tibble: 2 × 1
## Class
## <fct>
## 1 Malignant
## 2 Benign
```
### 5\.4\.3 Exploring the cancer data
Before we start doing any modeling, let’s explore our data set. Below we use
the `group_by`, `summarize` and `n` functions to find the number and percentage
of benign and malignant tumor observations in our data set. The `n` function within
`summarize`, when paired with `group_by`, counts the number of observations in each `Class` group.
Then we calculate the percentage in each group by dividing by the total number of observations
and multiplying by 100\. We have 357 (63%) benign and 212 (37%) malignant tumor observations.
```
num_obs <- nrow(cancer)
cancer |>
group_by(Class) |>
summarize(
count = n(),
percentage = n() / num_obs * 100
)
```
```
## # A tibble: 2 × 3
## Class count percentage
## <fct> <int> <dbl>
## 1 Malignant 212 37.3
## 2 Benign 357 62.7
```
Next, let’s draw a scatter plot to visualize the relationship between the
perimeter and concavity variables. Rather than use `ggplot`'s default palette,
we select our own colorblind\-friendly colors—`"darkorange"`
for orange and `"steelblue"` for blue—and
pass them as the `values` argument to the `scale_color_manual` function.
```
perim_concav <- cancer |>
ggplot(aes(x = Perimeter, y = Concavity, color = Class)) +
geom_point(alpha = 0.6) +
labs(x = "Perimeter (standardized)",
y = "Concavity (standardized)",
color = "Diagnosis") +
scale_color_manual(values = c("darkorange", "steelblue")) +
theme(text = element_text(size = 12))
perim_concav
```
Figure 5\.1: Scatter plot of concavity versus perimeter colored by diagnosis label.
In Figure [5\.1](classification1.html#fig:05-scatter), we can see that malignant observations typically fall in
the upper right\-hand corner of the plot area. By contrast, benign
observations typically fall in the lower left\-hand corner of the plot. In other words,
benign observations tend to have lower concavity and perimeter values, and malignant
ones tend to have larger values. Suppose we
obtain a new observation not in the current data set that has all the variables
measured *except* the label (i.e., an image without the physician’s diagnosis
for the tumor class). We could compute the standardized perimeter and concavity values,
resulting in values of, say, 1 and 1\. Could we use this information to classify
that observation as benign or malignant? Based on the scatter plot, how might
you classify that new observation? If the standardized concavity and perimeter
values are 1 and 1 respectively, the point would lie in the middle of the
orange cloud of malignant points and thus we could probably classify it as
malignant. Based on our visualization, it seems like it may be possible
to make accurate predictions of the `Class` variable (i.e., a diagnosis) for
tumor images with unknown diagnoses.
5\.5 Classification with K\-nearest neighbors
---------------------------------------------
In order to actually make predictions for new observations in practice, we
will need a classification algorithm.
In this book, we will use the K\-nearest neighbors classification algorithm.
To predict the label of a new observation (here, classify it as either benign
or malignant), the K\-nearest neighbors classifier generally finds the \\(K\\)
“nearest” or “most similar” observations in our training set, and then uses
their diagnoses to make a prediction for the new observation’s diagnosis. \\(K\\)
is a number that we must choose in advance; for now, we will assume that someone has chosen
\\(K\\) for us. We will cover how to choose \\(K\\) ourselves in the next chapter.
To illustrate the concept of K\-nearest neighbors classification, we
will walk through an example. Suppose we have a
new observation, with standardized perimeter of 2 and standardized concavity of 4, whose
diagnosis “Class” is unknown. This new observation is depicted by the red, diamond point in
Figure [5\.2](classification1.html#fig:05-knn-1).
Figure 5\.2: Scatter plot of concavity versus perimeter with new observation represented as a red diamond.
Figure [5\.3](classification1.html#fig:05-knn-2) shows that the nearest point to this new observation is **malignant** and
located at the coordinates (2\.1, 3\.6\). The idea here is that if a point is close to another in the scatter plot,
then the perimeter and concavity values are similar, and so we may expect that
they would have the same diagnosis.
Figure 5\.3: Scatter plot of concavity versus perimeter. The new observation is represented as a red diamond with a line to the one nearest neighbor, which has a malignant label.
Suppose we have another new observation with standardized perimeter 0\.2 and
concavity of 3\.3\. Looking at the scatter plot in Figure [5\.4](classification1.html#fig:05-knn-4), how would you
classify this red, diamond observation? The nearest neighbor to this new point is a
**benign** observation at (0\.2, 2\.7\).
Does this seem like the right prediction to make for this observation? Probably
not, if you consider the other nearby points.
Figure 5\.4: Scatter plot of concavity versus perimeter. The new observation is represented as a red diamond with a line to the one nearest neighbor, which has a benign label.
To improve the prediction we can consider several
neighboring points, say \\(K \= 3\\), that are closest to the new observation
to predict its diagnosis class. Among those 3 closest points, we use the
*majority class* as our prediction for the new observation. As shown in Figure [5\.5](classification1.html#fig:05-knn-5), we
see that the diagnoses of 2 of the 3 nearest neighbors to our new observation
are malignant. Therefore we take majority vote and classify our new red, diamond
observation as malignant.
Figure 5\.5: Scatter plot of concavity versus perimeter with three nearest neighbors.
Here we chose the \\(K\=3\\) nearest observations, but there is nothing special
about \\(K\=3\\). We could have used \\(K\=4, 5\\) or more (though we may want to choose
an odd number to avoid ties). We will discuss more about choosing \\(K\\) in the
next chapter.
### 5\.5\.1 Distance between points
We decide which points are the \\(K\\) “nearest” to our new observation
using the *straight\-line distance* (we will often just refer to this as *distance*).
Suppose we have two observations \\(a\\) and \\(b\\), each having two predictor variables, \\(x\\) and \\(y\\).
Denote \\(a\_x\\) and \\(a\_y\\) to be the values of variables \\(x\\) and \\(y\\) for observation \\(a\\);
\\(b\_x\\) and \\(b\_y\\) have similar definitions for observation \\(b\\).
Then the straight\-line distance between observation \\(a\\) and \\(b\\) on the x\-y plane can
be computed using the following formula:
\\\[\\mathrm{Distance} \= \\sqrt{(a\_x \-b\_x)^2 \+ (a\_y \- b\_y)^2}\\]
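For instance, for two purely illustrative points \\(a \= (1, 2)\\) and \\(b \= (4, 6)\\) (made\-up values, not taken from the cancer data), the distance is \\(\\sqrt{(1\-4)^2 \+ (2\-6)^2} \= 5\\). A quick check of this arithmetic in R:
```
sqrt((1 - 4)^2 + (2 - 6)^2)
```
```
## [1] 5
```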
To find the \\(K\\) nearest neighbors to our new observation, we compute the distance
from that new observation to each observation in our training data, and select the \\(K\\) observations corresponding to the
\\(K\\) *smallest* distance values. For example, suppose we want to use \\(K\=5\\) neighbors to classify a new
observation with perimeter of 0 and
concavity of 3\.5, shown as a red diamond in Figure [5\.6](classification1.html#fig:05-multiknn-1). Let’s calculate the distances
between our new point and each of the observations in the training set to find
the \\(K\=5\\) neighbors that are nearest to our new point.
You will see that in the `mutate` step below, we compute the straight\-line
distance using the formula above: we square the differences between the two observations’ perimeter
and concavity coordinates, add the squared differences, and then take the square root.
In order to find the \\(K\=5\\) nearest neighbors, we will use the `slice_min` function.
Figure 5\.6: Scatter plot of concavity versus perimeter with new observation represented as a red diamond.
```
new_obs_Perimeter <- 0
new_obs_Concavity <- 3.5
cancer |>
select(ID, Perimeter, Concavity, Class) |>
mutate(dist_from_new = sqrt((Perimeter - new_obs_Perimeter)^2 +
(Concavity - new_obs_Concavity)^2)) |>
slice_min(dist_from_new, n = 5) # take the 5 rows of minimum distance
```
```
## # A tibble: 5 × 5
## ID Perimeter Concavity Class dist_from_new
## <dbl> <dbl> <dbl> <fct> <dbl>
## 1 86409 0.241 2.65 Benign 0.881
## 2 887181 0.750 2.87 Malignant 0.980
## 3 899667 0.623 2.54 Malignant 1.14
## 4 907914 0.417 2.31 Malignant 1.26
## 5 8710441 -1.16 4.04 Benign 1.28
```
In Table [5\.1](classification1.html#tab:05-multiknn-mathtable) we show in mathematical detail how
the `mutate` step was used to compute the `dist_from_new` variable (the
distance to the new observation) for each of the 5 nearest neighbors in the
training data.
Table 5\.1: Evaluating the distances from the new observation to each of its 5 nearest neighbors
| Perimeter | Concavity | Distance | Class |
| --- | --- | --- | --- |
| 0\.24 | 2\.65 | \\(\\sqrt{(0 \- 0\.24\)^2 \+ (3\.5 \- 2\.65\)^2} \= 0\.88\\) | Benign |
| 0\.75 | 2\.87 | \\(\\sqrt{(0 \- 0\.75\)^2 \+ (3\.5 \- 2\.87\)^2} \= 0\.98\\) | Malignant |
| 0\.62 | 2\.54 | \\(\\sqrt{(0 \- 0\.62\)^2 \+ (3\.5 \- 2\.54\)^2} \= 1\.14\\) | Malignant |
| 0\.42 | 2\.31 | \\(\\sqrt{(0 \- 0\.42\)^2 \+ (3\.5 \- 2\.31\)^2} \= 1\.26\\) | Malignant |
| \-1\.16 | 4\.04 | \\(\\sqrt{(0 \- (\-1\.16\))^2 \+ (3\.5 \- 4\.04\)^2} \= 1\.28\\) | Benign |
The result of this computation shows that 3 of the 5 nearest neighbors to our new observation are
malignant; since this is the majority, we classify our new observation as malignant.
These 5 neighbors are circled in Figure [5\.7](classification1.html#fig:05-multiknn-3).
Figure 5\.7: Scatter plot of concavity versus perimeter with 5 nearest neighbors circled.
### 5\.5\.2 More than two explanatory variables
Although the above description is directed toward two predictor variables,
exactly the same K\-nearest neighbors algorithm applies when you
have a higher number of predictor variables. Each predictor variable may give us new
information to help create our classifier. The only difference is the formula
for the distance between points. Suppose we have \\(m\\) predictor
variables for two observations \\(a\\) and \\(b\\), i.e.,
\\(a \= (a\_{1}, a\_{2}, \\dots, a\_{m})\\) and
\\(b \= (b\_{1}, b\_{2}, \\dots, b\_{m})\\).
The distance formula becomes
\\\[\\mathrm{Distance} \= \\sqrt{(a\_{1} \-b\_{1})^2 \+ (a\_{2} \- b\_{2})^2 \+ \\dots \+ (a\_{m} \- b\_{m})^2}.\\]
This formula still corresponds to a straight\-line distance, just in a space
with more dimensions. Suppose we want to calculate the distance between a new
observation with a perimeter of 0, concavity of 3\.5, and symmetry of 1, and
another observation with a perimeter, concavity, and symmetry of 0\.417, 2\.31, and
0\.837 respectively. We have two observations with three predictor variables:
perimeter, concavity, and symmetry. Previously, when we had two variables, we
added up the squared difference between each of our (two) variables, and then
took the square root. Now we will do the same, except for our three variables.
We calculate the distance as follows
\\\[\\mathrm{Distance} \=\\sqrt{(0 \- 0\.417\)^2 \+ (3\.5 \- 2\.31\)^2 \+ (1 \- 0\.837\)^2} \= 1\.27\.\\]
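As a quick check of this arithmetic, we can evaluate the same expression directly in R (the coordinate values are simply those from the example above), and it indeed evaluates to approximately 1\.27:
```
sqrt((0 - 0.417)^2 + (3.5 - 2.31)^2 + (1 - 0.837)^2)
```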
Let’s calculate the distances between our new observation and each of the
observations in the training set to find the \\(K\=5\\) neighbors when we have these
three predictors.
```
new_obs_Perimeter <- 0
new_obs_Concavity <- 3.5
new_obs_Symmetry <- 1
cancer |>
select(ID, Perimeter, Concavity, Symmetry, Class) |>
mutate(dist_from_new = sqrt((Perimeter - new_obs_Perimeter)^2 +
(Concavity - new_obs_Concavity)^2 +
(Symmetry - new_obs_Symmetry)^2)) |>
slice_min(dist_from_new, n = 5) # take the 5 rows of minimum distance
```
```
## # A tibble: 5 × 6
## ID Perimeter Concavity Symmetry Class dist_from_new
## <dbl> <dbl> <dbl> <dbl> <fct> <dbl>
## 1 907914 0.417 2.31 0.837 Malignant 1.27
## 2 90439701 1.33 2.89 1.10 Malignant 1.47
## 3 925622 0.470 2.08 1.15 Malignant 1.50
## 4 859471 -1.37 2.81 1.09 Benign 1.53
## 5 899667 0.623 2.54 2.06 Malignant 1.56
```
Based on \\(K\=5\\) nearest neighbors with these three predictors, we would classify
the new observation as malignant since 4 out of 5 of the nearest neighbors are from the malignant class.
Figure [5\.8](classification1.html#fig:05-more) shows what the data look like when we visualize them
as a 3\-dimensional scatter with lines from the new observation to its five nearest neighbors.
Figure 5\.8: 3D scatter plot of the standardized symmetry, concavity, and perimeter variables. Note that in general we recommend against using 3D visualizations; here we show the data in 3D only to illustrate what higher dimensions and nearest neighbors look like, for learning purposes.
### 5\.5\.3 Summary of K\-nearest neighbors algorithm
In order to classify a new observation using a K\-nearest neighbors classifier, we have to do the following (a short code sketch of these steps follows the list):
1. Compute the distance between the new observation and each observation in the training set.
2. Sort the data table in ascending order according to the distances.
3. Choose the top \\(K\\) rows of the sorted table.
4. Classify the new observation based on a majority vote of the neighbor classes.
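The `tidymodels` functions introduced in the next section carry out these steps for us. To make the procedure concrete first, here is a minimal sketch that performs the four steps by hand for the two\-predictor example above using standard `dplyr` functions; the helper name `classify_knn` is our own illustrative choice and is not part of any package.
```
# a hand-rolled K-nearest neighbors classifier for two predictors,
# following the four steps listed above (for illustration only)
classify_knn <- function(train, new_perimeter, new_concavity, k = 5) {
  train |>
    # 1. compute the distance from each training observation to the new one
    mutate(dist_from_new = sqrt((Perimeter - new_perimeter)^2 +
                                  (Concavity - new_concavity)^2)) |>
    # 2. & 3. sort by distance and keep the K closest rows
    slice_min(dist_from_new, n = k) |>
    # 4. majority vote: tally the neighbors' classes and keep the most common
    #    (with an odd K and two classes, there is never a tie)
    count(Class) |>
    slice_max(n, n = 1) |>
    pull(Class)
}

classify_knn(cancer, new_perimeter = 0, new_concavity = 3.5, k = 5)
```
For the new observation with perimeter 0 and concavity 3\.5, this returns the malignant class, matching the majority vote we worked out above.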
5\.6 K\-nearest neighbors with `tidymodels`
-------------------------------------------
Coding the K\-nearest neighbors algorithm in R ourselves can get complicated,
especially if we want to handle multiple classes, more than two variables,
or predict the class for multiple new observations. Thankfully, in R,
the K\-nearest neighbors algorithm is
implemented in [the `parsnip` R package](https://parsnip.tidymodels.org/) ([Kuhn and Vaughan 2021](#ref-parsnip))
included in `tidymodels`, along with
many [other models](https://www.tidymodels.org/find/parsnip/)
that you will encounter in this and future chapters of the book. The `tidymodels` collection
provides tools to help make and use models, such as classifiers. Using the packages
in this collection will help keep our code simple, readable and accurate; the
less we have to code ourselves, the fewer mistakes we will likely make. We
start by loading `tidymodels`.
```
library(tidymodels)
```
Let’s walk through how to use `tidymodels` to perform K\-nearest neighbors classification.
We will use the `cancer` data set from above, with
perimeter and concavity as predictors and \\(K \= 5\\) neighbors to build our classifier. Then
we will use the classifier to predict the diagnosis label for a new observation with
perimeter 0, concavity 3\.5, and an unknown diagnosis label. Let’s pick out our two desired
predictor variables and class label and store them as a new data set named `cancer_train`:
```
cancer_train <- cancer |>
select(Class, Perimeter, Concavity)
cancer_train
```
```
## # A tibble: 569 × 3
## Class Perimeter Concavity
## <fct> <dbl> <dbl>
## 1 Malignant 1.27 2.65
## 2 Malignant 1.68 -0.0238
## 3 Malignant 1.57 1.36
## 4 Malignant -0.592 1.91
## 5 Malignant 1.78 1.37
## 6 Malignant -0.387 0.866
## 7 Malignant 1.14 0.300
## 8 Malignant -0.0728 0.0610
## 9 Malignant -0.184 1.22
## 10 Malignant -0.329 1.74
## # ℹ 559 more rows
```
Next, we create a *model specification* for K\-nearest neighbors classification
by calling the `nearest_neighbor` function, specifying that we want to use \\(K \= 5\\) neighbors
(we will discuss how to choose \\(K\\) in the next chapter) and that each neighboring point should have the same weight when voting
(`weight_func = "rectangular"`). The `weight_func` argument controls
how neighbors vote when classifying a new observation; by setting it to `"rectangular"`,
each of the \\(K\\) nearest neighbors gets exactly 1 vote as described above. Other choices,
which weigh each neighbor’s vote differently, can be found on
[the `parsnip` website](https://parsnip.tidymodels.org/reference/nearest_neighbor.html).
In the `set_engine` argument, we specify which package or system will be used for training
the model. Here `kknn` is the R package we will use for performing K\-nearest neighbors classification.
Finally, we specify that this is a classification problem with the `set_mode` function.
```
knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = 5) |>
set_engine("kknn") |>
set_mode("classification")
knn_spec
```
```
## K-Nearest Neighbor Model Specification (classification)
##
## Main Arguments:
## neighbors = 5
## weight_func = rectangular
##
## Computational engine: kknn
```
In order to fit the model on the breast cancer data, we need to pass the model specification
and the data set to the `fit` function. We also need to specify what variables to use as predictors
and what variable to use as the response. Below, the `Class ~ Perimeter + Concavity` argument specifies
that `Class` is the response variable (the one we want to predict),
and both `Perimeter` and `Concavity` are to be used as the predictors.
```
knn_fit <- knn_spec |>
fit(Class ~ Perimeter + Concavity, data = cancer_train)
```
We can also use a convenient shorthand syntax using a period, `Class ~ .`, to indicate
that we want to use every variable *except* `Class` as a predictor in the model.
In this particular setup, since `Concavity` and `Perimeter` are the only two predictors in the `cancer_train`
data frame, `Class ~ Perimeter + Concavity` and `Class ~ .` are equivalent.
In general, you can choose individual predictors using the `+` symbol, or you can specify to
use *all* predictors using the `.` symbol.
```
knn_fit <- knn_spec |>
fit(Class ~ ., data = cancer_train)
knn_fit
```
```
## parsnip model object
##
##
## Call:
## kknn::train.kknn(formula = Class ~ ., data = data, ks = min_rows(5, data, 5)
## , kernel = ~"rectangular")
##
## Type of response variable: nominal
## Minimal misclassification: 0.07557118
## Best kernel: rectangular
## Best k: 5
```
Here you can see the final trained model summary. It confirms that the computational engine used
to train the model was `kknn::train.kknn`. It also shows the fraction of errors made by
the K\-nearest neighbors model, but we will ignore this for now and discuss it in more detail
in the next chapter.
Finally, it shows (somewhat confusingly) that the “best” weight function
was “rectangular” and “best” setting of \\(K\\) was 5; but since we specified these earlier,
R is just repeating those settings to us here. In the next chapter, we will actually
let R find the value of \\(K\\) for us.
Finally, we make the prediction on the new observation by calling the `predict` function,
passing both the fit object we just created and the new observation itself. As above,
when we ran the K\-nearest neighbors
classification algorithm manually, the `knn_fit` object classifies the new observation as
malignant. Note that the `predict` function outputs a data frame with a single
variable named `.pred_class`.
```
new_obs <- tibble(Perimeter = 0, Concavity = 3.5)
predict(knn_fit, new_obs)
```
```
## # A tibble: 1 × 1
## .pred_class
## <fct>
## 1 Malignant
```
Is this predicted malignant label the actual class for this observation?
Well, we don’t know because we do not have this
observation’s diagnosis— that is what we were trying to predict! The
classifier’s prediction is not necessarily correct, but in the next chapter, we will
learn ways to quantify how accurate we think our predictions are.
5\.7 Data preprocessing with `tidymodels`
-----------------------------------------
### 5\.7\.1 Centering and scaling
When using K\-nearest neighbors classification, the *scale* of each variable
(i.e., its size and range of values) matters. Since the classifier predicts
classes by identifying observations nearest to it, any variables with
a large scale will have a much larger effect than variables with a small
scale. But just because a variable has a large scale *doesn’t mean* that it is
more important for making accurate predictions. For example, suppose you have a
data set with two features, salary (in dollars) and years of education, and
you want to predict the corresponding type of job. When we compute the
neighbor distances, a difference of $1000 is huge compared to a difference of
10 years of education. But in terms of the question we are trying to answer, the
opposite is true: a difference of 10 years of education matters far more than a
difference of $1000 in yearly salary!
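To see this concretely, here is a small sketch with entirely made\-up numbers: two hypothetical people whose salaries differ by $1000 and whose years of education differ by 10 (the standard deviations used for scaling below are also invented for illustration).
```
# two hypothetical observations (all values invented for illustration)
salary_a <- 60000
education_a <- 2
salary_b <- 61000
education_b <- 12

# raw straight-line distance: dominated almost entirely by the salary difference
sqrt((salary_a - salary_b)^2 + (education_a - education_b)^2)

# distance after dividing each difference by an invented standard deviation
# (15000 for salary, 3 for education); the education difference now matters
sqrt(((salary_a - salary_b) / 15000)^2 + ((education_a - education_b) / 3)^2)
```
The first distance is essentially 1000, with the education difference barely registering, while the second is about 3\.3 and is driven mostly by the education difference. This is exactly the imbalance that standardization, described next, is designed to correct.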
In many other predictive models, the *center* of each variable (e.g., its mean)
matters as well. For example, if we had a data set with a temperature variable
measured in degrees Kelvin, and the same data set with temperature measured in
degrees Celsius, the two variables would differ by a constant shift of 273
(even though they contain exactly the same information). Likewise, in our
hypothetical job classification example, we would likely see that the center of
the salary variable is in the tens of thousands, while the center of the years
of education variable is in the single digits. Although this doesn’t affect the
K\-nearest neighbors classification algorithm, this large shift can change the
outcome of using many other predictive models.
To scale and center our data, we need to find
our variables’ *mean* (the average, which quantifies the “central” value of a
set of numbers) and *standard deviation* (a number quantifying how spread out values are).
For each observed value of the variable, we subtract the mean (i.e., center the variable)
and divide by the standard deviation (i.e., scale the variable). When we do this, the data
is said to be *standardized*, and all variables in a data set will have a mean of 0
and a standard deviation of 1\. To illustrate the effect that standardization can have on the K\-nearest
neighbors algorithm, we will read in the original, unstandardized Wisconsin breast
cancer data set; we have been using a standardized version of the data set up
until now. As before, we will convert the `Class` variable to the factor type
and rename the values to “Malignant” and “Benign.”
To keep things simple, we will just use the `Area`, `Smoothness`, and `Class`
variables:
```
unscaled_cancer <- read_csv("data/wdbc_unscaled.csv") |>
mutate(Class = as_factor(Class)) |>
mutate(Class = fct_recode(Class, "Benign" = "B", "Malignant" = "M")) |>
select(Class, Area, Smoothness)
unscaled_cancer
```
```
## # A tibble: 569 × 3
## Class Area Smoothness
## <fct> <dbl> <dbl>
## 1 Malignant 1001 0.118
## 2 Malignant 1326 0.0847
## 3 Malignant 1203 0.110
## 4 Malignant 386. 0.142
## 5 Malignant 1297 0.100
## 6 Malignant 477. 0.128
## 7 Malignant 1040 0.0946
## 8 Malignant 578. 0.119
## 9 Malignant 520. 0.127
## 10 Malignant 476. 0.119
## # ℹ 559 more rows
```
Looking at the unscaled and uncentered data above, you can see that the differences
between the values for area measurements are much larger than those for
smoothness. Will this affect
predictions? In order to find out, we will create a scatter plot of these two
predictors (colored by diagnosis) for both the unstandardized data we just
loaded, and the standardized version of that same data. But first, we need to
standardize the `unscaled_cancer` data set with `tidymodels`.
In the `tidymodels` framework, all data preprocessing happens
using a `recipe` from [the `recipes` R package](https://recipes.tidymodels.org/) ([Kuhn and Wickham 2021](#ref-recipes)).
Here we will initialize a recipe for
the `unscaled_cancer` data above, specifying
that the `Class` variable is the response, and all other variables are predictors:
```
uc_recipe <- recipe(Class ~ ., data = unscaled_cancer)
uc_recipe
```
```
##
## ── Recipe ──────────
##
## ── Inputs
## Number of variables by role
## outcome: 1
## predictor: 2
```
So far, there is not much in the recipe; just a statement about the number of response variables
and predictors. Let’s add
scaling (`step_scale`) and
centering (`step_center`) steps for
all of the predictors so that they each have a mean of 0 and standard deviation of 1\.
Note that the `recipes` package actually provides `step_normalize`, which does both centering and scaling in
a single recipe step; in this book we will keep `step_scale` and `step_center` separate
to emphasize conceptually that there are two steps happening.
The `prep` function finalizes the recipe by using the data (here, `unscaled_cancer`)
to compute anything necessary to run the recipe (in this case, the column means and standard
deviations):
```
uc_recipe <- uc_recipe |>
step_scale(all_predictors()) |>
step_center(all_predictors()) |>
prep()
uc_recipe
```
```
##
## ── Recipe ──────────
##
## ── Inputs
## Number of variables by role
## outcome: 1
## predictor: 2
##
## ── Training information
## Training data contained 569 data points and no incomplete rows.
##
## ── Operations
## • Scaling for: Area, Smoothness | Trained
## • Centering for: Area, Smoothness | Trained
```
You can now see that the recipe includes a scaling and centering step for all predictor variables.
Note that when you add a step to a recipe, you must specify what columns to apply the step to.
Here we used the `all_predictors()` function to specify that each step should be applied to
all predictor variables. However, there are a number of different arguments one could use here,
as well as naming particular columns with the same syntax as the `select` function.
For example:
* `all_nominal()` and `all_numeric()`: specify all categorical or all numeric variables
* `all_predictors()` and `all_outcomes()`: specify all predictor or all response variables
* `Area, Smoothness`: specify both the `Area` and `Smoothness` variable
* `-Class`: specify everything except the `Class` variable
You can find a full set of all the steps and variable selection functions
on the [`recipes` reference page](https://recipes.tidymodels.org/reference/index.html).
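As noted above, the centering and scaling could also be written as a single `step_normalize` step. A minimal sketch (the object name `uc_recipe_alt` is just for illustration):
```
# equivalent preprocessing with one step instead of two
uc_recipe_alt <- recipe(Class ~ ., data = unscaled_cancer) |>
  step_normalize(all_predictors()) |>
  prep()
```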
At this point, we have calculated the required statistics based on the data input into the
recipe, but the data are not yet scaled and centered. To actually scale and center
the data, we need to apply the `bake` function to the unscaled data.
```
scaled_cancer <- bake(uc_recipe, unscaled_cancer)
scaled_cancer
```
```
## # A tibble: 569 × 3
## Area Smoothness Class
## <dbl> <dbl> <fct>
## 1 0.984 1.57 Malignant
## 2 1.91 -0.826 Malignant
## 3 1.56 0.941 Malignant
## 4 -0.764 3.28 Malignant
## 5 1.82 0.280 Malignant
## 6 -0.505 2.24 Malignant
## 7 1.09 -0.123 Malignant
## 8 -0.219 1.60 Malignant
## 9 -0.384 2.20 Malignant
## 10 -0.509 1.58 Malignant
## # ℹ 559 more rows
```
It may seem redundant that we had to both `bake` *and* `prep` to scale and center the data.
However, we do this in two steps so we can specify a different data set in the `bake` step if we want.
For example, we may want to specify new data that were not part of the training set.
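As a minimal sketch of baking the already-prepped recipe on data other than what was passed to `prep`, we can reuse a few rows of `unscaled_cancer` purely for illustration; the means and standard deviations learned during `prep` are applied to whatever data we supply:
```
# apply the trained recipe to three rows of data
bake(uc_recipe, slice(unscaled_cancer, 1:3))
```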
You may wonder why we are doing so much work just to center and
scale our variables. Can’t we just manually scale and center the `Area` and
`Smoothness` variables ourselves before building our K\-nearest neighbors model? Well,
technically *yes*; but doing so is error\-prone. In particular, we might
accidentally forget to apply the same centering / scaling when making
predictions, or accidentally apply a *different* centering / scaling than what
we used while training. Proper use of a `recipe` helps keep our code simple,
readable, and error\-free. Furthermore, note that using `prep` and `bake` is
required only when you want to inspect the result of the preprocessing steps
yourself. You will see further on in Section
[5\.8](classification1.html#puttingittogetherworkflow) that `tidymodels` provides tools to
automatically apply `prep` and `bake` as necessary without additional coding effort.
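For comparison, here is a minimal sketch of what the error-prone manual approach might look like (the variable names are illustrative); the danger is forgetting to reuse these exact means and standard deviations when standardizing any future observations:
```
# compute the training means and standard deviations by hand
area_mean <- mean(unscaled_cancer$Area)
area_sd <- sd(unscaled_cancer$Area)
smoothness_mean <- mean(unscaled_cancer$Smoothness)
smoothness_sd <- sd(unscaled_cancer$Smoothness)

# standardize each predictor manually
manually_scaled_cancer <- unscaled_cancer |>
  mutate(Area = (Area - area_mean) / area_sd,
         Smoothness = (Smoothness - smoothness_mean) / smoothness_sd)
```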
Figure [5\.9](classification1.html#fig:05-scaling-plt) shows the two scatter plots side\-by\-side—one for `unscaled_cancer` and one for
`scaled_cancer`. Each has the same new observation annotated with its \\(K\=3\\) nearest neighbors.
In the original unstandardized data plot, you can see some odd choices
for the three nearest neighbors. In particular, the “neighbors” are visually
well within the cloud of benign observations, and the neighbors are all nearly
vertically aligned with the new observation (which is why it looks like there
is only one black line on this plot). Figure [5\.10](classification1.html#fig:05-scaling-plt-zoomed)
shows a close\-up of that region on the unstandardized plot. Here the computation of nearest
neighbors is dominated by the much larger\-scale area variable. The plot for standardized data
on the right in Figure [5\.9](classification1.html#fig:05-scaling-plt) shows a much more intuitively reasonable
selection of nearest neighbors. Thus, standardizing the data can change things
in an important way when we are using predictive algorithms.
Standardizing your data should be a part of the preprocessing you do
before predictive modeling and you should always think carefully about your problem domain and
whether you need to standardize your data.
Figure 5\.9: Comparison of K \= 3 nearest neighbors with unstandardized and standardized data.
Figure 5\.10: Close\-up of three nearest neighbors for unstandardized data.
### 5\.7\.2 Balancing
Another potential issue in a data set for a classifier is *class imbalance*,
i.e., when one label is much more common than another. Since classifiers like
the K\-nearest neighbors algorithm use the labels of nearby points to predict
the label of a new point, if there are many more data points with one label
overall, the algorithm is more likely to pick that label in general (even if
the “pattern” of data suggests otherwise). Class imbalance is actually quite a
common and important problem: from rare disease diagnosis to malicious email
detection, there are many cases in which the “important” class to identify
(presence of disease, malicious email) is much rarer than the “unimportant”
class (no disease, normal email).
To better illustrate the problem, let’s revisit the scaled breast cancer data,
`cancer`; except now we will remove many of the observations of malignant tumors, simulating
what the data would look like if the cancer was rare. We will do this by
picking only 3 observations from the malignant group, and keeping all
of the benign observations.
We choose these 3 observations using the `slice_head`
function, which takes two arguments: a data frame\-like object,
and the number of rows to select from the top (`n`).
We will use the `bind_rows` function to glue the two resulting filtered
data frames back together, and name the result `rare_cancer`.
The new imbalanced data is shown in Figure [5\.11](classification1.html#fig:05-unbalanced).
```
rare_cancer <- bind_rows(
filter(cancer, Class == "Benign"),
cancer |> filter(Class == "Malignant") |> slice_head(n = 3)
) |>
select(Class, Perimeter, Concavity)
rare_plot <- rare_cancer |>
ggplot(aes(x = Perimeter, y = Concavity, color = Class)) +
geom_point(alpha = 0.5) +
labs(x = "Perimeter (standardized)",
y = "Concavity (standardized)",
color = "Diagnosis") +
scale_color_manual(values = c("darkorange", "steelblue")) +
theme(text = element_text(size = 12))
rare_plot
```
Figure 5\.11: Imbalanced data.
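We can confirm just how imbalanced the new data set is with a quick count of each class; a minimal sketch (with 357 benign observations and only 3 malignant ones, the counts are heavily skewed):
```
# count the number of observations in each class
rare_cancer |>
  group_by(Class) |>
  summarize(n = n())
```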
Suppose we now decided to use \\(K \= 7\\) in K\-nearest neighbors classification.
With only 3 observations of malignant tumors, the classifier
will *always predict that the tumor is benign, no matter what its concavity and perimeter
are!* This is because in a majority vote of 7 observations, at most 3 will be
malignant (we only have 3 total malignant observations), so at least 4 must be
benign, and the benign vote will always win. For example, Figure [5\.12](classification1.html#fig:05-upsample)
shows what happens for a new tumor observation that is quite close to three observations
in the training data that were tagged as malignant.
Figure 5\.12: Imbalanced data with 7 nearest neighbors to a new observation highlighted.
Figure [5\.13](classification1.html#fig:05-upsample-2) shows what happens if we set the background color of
each area of the plot to the prediction the K\-nearest neighbors
classifier would make for a new observation at that location. We can see that the decision is
always “benign,” corresponding to the blue color.
Figure 5\.13: Imbalanced data with background color indicating the decision of the classifier; the points represent the labeled data.
Despite the simplicity of the problem, solving it in a statistically sound manner is actually
fairly nuanced, and a careful treatment would require a lot more detail and mathematics than we will cover in this textbook.
For the present purposes, it will suffice to rebalance the data by *oversampling* the rare class.
In other words, we will replicate rare observations multiple times in our data set to give them more
voting power in the K\-nearest neighbors algorithm. In order to do this, we will add an oversampling
step to the earlier `uc_recipe` recipe with the `step_upsample` function from the `themis` R package.
We show below how to do this, and also
use the `group_by` and `summarize` functions to see that our classes are now balanced:
```
library(themis)
ups_recipe <- recipe(Class ~ ., data = rare_cancer) |>
step_upsample(Class, over_ratio = 1, skip = FALSE) |>
prep()
ups_recipe
```
```
##
## ── Recipe ──────────
##
## ── Inputs
## Number of variables by role
## outcome: 1
## predictor: 2
##
## ── Training information
## Training data contained 360 data points and no incomplete rows.
##
## ── Operations
## • Up-sampling based on: Class | Trained
```
```
upsampled_cancer <- bake(ups_recipe, rare_cancer)
upsampled_cancer |>
group_by(Class) |>
summarize(n = n())
```
```
## # A tibble: 2 × 2
## Class n
## <fct> <int>
## 1 Malignant 357
## 2 Benign 357
```
Now suppose we train our K\-nearest neighbors classifier with \\(K\=7\\) on this *balanced* data.
Figure [5\.14](classification1.html#fig:05-upsample-plot) shows what happens now when we set the background color
of each area of our scatter plot to the decision the K\-nearest neighbors
classifier would make. We can see that the decision is more reasonable; when the points are close
to those labeled malignant, the classifier predicts a malignant tumor, and vice versa when they are
closer to the benign tumor observations.
Figure 5\.14: Upsampled data with background color indicating the decision of the classifier.
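For completeness, here is a minimal sketch (not shown in the original analysis) of how such a classifier could be trained on the rebalanced data, reusing the baked `upsampled_cancer` data frame from above; the object names are illustrative:
```
# K-nearest neighbors specification with K = 7
knn_spec_balanced <- nearest_neighbor(weight_func = "rectangular", neighbors = 7) |>
  set_engine("kknn") |>
  set_mode("classification")

# fit on the upsampled (balanced) data
knn_fit_balanced <- fit(knn_spec_balanced,
                        Class ~ Perimeter + Concavity,
                        data = upsampled_cancer)
```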
### 5\.7\.3 Missing data
One of the most common issues in real data sets in the wild is *missing data*,
i.e., observations where the values of some of the variables were not recorded.
Unfortunately, as common as it is, handling missing data properly is very
challenging and generally relies on expert knowledge about the data, setting,
and how the data were collected. One typical challenge with missing data is
that missing entries can be *informative*: the very fact that an entry is
missing can itself be related to the values of other variables. For example, survey
participants from a marginalized group of people may be less likely to respond
to certain kinds of questions if they fear that answering honestly will come
with negative consequences. In that case, if we were to simply throw away data
with missing entries, we would bias the conclusions of the survey by
inadvertently removing many members of that group of respondents. So ignoring
this issue in real problems can easily lead to misleading analyses, with
detrimental impacts. In this book, we will cover only those techniques for
dealing with missing entries in situations where missing entries are just
“randomly missing”, i.e., where the fact that certain entries are missing
*isn’t related to anything else* about the observation.
Let’s load and examine a modified subset of the tumor image data
that has a few missing entries:
```
missing_cancer <- read_csv("data/wdbc_missing.csv") |>
select(Class, Radius, Texture, Perimeter) |>
mutate(Class = as_factor(Class)) |>
mutate(Class = fct_recode(Class, "Malignant" = "M", "Benign" = "B"))
missing_cancer
```
```
## # A tibble: 7 × 4
## Class Radius Texture Perimeter
## <fct> <dbl> <dbl> <dbl>
## 1 Malignant NA NA 1.27
## 2 Malignant 1.83 -0.353 1.68
## 3 Malignant 1.58 NA 1.57
## 4 Malignant -0.768 0.254 -0.592
## 5 Malignant 1.75 -1.15 1.78
## 6 Malignant -0.476 -0.835 -0.387
## 7 Malignant 1.17 0.161 1.14
```
Recall that K\-nearest neighbors classification makes predictions by computing
the straight\-line distance to nearby training observations, and hence requires
access to the values of *all* variables for *all* observations in the training
data. So how can we perform K\-nearest neighbors classification in the presence
of missing data? Well, since there are not too many observations with missing
entries, one option is to simply remove those observations prior to building
the K\-nearest neighbors classifier. We can accomplish this by using the
`drop_na` function from `tidyverse` prior to working with the data.
```
no_missing_cancer <- missing_cancer |> drop_na()
no_missing_cancer
```
```
## # A tibble: 5 × 4
## Class Radius Texture Perimeter
## <fct> <dbl> <dbl> <dbl>
## 1 Malignant 1.83 -0.353 1.68
## 2 Malignant -0.768 0.254 -0.592
## 3 Malignant 1.75 -1.15 1.78
## 4 Malignant -0.476 -0.835 -0.387
## 5 Malignant 1.17 0.161 1.14
```
However, this strategy will not work when many of the rows have missing
entries, as we may end up throwing away too much data. In this case, another
possible approach is to *impute* the missing entries, i.e., fill in synthetic
values based on the other observations in the data set. One reasonable choice
is to perform *mean imputation*, where missing entries are filled in using the
mean of the present entries in each variable. To perform mean imputation, we
add the `step_impute_mean`
step to the `tidymodels` preprocessing recipe.
```
impute_missing_recipe <- recipe(Class ~ ., data = missing_cancer) |>
step_impute_mean(all_predictors()) |>
prep()
impute_missing_recipe
```
```
##
## ── Recipe ──────────
##
## ── Inputs
## Number of variables by role
## outcome: 1
## predictor: 3
##
## ── Training information
## Training data contained 7 data points and 2 incomplete rows.
##
## ── Operations
## • Mean imputation for: Radius, Texture, Perimeter | Trained
```
To visualize what mean imputation does, let’s just apply the recipe directly to the `missing_cancer`
data frame using the `bake` function. The imputation step fills in the missing
entries with the mean values of their corresponding variables.
```
imputed_cancer <- bake(impute_missing_recipe, missing_cancer)
imputed_cancer
```
```
## # A tibble: 7 × 4
## Radius Texture Perimeter Class
## <dbl> <dbl> <dbl> <fct>
## 1 0.847 -0.385 1.27 Malignant
## 2 1.83 -0.353 1.68 Malignant
## 3 1.58 -0.385 1.57 Malignant
## 4 -0.768 0.254 -0.592 Malignant
## 5 1.75 -1.15 1.78 Malignant
## 6 -0.476 -0.835 -0.387 Malignant
## 7 1.17 0.161 1.14 Malignant
```
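As a quick check, a minimal sketch that computes the mean of the observed (non-missing) entries in each predictor; these are exactly the values filled in above (for example, 0.847 for `Radius` and -0.385 for `Texture`):
```
# means of the present (non-missing) entries in each predictor
missing_cancer |>
  summarize(across(Radius:Perimeter, \(x) mean(x, na.rm = TRUE)))
```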
Many other options for missing data imputation can be found in
[the `recipes` documentation](https://recipes.tidymodels.org/reference/index.html). However
you decide to handle missing data in your data analysis, it is always crucial
to think critically about the setting, how the data were collected, and the
question you are answering.
5\.8 Putting it together in a `workflow`
----------------------------------------
The `tidymodels` package collection also provides the `workflow`, a way to
chain together
multiple data analysis steps without a lot of otherwise necessary code for
intermediate steps. To illustrate the whole pipeline, let’s start from scratch
with the `wdbc_unscaled.csv` data. First we will load the data, create a
model, and specify a recipe for how the data should be preprocessed:
```
# load the unscaled cancer data
# and make sure the response variable, Class, is a factor
unscaled_cancer <- read_csv("data/wdbc_unscaled.csv") |>
mutate(Class = as_factor(Class)) |>
mutate(Class = fct_recode(Class, "Malignant" = "M", "Benign" = "B"))
# create the K-NN model
knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = 7) |>
set_engine("kknn") |>
set_mode("classification")
# create the centering / scaling recipe
uc_recipe <- recipe(Class ~ Area + Smoothness, data = unscaled_cancer) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
```
Note that each of these steps is exactly the same as earlier, except for one major difference:
we did not use the `select` function to extract the relevant variables from the data frame,
and instead simply specified the relevant variables to use via the
formula `Class ~ Area + Smoothness` (instead of `Class ~ .`) in the recipe.
You will also notice that we did not call `prep()` on the recipe; this is unnecessary when it is
placed in a workflow.
We will now place these steps in a `workflow` using the `add_recipe` and `add_model` functions,
and finally we will use the `fit` function to run the whole workflow on the `unscaled_cancer` data.
Note another difference from earlier here: we do not include a formula in the `fit` function. This
is again because we included the formula in the recipe, so there is no need to respecify it:
```
knn_fit <- workflow() |>
add_recipe(uc_recipe) |>
add_model(knn_spec) |>
fit(data = unscaled_cancer)
knn_fit
```
```
## ══ Workflow [trained] ══════════
## Preprocessor: Recipe
## Model: nearest_neighbor()
##
## ── Preprocessor ──────────
## 2 Recipe Steps
##
## • step_scale()
## • step_center()
##
## ── Model ──────────
##
## Call:
## kknn::train.kknn(formula = ..y ~ ., data = data, ks = min_rows(7, data, 5),
## kernel = ~"rectangular")
##
## Type of response variable: nominal
## Minimal misclassification: 0.112478
## Best kernel: rectangular
## Best k: 7
```
As before, the fit object lists the function that trains the model as well as the “best” settings
for the number of neighbors and weight function (for now, these are just the values we chose
manually when we created `knn_spec` above). But now the fit object also includes information about
the overall workflow, including the centering and scaling preprocessing steps.
In other words, when we use the `predict` function with the `knn_fit` object to make a prediction for a new
observation, it will first apply the same recipe steps to the new observation.
As an example, we will predict the class label of two new observations:
one with `Area = 500` and `Smoothness = 0.075`, and one with `Area = 1500` and `Smoothness = 0.1`.
```
new_observation <- tibble(Area = c(500, 1500), Smoothness = c(0.075, 0.1))
prediction <- predict(knn_fit, new_observation)
prediction
```
```
## # A tibble: 2 × 1
## .pred_class
## <fct>
## 1 Benign
## 2 Malignant
```
The classifier predicts that the first observation is benign, while the second is
malignant. Figure [5\.15](classification1.html#fig:05-workflow-plot-show) visualizes the predictions that this
trained K\-nearest neighbors model will make on a large range of new observations.
Although you have seen colored prediction map visualizations like this a few times now,
we have not included the code to generate them, as it is a little bit complicated.
For the interested reader who wants a learning challenge, we now include it below.
The basic idea is to create a grid of synthetic new observations using the `expand.grid` function,
predict the label of each, and visualize the predictions with a colored scatter having a very high transparency
(low `alpha` value) and large point radius. See if you can figure out what each line is doing!
> **Note:** Understanding this code is not required for the remainder of the
> textbook. It is included for those readers who would like to use similar
> visualizations in their own data analyses.
```
# create the grid of area/smoothness vals, and arrange in a data frame
are_grid <- seq(min(unscaled_cancer$Area),
max(unscaled_cancer$Area),
length.out = 100)
smo_grid <- seq(min(unscaled_cancer$Smoothness),
max(unscaled_cancer$Smoothness),
length.out = 100)
asgrid <- as_tibble(expand.grid(Area = are_grid,
Smoothness = smo_grid))
# use the fit workflow to make predictions at the grid points
knnPredGrid <- predict(knn_fit, asgrid)
# bind the predictions as a new column with the grid points
prediction_table <- bind_cols(knnPredGrid, asgrid) |>
rename(Class = .pred_class)
# plot:
# 1. the colored scatter of the original data
# 2. the faded colored scatter for the grid points
wkflw_plot <-
ggplot() +
geom_point(data = unscaled_cancer,
mapping = aes(x = Area,
y = Smoothness,
color = Class),
alpha = 0.75) +
geom_point(data = prediction_table,
mapping = aes(x = Area,
y = Smoothness,
color = Class),
alpha = 0.02,
size = 5) +
labs(color = "Diagnosis",
x = "Area",
y = "Smoothness") +
scale_color_manual(values = c("darkorange", "steelblue")) +
theme(text = element_text(size = 12))
wkflw_plot
```
Figure 5\.15: Scatter plot of smoothness versus area where background color indicates the decision of the classifier.
5\.9 Exercises
--------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Classification I: training and predicting” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
| Data Science |
ubc-dsci.github.io | https://ubc-dsci.github.io/introduction-to-datascience/classification2.html |
Chapter 6 Classification II: evaluation \& tuning
=================================================
6\.1 Overview
-------------
This chapter continues the introduction to predictive modeling through
classification. While the previous chapter covered training and data
preprocessing, this chapter focuses on how to evaluate the performance of
a classifier, as well as how to improve the classifier (where possible)
to maximize its accuracy.
6\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Describe what training, validation, and test data sets are and how they are used in classification.
* Split data into training, validation, and test data sets.
* Describe what a random seed is and its importance in reproducible data analysis.
* Set the random seed in R using the `set.seed` function.
* Describe and interpret accuracy, precision, recall, and confusion matrices.
* Evaluate classification accuracy, precision, and recall in R using a test set, a single validation set, and cross\-validation.
* Produce a confusion matrix in R.
* Choose the number of neighbors in a K\-nearest neighbors classifier by maximizing estimated cross\-validation accuracy.
* Describe underfitting and overfitting, and relate it to the number of neighbors in K\-nearest neighbors classification.
* Describe the advantages and disadvantages of the K\-nearest neighbors classification algorithm.
6\.3 Evaluating performance
---------------------------
Sometimes our classifier might make the wrong prediction. A classifier does not
need to be right 100% of the time to be useful, though we don’t want the
classifier to make too many wrong predictions. How do we measure how “good” our
classifier is? Let’s revisit the
[breast cancer images data](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29) ([Street, Wolberg, and Mangasarian 1993](#ref-streetbreastcancer))
and think about how our classifier will be used in practice. A biopsy will be
performed on a *new* patient’s tumor, the resulting image will be analyzed,
and the classifier will be asked to decide whether the tumor is benign or
malignant. The key word here is *new*: our classifier is “good” if it provides
accurate predictions on data *not seen during training*, as this implies that
it has actually learned about the relationship between the predictor variables and response variable,
as opposed to simply memorizing the labels of individual training data examples.
But then, how can we evaluate our classifier without visiting the hospital to collect more
tumor images?
The trick is to split the data into a **training set** and **test set** (Figure [6\.1](classification2.html#fig:06-training-test))
and use only the **training set** when building the classifier.
Then, to evaluate the performance of the classifier, we first set aside the labels from the **test set**,
and then use the classifier to predict the labels in the **test set**. If our predictions match the actual
labels for the observations in the **test set**, then we have some
confidence that our classifier might also accurately predict the class
labels for new observations without known class labels.
> **Note:** If there were a golden rule of machine learning, it might be this:
> *you cannot use the test data to build the model!* If you do, the model gets to
> “see” the test data in advance, making it look more accurate than it really
> is. Imagine how bad it would be to overestimate your classifier’s accuracy
> when predicting whether a patient’s tumor is malignant or benign!
Figure 6\.1: Splitting the data into training and testing sets.
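In R, this kind of split is typically done with functions from the `rsample` package (part of `tidymodels`). A minimal sketch, assuming the breast cancer data frame `cancer` from the previous chapter has been read in; the 75% proportion and stratification by `Class` are illustrative choices:
```
# split the data, keeping the class proportions similar in both sets
cancer_split <- initial_split(cancer, prop = 0.75, strata = Class)
cancer_train <- training(cancer_split)
cancer_test <- testing(cancer_split)
```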
How exactly can we assess how well our predictions match the actual labels for
the observations in the test set? One way we can do this is to calculate the
prediction **accuracy**. This is the fraction of examples for which the
classifier made the correct prediction. To calculate this, we divide the number
of correct predictions by the number of predictions made.
The process for assessing if our predictions match the actual labels in the
test set is illustrated in Figure [6\.2](classification2.html#fig:06-ML-paradigm-test).
\\\[\\mathrm{accuracy} \= \\frac{\\mathrm{number \\; of \\; correct \\; predictions}}{\\mathrm{total \\; number \\; of \\; predictions}}\\]
Figure 6\.2: Process for splitting the data and finding the prediction accuracy.
Accuracy is a convenient, general\-purpose way to summarize the performance of a classifier with
a single number. But prediction accuracy by itself does not tell the whole
story. In particular, accuracy alone only tells us how often the classifier
makes mistakes in general, but does not tell us anything about the *kinds* of
mistakes the classifier makes. A more comprehensive view of performance can be
obtained by additionally examining the **confusion matrix**. The confusion
matrix shows how many test set labels of each type are predicted correctly and
incorrectly, which gives us more detail about the kinds of mistakes the
classifier tends to make. Table [6\.1](classification2.html#tab:confusion-matrix) shows an example
of what a confusion matrix might look like for the tumor image data with
a test set of 65 observations.
Table 6\.1: An example confusion matrix for the tumor image data.
| | Actually Malignant | Actually Benign |
| --- | --- | --- |
| **Predicted Malignant** | 1 | 4 |
| **Predicted Benign** | 3 | 57 |
In the example in Table [6\.1](classification2.html#tab:confusion-matrix), we see that there was
1 malignant observation that was correctly classified as malignant (top left corner),
and 57 benign observations that were correctly classified as benign (bottom right corner).
However, we can also see that the classifier made some mistakes:
it classified 3 malignant observations as benign, and 4 benign observations as
malignant. The accuracy of this classifier is roughly
89%, given by the formula
\\\[\\mathrm{accuracy} \= \\frac{\\mathrm{number \\; of \\; correct \\; predictions}}{\\mathrm{total \\; number \\; of \\; predictions}} \= \\frac{1\+57}{1\+57\+4\+3} \= 0\.892\.\\]
But we can also see that the classifier only identified 1 out of 4 total malignant
tumors; in other words, it misclassified 75% of the malignant cases present in the
data set! In this example, misclassifying a malignant tumor is a potentially
disastrous error, since it may lead to a patient who requires treatment not receiving it.
Since we are particularly interested in identifying malignant cases, this
classifier would likely be unacceptable even with an accuracy of 89%.
Focusing more on one label than the other
is
common in classification problems. In such cases, we typically refer to the label we are more
interested in identifying as the *positive* label, and the other as the
*negative* label. In the tumor example, we would refer to malignant
observations as *positive*, and benign observations as *negative*. We can then
use the following terms to talk about the four kinds of prediction that the
classifier can make, corresponding to the four entries in the confusion matrix:
* **True Positive:** A malignant observation that was classified as malignant (top left in Table [6\.1](classification2.html#tab:confusion-matrix)).
* **False Positive:** A benign observation that was classified as malignant (top right in Table [6\.1](classification2.html#tab:confusion-matrix)).
* **True Negative:** A benign observation that was classified as benign (bottom right in Table [6\.1](classification2.html#tab:confusion-matrix)).
* **False Negative:** A malignant observation that was classified as benign (bottom left in Table [6\.1](classification2.html#tab:confusion-matrix)).
A perfect classifier would have zero false negatives and false positives (and
therefore, 100% accuracy). However, classifiers in practice will almost always
make some errors. So you should think about which kinds of error are most
important in your application, and use the confusion matrix to quantify and
report them. Two commonly used metrics that we can compute using the confusion
matrix are the **precision** and **recall** of the classifier. These are often
reported together with accuracy. *Precision* quantifies how many of the
positive predictions the classifier made were actually positive. Intuitively,
we would like a classifier to have a *high* precision: for a classifier with
high precision, if the classifier reports that a new observation is positive,
we can trust that the new observation is indeed positive. We can compute the
precision of a classifier using the entries in the confusion matrix, with the
formula
\\\[\\mathrm{precision} \= \\frac{\\mathrm{number \\; of \\; correct \\; positive \\; predictions}}{\\mathrm{total \\; number \\; of \\; positive \\; predictions}}.\\]
*Recall* quantifies how many of the positive observations in the test set were
identified as positive. Intuitively, we would like a classifier to have a
*high* recall: for a classifier with high recall, if there is a positive
observation in the test data, we can trust that the classifier will find it.
We can also compute the recall of the classifier using the entries in the
confusion matrix, with the formula
\\\[\\mathrm{recall} \= \\frac{\\mathrm{number \\; of \\; correct \\; positive \\; predictions}}{\\mathrm{total \\; number \\; of \\; positive \\; test \\; set \\; observations}}.\\]
In the example presented in Table [6\.1](classification2.html#tab:confusion-matrix), we have that the precision and recall are
\\\[\\mathrm{precision} \= \\frac{1}{1\+4} \= 0\.20, \\quad \\mathrm{recall} \= \\frac{1}{1\+3} \= 0\.25\.\\]
So even with an accuracy of 89%, the precision and recall of the classifier
were both relatively low. For this data analysis context, recall is
particularly important: if someone has a malignant tumor, we certainly want to
identify it. A recall of just 25% would likely be unacceptable!
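These quantities are simple to compute directly from the confusion matrix entries; a minimal sketch in R using the counts from Table 6.1:
```
# counts from the example confusion matrix
true_positives <- 1    # malignant, predicted malignant
false_positives <- 4   # benign, predicted malignant
false_negatives <- 3   # malignant, predicted benign
true_negatives <- 57   # benign, predicted benign

# accuracy (about 0.892)
(true_positives + true_negatives) /
  (true_positives + true_negatives + false_positives + false_negatives)

# precision (0.20) and recall (0.25)
true_positives / (true_positives + false_positives)
true_positives / (true_positives + false_negatives)
```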
> **Note:** It is difficult to achieve both high precision and high recall at
> the same time; models with high precision tend to have low recall and vice
> versa. As an example, we can easily make a classifier that has *perfect
> recall*: just *always* guess positive! This classifier will of course find
> every positive observation in the test set, but it will make lots of false
> positive predictions along the way and have low precision. Similarly, we can
> easily make a classifier that has *perfect precision*: *never* guess
> positive! This classifier will never incorrectly identify an observation as
> positive, but it will make a lot of false negative predictions along the way.
> In fact, this classifier will have 0% recall! Of course, most real
> classifiers fall somewhere in between these two extremes. But these examples
> serve to show that in settings where one of the classes is of interest (i.e.,
> there is a *positive* label), there is a trade\-off between precision and recall that one has to
> make when designing a classifier.
6\.4 Randomness and seeds
-------------------------
Beginning in this chapter, our data analyses will often involve the use
of *randomness*. We use randomness any time we need to make a decision in our
analysis that needs to be fair, unbiased, and not influenced by human input.
For example, in this chapter, we need to split
a data set into a training set and test set to evaluate our classifier. We
certainly do not want to choose how to split
the data ourselves by hand, as we want to avoid accidentally influencing the result
of the evaluation. So instead, we let R *randomly* split the data.
In future chapters we will use randomness
in many other ways, e.g., to help us select a small subset of data from a larger data set,
to pick groupings of data, and more.
However, the use of randomness runs counter to one of the main
tenets of good data analysis practice: *reproducibility*. Recall that a reproducible
analysis produces the same result each time it is run; if we include randomness
in the analysis, would we not get a different result each time?
The trick is that in R—and other programming languages—randomness
is not actually random! Instead, R uses a *random number generator* that
produces a sequence of numbers that
are completely determined by a
*seed value*. Once you set the seed value
using the `set.seed` function, everything after that point may *look* random,
but is actually totally reproducible. As long as you pick the same seed
value, you get the same result!
Let’s use an example to investigate how seeds work in R. Say we want
to randomly pick 10 numbers from 0 to 9 in R using the `sample` function,
but we want it to be reproducible. Before using the sample function,
we call `set.seed`, and pass it any integer as an argument.
Here, we pass in the number `1`.
```
set.seed(1)
random_numbers1 <- sample(0:9, 10, replace = TRUE)
random_numbers1
```
```
## [1] 8 3 6 0 1 6 1 2 0 4
```
You can see that `random_numbers1` is a list of 10 numbers
from 0 to 9 that, from all appearances, looks random. If
we run the `sample` function again, we will
get a fresh batch of 10 numbers that also look random.
```
random_numbers2 <- sample(0:9, 10, replace = TRUE)
random_numbers2
```
```
## [1] 4 9 5 9 6 8 4 4 8 8
```
If we want to force R to produce the same sequences of random numbers,
we can simply call the `set.seed` function again with the same argument
value.
```
set.seed(1)
random_numbers1_again <- sample(0:9, 10, replace = TRUE)
random_numbers1_again
```
```
## [1] 8 3 6 0 1 6 1 2 0 4
```
```
random_numbers2_again <- sample(0:9, 10, replace = TRUE)
random_numbers2_again
```
```
## [1] 4 9 5 9 6 8 4 4 8 8
```
Notice that after setting the seed, we get the same two sequences of numbers in the same order. `random_numbers1` and `random_numbers1_again` produce the same sequence of numbers, and the same can be said about `random_numbers2` and `random_numbers2_again`. And if we choose
a different value for the seed—say, 4235—we
obtain a different sequence of random numbers.
```
set.seed(4235)
random_numbers1_different <- sample(0:9, 10, replace = TRUE)
random_numbers1_different
```
```
## [1] 8 3 1 4 6 8 8 4 1 7
```
```
random_numbers2_different <- sample(0:9, 10, replace = TRUE)
random_numbers2_different
```
```
## [1] 3 7 8 2 8 8 6 3 3 8
```
In other words, even though the sequences of numbers that R is generating *look*
random, they are totally determined when we set a seed value!
So what does this mean for data analysis? Well, `sample` is certainly
not the only function that uses randomness in R. Many of the functions
that we use in `tidymodels`, `tidyverse`, and beyond use randomness—some of them
without even telling you about it. So at the beginning of every data analysis you
do, right after loading packages, you should call the `set.seed` function and
pass it an integer that you pick.
Also note that when R starts up, it creates its own seed to use. So if you do not
explicitly call the `set.seed` function in your code, your results will
likely not be reproducible.
And finally, be careful to set the seed *only once* at the beginning of a data
analysis. Each time you set the seed, you are inserting your own human input,
thereby influencing the analysis. If you use `set.seed` many times
throughout your analysis, the randomness that R uses will not look
as random as it should.
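For example, here is a small sketch of the problem: if we reset the seed to the same value before each call, the “random” draws are identical every time, which is exactly the kind of pattern we want to avoid.
```
# resetting the same seed before each call reproduces the same draw,
# so the two samples below are identical rather than random-looking
set.seed(1)
sample(0:9, 10, replace = TRUE)
set.seed(1)
sample(0:9, 10, replace = TRUE)
```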
In summary: if you want your analysis to be reproducible, i.e., produce *the same result* each time you
run it, make sure to use `set.seed` exactly once at the beginning of the analysis.
Different argument values in `set.seed` lead to different patterns of randomness, but as long as
you pick the same argument value your result will be the same.
In the remainder of the textbook, we will set the seed once at the beginning of each chapter.
6\.5 Evaluating performance with `tidymodels`
---------------------------------------------
Back to evaluating classifiers now!
In R, we can use the `tidymodels` package not only to perform K\-nearest neighbors
classification, but also to assess how well our classification worked.
Let’s work through an example of how to use tools from `tidymodels` to evaluate a classifier
using the breast cancer data set from the previous chapter.
We begin the analysis by loading the packages we require,
reading in the breast cancer data,
and then making a quick scatter plot visualization of
tumor cell concavity versus smoothness colored by diagnosis in Figure [6\.3](classification2.html#fig:06-precode).
You will also notice that we set the random seed here at the beginning of the analysis
using the `set.seed` function, as described in Section [6\.4](classification2.html#randomseeds).
```
# load packages
library(tidyverse)
library(tidymodels)
# set the seed
set.seed(1)
# load data
cancer <- read_csv("data/wdbc_unscaled.csv") |>
# convert the character Class variable to the factor datatype
mutate(Class = as_factor(Class)) |>
# rename the factor values to be more readable
mutate(Class = fct_recode(Class, "Malignant" = "M", "Benign" = "B"))
# create scatter plot of tumor cell concavity versus smoothness,
# labeling the points by diagnosis class
perim_concav <- cancer |>
ggplot(aes(x = Smoothness, y = Concavity, color = Class)) +
geom_point(alpha = 0.5) +
labs(color = "Diagnosis") +
scale_color_manual(values = c("darkorange", "steelblue")) +
theme(text = element_text(size = 12))
perim_concav
```
Figure 6\.3: Scatter plot of tumor cell concavity versus smoothness colored by diagnosis label.
### 6\.5\.1 Create the train / test split
Once we have decided on a predictive question to answer and done some
preliminary exploration, the very next thing to do is to split the data into
the training and test sets. Typically, the training set is between 50% and 95% of
the data, while the test set is the remaining 5% to 50%; the intuition is that
you want to trade off between training an accurate model (by using a larger
training data set) and getting an accurate evaluation of its performance (by
using a larger test data set). Here, we will use 75% of the data for training,
and 25% for testing.
The `initial_split` function from `tidymodels` handles the procedure of splitting
the data for us. It also applies two very important steps when splitting to ensure
that the accuracy estimates from the test data are reasonable. First, it
**shuffles** the data before splitting, which ensures that any ordering present
in the data does not influence the data that ends up in the training and testing sets.
Second, it **stratifies** the data by the class label, to ensure that roughly
the same proportion of each class ends up in both the training and testing sets. For example,
in our data set, roughly 63% of the
observations are from the benign class, and 37% are from the malignant class,
so `initial_split` ensures that roughly 63% of the training data are benign,
37% of the training data are malignant,
and the same proportions exist in the testing data.
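As a quick sanity check, a small sketch (using the same `group_by` and `summarize` pattern that appears later in this chapter) verifies these proportions in the full data set before splitting:
```
# class proportions in the full (unsplit) cancer data set
cancer |>
  group_by(Class) |>
  summarize(n = n()) |>
  mutate(percent = 100 * n / nrow(cancer))
```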
Let’s use the `initial_split` function to create the training and testing sets.
We will specify that `prop = 0.75` so that 75% of our original data set ends up
in the training set. We will also set the `strata` argument to the categorical label variable
(here, `Class`) to ensure that the training and testing subsets contain the
right proportions of each category of observation.
The `training` and `testing` functions then extract the training and testing
data sets into two separate data frames.
Note that the `initial_split` function uses randomness, but since we set the
seed earlier in the chapter, the split will be reproducible.
```
cancer_split <- initial_split(cancer, prop = 0.75, strata = Class)
cancer_train <- training(cancer_split)
cancer_test <- testing(cancer_split)
```
```
glimpse(cancer_train)
```
```
## Rows: 426
## Columns: 12
## $ ID <dbl> 8510426, 8510653, 8510824, 854941, 85713702, 857155,…
## $ Class <fct> Benign, Benign, Benign, Benign, Benign, Benign, Beni…
## $ Radius <dbl> 13.540, 13.080, 9.504, 13.030, 8.196, 12.050, 13.490…
## $ Texture <dbl> 14.36, 15.71, 12.44, 18.42, 16.84, 14.63, 22.30, 21.…
## $ Perimeter <dbl> 87.46, 85.63, 60.34, 82.61, 51.71, 78.04, 86.91, 74.…
## $ Area <dbl> 566.3, 520.0, 273.9, 523.8, 201.9, 449.3, 561.0, 427…
## $ Smoothness <dbl> 0.09779, 0.10750, 0.10240, 0.08983, 0.08600, 0.10310…
## $ Compactness <dbl> 0.08129, 0.12700, 0.06492, 0.03766, 0.05943, 0.09092…
## $ Concavity <dbl> 0.066640, 0.045680, 0.029560, 0.025620, 0.015880, 0.…
## $ Concave_Points <dbl> 0.047810, 0.031100, 0.020760, 0.029230, 0.005917, 0.…
## $ Symmetry <dbl> 0.1885, 0.1967, 0.1815, 0.1467, 0.1769, 0.1675, 0.18…
## $ Fractal_Dimension <dbl> 0.05766, 0.06811, 0.06905, 0.05863, 0.06503, 0.06043…
```
```
glimpse(cancer_test)
```
```
## Rows: 143
## Columns: 12
## $ ID <dbl> 842517, 84300903, 84501001, 84610002, 848406, 848620…
## $ Class <fct> Malignant, Malignant, Malignant, Malignant, Malignan…
## $ Radius <dbl> 20.570, 19.690, 12.460, 15.780, 14.680, 16.130, 19.8…
## $ Texture <dbl> 17.77, 21.25, 24.04, 17.89, 20.13, 20.68, 22.15, 14.…
## $ Perimeter <dbl> 132.90, 130.00, 83.97, 103.60, 94.74, 108.10, 130.00…
## $ Area <dbl> 1326.0, 1203.0, 475.9, 781.0, 684.5, 798.8, 1260.0, …
## $ Smoothness <dbl> 0.08474, 0.10960, 0.11860, 0.09710, 0.09867, 0.11700…
## $ Compactness <dbl> 0.07864, 0.15990, 0.23960, 0.12920, 0.07200, 0.20220…
## $ Concavity <dbl> 0.08690, 0.19740, 0.22730, 0.09954, 0.07395, 0.17220…
## $ Concave_Points <dbl> 0.070170, 0.127900, 0.085430, 0.066060, 0.052590, 0.…
## $ Symmetry <dbl> 0.1812, 0.2069, 0.2030, 0.1842, 0.1586, 0.2164, 0.15…
## $ Fractal_Dimension <dbl> 0.05667, 0.05999, 0.08243, 0.06082, 0.05922, 0.07356…
```
We can see from `glimpse` in the code above that the training set contains 426
observations, while the test set contains 143 observations. This corresponds to
a train / test split of 75% / 25%, as desired. Recall from Chapter [5](classification1.html#classification1)
that we use the `glimpse` function to view data with a large number of columns,
as it prints the data such that the columns go down the page (instead of across).
We can use `group_by` and `summarize` to find the percentage of malignant and benign classes
in `cancer_train` and we see about 63% of the training
data are benign and 37%
are malignant, indicating that our class proportions were roughly preserved when we split the data.
```
cancer_proportions <- cancer_train |>
group_by(Class) |>
summarize(n = n()) |>
mutate(percent = 100*n/nrow(cancer_train))
cancer_proportions
```
```
## # A tibble: 2 × 3
## Class n percent
## <fct> <int> <dbl>
## 1 Malignant 159 37.3
## 2 Benign 267 62.7
```
### 6\.5\.2 Preprocess the data
As we mentioned in the last chapter, K\-nearest neighbors is sensitive to the scale of the predictors,
so we should perform some preprocessing to standardize them. An
additional consideration when doing this is that we should
create the standardization preprocessor using **only the training data**. This ensures that
our test data does not influence any aspect of our model training. Once we have
created the standardization preprocessor, we can then apply it separately to both the
training and test data sets.
Fortunately, the `recipe` framework from `tidymodels` helps us handle
this properly. Below we construct and prepare the recipe using only the training
data (due to `data = cancer_train` in the first line).
```
cancer_recipe <- recipe(Class ~ Smoothness + Concavity, data = cancer_train) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
```
### 6\.5\.3 Train the classifier
Now that we have split our original data set into training and test sets, we
can create our K\-nearest neighbors classifier with only the training set using
the technique we learned in the previous chapter. For now, we will just choose
the number \\(K\\) of neighbors to be 3, and use concavity and smoothness as the
predictors. As before we need to create a model specification, combine
the model specification and recipe into a workflow, and then finally
use `fit` with the training data `cancer_train` to build the classifier.
```
knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = 3) |>
set_engine("kknn") |>
set_mode("classification")
knn_fit <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
fit(data = cancer_train)
knn_fit
```
```
## ══ Workflow [trained] ══════════
## Preprocessor: Recipe
## Model: nearest_neighbor()
##
## ── Preprocessor ──────────
## 2 Recipe Steps
##
## • step_scale()
## • step_center()
##
## ── Model ──────────
##
## Call:
## kknn::train.kknn(formula = ..y ~ ., data = data, ks = min_rows(3, data, 5),
## kernel = ~"rectangular")
##
## Type of response variable: nominal
## Minimal misclassification: 0.1126761
## Best kernel: rectangular
## Best k: 3
```
### 6\.5\.4 Predict the labels in the test set
Now that we have a K\-nearest neighbors classifier object, we can use it to
predict the class labels for our test set. We use the `bind_cols` function to add the
column of predictions to the original test data, creating the
`cancer_test_predictions` data frame. The `Class` variable contains the actual
diagnoses, while the `.pred_class` variable contains the predicted diagnoses from the
classifier.
```
cancer_test_predictions <- predict(knn_fit, cancer_test) |>
bind_cols(cancer_test)
cancer_test_predictions
```
```
## # A tibble: 143 × 13
## .pred_class ID Class Radius Texture Perimeter Area Smoothness
## <fct> <dbl> <fct> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Benign 842517 Malignant 20.6 17.8 133. 1326 0.0847
## 2 Malignant 84300903 Malignant 19.7 21.2 130 1203 0.110
## 3 Malignant 84501001 Malignant 12.5 24.0 84.0 476. 0.119
## 4 Malignant 84610002 Malignant 15.8 17.9 104. 781 0.0971
## 5 Benign 848406 Malignant 14.7 20.1 94.7 684. 0.0987
## 6 Malignant 84862001 Malignant 16.1 20.7 108. 799. 0.117
## 7 Malignant 849014 Malignant 19.8 22.2 130 1260 0.0983
## 8 Malignant 8511133 Malignant 15.3 14.3 102. 704. 0.107
## 9 Malignant 852552 Malignant 16.6 21.4 110 905. 0.112
## 10 Malignant 853612 Malignant 11.8 18.7 77.9 441. 0.111
## # ℹ 133 more rows
## # ℹ 5 more variables: Compactness <dbl>, Concavity <dbl>, Concave_Points <dbl>,
## # Symmetry <dbl>, Fractal_Dimension <dbl>
```
### 6\.5\.5 Evaluate performance
Finally, we can assess our classifier’s performance. First, we will examine
accuracy. To do this we use the
`metrics` function from `tidymodels`,
specifying the `truth` and `estimate` arguments:
```
cancer_test_predictions |>
metrics(truth = Class, estimate = .pred_class) |>
filter(.metric == "accuracy")
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 accuracy binary 0.853
```
In the metrics data frame, we filtered the `.metric` column since we are
interested in the `accuracy` row. Other entries involve other metrics that
are beyond the scope of this book. Looking at the value of the `.estimate` variable
shows that the estimated accuracy of the classifier on the test data
was 85%.
To compute the precision and recall, we can use the `precision` and `recall` functions
from `tidymodels`. We first check the order of the
labels in the `Class` variable using the `levels` function:
```
cancer_test_predictions |> pull(Class) |> levels()
```
```
## [1] "Malignant" "Benign"
```
This shows that `"Malignant"` is the first level. Therefore we will set
the `truth` and `estimate` arguments to `Class` and `.pred_class` as before,
but also specify that the “positive” class corresponds to the first factor level via `event_level="first"`.
If the labels were in the other order, we would instead use `event_level="second"`.
```
cancer_test_predictions |>
precision(truth = Class, estimate = .pred_class, event_level = "first")
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 precision binary 0.767
```
```
cancer_test_predictions |>
recall(truth = Class, estimate = .pred_class, event_level = "first")
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 recall binary 0.868
```
The output shows that the estimated precision and recall of the classifier on the test data were
77% and 87%, respectively.
Finally, we can look at the *confusion matrix* for the classifier using the `conf_mat` function.
```
confusion <- cancer_test_predictions |>
conf_mat(truth = Class, estimate = .pred_class)
confusion
```
```
## Truth
## Prediction Malignant Benign
## Malignant 46 14
## Benign 7 76
```
The confusion matrix shows 46 observations were correctly predicted
as malignant, and 76 were correctly predicted as benign.
It also shows that the classifier made some mistakes; in particular,
it classified 7 observations as benign when they were actually malignant,
and 14 observations as malignant when they were actually benign.
Using our formulas from earlier, we see that the accuracy, precision, and recall agree with what R reported.
\\\[\\mathrm{accuracy} \= \\frac{\\mathrm{number \\; of \\; correct \\; predictions}}{\\mathrm{total \\; number \\; of \\; predictions}} \= \\frac{46\+76}{46\+76\+14\+7} \= 0\.853\\]
\\\[\\mathrm{precision} \= \\frac{\\mathrm{number \\; of \\; correct \\; positive \\; predictions}}{\\mathrm{total \\; number \\; of \\; positive \\; predictions}} \= \\frac{46}{46 \+ 14} \= 0\.767\\]
\\\[\\mathrm{recall} \= \\frac{\\mathrm{number \\; of \\; correct \\; positive \\; predictions}}{\\mathrm{total \\; number \\; of \\; positive \\; test \\; set \\; observations}} \= \\frac{46}{46\+7} \= 0\.868\\]
### 6\.5\.6 Critically analyze performance
We now know that the classifier was 85% accurate
on the test data set, and had a precision of 77% and a recall of 87%.
That sounds pretty good! Wait, *is* it good? Or do we need something higher?
In general, a *good* value for accuracy (as well as precision and recall, if applicable)
depends on the application; you must critically analyze your accuracy in the context of the problem
you are solving. For example, if we were building a classifier for a kind of tumor that is benign 99%
of the time, a classifier with 99% accuracy is not terribly impressive (just always guess benign!).
And beyond just accuracy, we need to consider the precision and recall: as mentioned
earlier, the *kind* of mistake the classifier makes is
important in many applications as well. In the previous example with 99% benign observations, it might be very bad for the
classifier to predict “benign” when the actual class is “malignant” (a false negative), as this
might result in a patient not receiving appropriate medical attention. In other
words, in this context, we need the classifier to have a *high recall*. On the
other hand, it might be less bad for the classifier to guess “malignant” when
the actual class is “benign” (a false positive), as the patient will then likely see a doctor who
can provide an expert diagnosis. In other words, we are fine with sacrificing
some precision in the interest of achieving high recall. This is why it is
important not only to look at accuracy, but also the confusion matrix.
However, there is always an easy baseline that you can compare to for any
classification problem: the *majority classifier*. The majority classifier
*always* guesses the majority class label from the training data, regardless of
the predictor variables’ values. It helps to give you a sense of
scale when considering accuracies. If the majority classifier obtains a 90%
accuracy on a problem, then you might hope for your K\-nearest neighbors
classifier to do better than that. If your classifier provides a significant
improvement upon the majority classifier, this means that at least your method
is extracting some useful information from your predictor variables. Be
careful though: improving on the majority classifier does not *necessarily*
mean the classifier is working well enough for your application.
As an example, in the breast cancer data, recall that the proportions of benign and malignant
observations in the training data are as follows:
```
cancer_proportions
```
```
## # A tibble: 2 × 3
## Class n percent
## <fct> <int> <dbl>
## 1 Malignant 159 37.3
## 2 Benign 267 62.7
```
Since the benign class represents the majority of the training data,
the majority classifier would *always* predict that a new observation
is benign. The estimated accuracy of the majority classifier is usually
fairly close to the majority class proportion in the training data.
In this case, we would suspect that the majority classifier will have
an accuracy of around 63%.
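To see this directly, one option (a sketch, not part of the original analysis; the name `majority_predictions` is just illustrative) is to score a classifier that always predicts `Benign` on the test set:
```
# predict Benign for every test observation and compute the accuracy
majority_predictions <- cancer_test |>
  mutate(.pred_class = factor("Benign", levels = levels(Class)))
majority_predictions |>
  metrics(truth = Class, estimate = .pred_class) |>
  filter(.metric == "accuracy")
```
The resulting estimate is simply the proportion of benign observations in the test set, i.e., close to the 63% figure above.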
The K\-nearest neighbors classifier we built does quite a bit better than this,
with an accuracy of 85%.
This means that from the perspective of accuracy,
the K\-nearest neighbors classifier improved quite a bit on the basic
majority classifier. Hooray! But we still need to be cautious; in
this application, it is likely very important not to misdiagnose any malignant tumors to avoid missing
patients who actually need medical care. The confusion matrix above shows
that the classifier does, indeed, misdiagnose a significant number of malignant tumors as benign (7
out of 53 malignant tumors, or 13%!).
Therefore, even though the accuracy improved upon the majority classifier,
our critical analysis suggests that this classifier may not have appropriate performance
for the application.
6\.6 Tuning the classifier
--------------------------
The vast majority of predictive models in statistics and machine learning have
*parameters*. A *parameter*
is a number you have to pick in advance that determines
some aspect of how the model behaves. For example, in the K\-nearest neighbors
classification algorithm, \\(K\\) is a parameter that we have to pick
that determines how many neighbors participate in the class vote.
By picking different values of \\(K\\), we create different classifiers
that make different predictions.
So then, how do we pick the *best* value of \\(K\\), i.e., *tune* the model?
And is it possible to make this selection in a principled way? In this book,
we will focus on maximizing the accuracy of the classifier. Ideally,
we want somehow to maximize the accuracy of our classifier on data *it
hasn’t seen yet*. But we cannot use our test data set in the process of building
our model. So we will play the same trick we did before when evaluating
our classifier: we’ll split our *training data itself* into two subsets,
use one to train the model, and then use the other to evaluate it.
In this section, we will cover the details of this procedure, as well as
how to use it to help you pick a good parameter value for your classifier.
**And remember:** don’t touch the test set during the tuning process. Tuning is a part of model training!
### 6\.6\.1 Cross\-validation
The first step in choosing the parameter \\(K\\) is to be able to evaluate the
classifier using only the training data. If this is possible, then we can compare
the classifier’s performance for different values of \\(K\\)—and pick the best—using
only the training data. As suggested at the beginning of this section, we will
accomplish this by splitting the training data, training on one subset, and evaluating
on the other. The subset of training data used for evaluation is often called the **validation set**.
There is, however, one key difference from the train/test split
that we performed earlier. In particular, we were forced to make only a *single split*
of the data. This is because at the end of the day, we have to produce a single classifier.
If we had multiple different splits of the data into training and testing data,
we would produce multiple different classifiers.
But while we are tuning the classifier, we are free to create multiple classifiers
based on multiple splits of the training data, evaluate them, and then choose a parameter
value based on ***all*** of the different results. If we just split our overall training
data *once*, our best parameter choice will depend strongly on whatever data
was lucky enough to end up in the validation set. Perhaps using multiple
different train/validation splits, we’ll get a better estimate of accuracy,
which will lead to a better choice of the number of neighbors \\(K\\) for the
overall set of training data.
Let’s investigate this idea in R! In particular, we will generate five different train/validation
splits of our overall training data, train five different K\-nearest neighbors
models, and evaluate their accuracy. We will start with just a single
split.
```
# create the 75/25 split of the training data into training and validation
cancer_split <- initial_split(cancer_train, prop = 0.75, strata = Class)
cancer_subtrain <- training(cancer_split)
cancer_validation <- testing(cancer_split)
# recreate the standardization recipe from before
# (since it must be based on the training data)
cancer_recipe <- recipe(Class ~ Smoothness + Concavity,
data = cancer_subtrain) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
# fit the knn model (we can reuse the old knn_spec model from before)
knn_fit <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
fit(data = cancer_subtrain)
# get predictions on the validation data
validation_predicted <- predict(knn_fit, cancer_validation) |>
bind_cols(cancer_validation)
# compute the accuracy
acc <- validation_predicted |>
metrics(truth = Class, estimate = .pred_class) |>
filter(.metric == "accuracy") |>
select(.estimate) |>
pull()
acc
```
```
## [1] 0.8598131
```
The accuracy estimate using this split is 86%.
Now we repeat the above code 4 more times, which generates 4 more splits.
This gives us five different shuffles of the data, and therefore five different values for
accuracy: 86\.0%, 89\.7%, 88\.8%, 86\.0%, 86\.9%. None of these values are
necessarily “more correct” than any other; they’re
just five estimates of the true, underlying accuracy of our classifier built
using our overall training data. We can combine the estimates by taking their
average (here 87%) to try to get a single assessment of our
classifier’s accuracy; this has the effect of reducing the influence of any one
(un)lucky validation set on the estimate.
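For example, a minimal sketch of combining the five estimates reported above:
```
# average the five validation accuracy estimates listed above
accs <- c(0.860, 0.897, 0.888, 0.860, 0.869)
mean(accs)
```
This evaluates to roughly 0\.87, the averaged value quoted above.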
In practice, we don’t use random splits, but rather use a more structured
splitting procedure so that each observation in the data set is used in a
validation set only a single time. The name for this strategy is
**cross\-validation**. In **cross\-validation**, we split our **overall training
data** into \\(C\\) evenly sized chunks. Then, we iteratively use \\(1\\) chunk as the
**validation set** and combine the remaining \\(C\-1\\) chunks
as the **training set**.
This procedure is shown in Figure [6\.4](classification2.html#fig:06-cv-image).
Here, \\(C\=5\\) different chunks of the data set are used,
resulting in 5 different choices for the **validation set**; we call this
*5\-fold* cross\-validation.
Figure 6\.4: 5\-fold cross\-validation.
To perform 5\-fold cross\-validation in R with `tidymodels`, we use another
function: `vfold_cv`. This function splits our training data into `v` folds
automatically. We set the `strata` argument to the categorical label variable
(here, `Class`) to ensure that the training and validation subsets contain the
right proportions of each category of observation.
```
cancer_vfold <- vfold_cv(cancer_train, v = 5, strata = Class)
cancer_vfold
```
```
## # 5-fold cross-validation using stratification
## # A tibble: 5 × 2
## splits id
## <list> <chr>
## 1 <split [340/86]> Fold1
## 2 <split [340/86]> Fold2
## 3 <split [341/85]> Fold3
## 4 <split [341/85]> Fold4
## 5 <split [342/84]> Fold5
```
Then, when we create our data analysis workflow, we use the `fit_resamples` function
instead of the `fit` function for training. This runs cross\-validation on each
train/validation split.
```
# recreate the standardization recipe from before
# (since it must be based on the training data)
cancer_recipe <- recipe(Class ~ Smoothness + Concavity,
data = cancer_train) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
# fit the knn model (we can reuse the old knn_spec model from before)
knn_fit <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
fit_resamples(resamples = cancer_vfold)
knn_fit
```
```
## # Resampling results
## # 5-fold cross-validation using stratification
## # A tibble: 5 × 4
## splits id .metrics .notes
## <list> <chr> <list> <list>
## 1 <split [340/86]> Fold1 <tibble [2 × 4]> <tibble [0 × 3]>
## 2 <split [340/86]> Fold2 <tibble [2 × 4]> <tibble [0 × 3]>
## 3 <split [341/85]> Fold3 <tibble [2 × 4]> <tibble [0 × 3]>
## 4 <split [341/85]> Fold4 <tibble [2 × 4]> <tibble [0 × 3]>
## 5 <split [342/84]> Fold5 <tibble [2 × 4]> <tibble [0 × 3]>
```
The `collect_metrics` function is used to aggregate the *mean* and *standard error*
of the classifier’s validation accuracy across the folds. You will find results
related to the accuracy in the row with `accuracy` listed under the `.metric` column.
You should consider the mean (`mean`) to be the estimated accuracy, while the standard
error (`std_err`) is a measure of how uncertain we are in the mean value. A detailed treatment of this
is beyond the scope of this chapter; but roughly, if your estimated mean is 0\.89 and standard
error is 0\.02, you can expect the *true* average accuracy of the
classifier to be somewhere roughly between 87% and 91% (although it may
fall outside this range). You may ignore the other columns in the metrics data frame,
as they do not provide any additional insight.
You can also ignore the entire second row with `roc_auc` in the `.metric` column,
as it is beyond the scope of this book.
```
knn_fit |>
collect_metrics()
```
```
## # A tibble: 2 × 6
## .metric .estimator mean n std_err .config
## <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 accuracy binary 0.890 5 0.0180 Preprocessor1_Model1
## 2 roc_auc binary 0.925 5 0.0151 Preprocessor1_Model1
```
We can choose any number of folds, and typically the more we use the better our
accuracy estimate will be (lower standard error). However, we are limited
by computational power: the
more folds we choose, the more computation it takes, and hence the more time
it takes to run the analysis. So when you do cross\-validation, you need to
consider the size of the data, the speed of the algorithm (e.g., K\-nearest
neighbors), and the speed of your computer. In practice, this is a
trial\-and\-error process, but typically \\(C\\) is chosen to be either 5 or 10\. Here
we will try 10\-fold cross\-validation to see if we get a lower standard error:
```
cancer_vfold <- vfold_cv(cancer_train, v = 10, strata = Class)
vfold_metrics <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
fit_resamples(resamples = cancer_vfold) |>
collect_metrics()
vfold_metrics
```
```
## # A tibble: 2 × 6
## .metric .estimator mean n std_err .config
## <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 accuracy binary 0.890 10 0.0127 Preprocessor1_Model1
## 2 roc_auc binary 0.913 10 0.0150 Preprocessor1_Model1
```
In this case, using 10\-fold instead of 5\-fold cross\-validation did reduce the standard error, although
by only an insignificant amount. In fact, due to the randomness in how the data are split, sometimes
you might even end up with a *higher* standard error when increasing the number of folds!
We can make the reduction in standard error more dramatic by increasing the number of folds
by a large amount. In the following code we show the result when \\(C \= 50\\);
picking such a large number of folds often takes a long time to run in practice,
so we usually stick to 5 or 10\.
```
cancer_vfold_50 <- vfold_cv(cancer_train, v = 50, strata = Class)
vfold_metrics_50 <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
fit_resamples(resamples = cancer_vfold_50) |>
collect_metrics()
vfold_metrics_50
```
```
## # A tibble: 2 × 6
## .metric .estimator mean n std_err .config
## <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 accuracy binary 0.884 50 0.00568 Preprocessor1_Model1
## 2 roc_auc binary 0.926 50 0.0148 Preprocessor1_Model1
```
### 6\.6\.2 Parameter value selection
Using 5\- and 10\-fold cross\-validation, we have estimated that the prediction
accuracy of our classifier is somewhere around 89%.
Whether that is good or not
depends entirely on the downstream application of the data analysis. In the
present situation, we are trying to predict a tumor diagnosis, with expensive,
damaging chemo/radiation therapy or patient death as potential consequences of
misprediction. Hence, we might like to
do better than 89% for this application.
In order to improve our classifier, we have one choice of parameter: the number of
neighbors, \\(K\\). Since cross\-validation helps us evaluate the accuracy of our
classifier, we can use cross\-validation to calculate an accuracy for each value
of \\(K\\) in a reasonable range, and then pick the value of \\(K\\) that gives us the
best accuracy. The `tidymodels` package collection provides a very simple
syntax for tuning models: each parameter in the model to be tuned should be specified
as `tune()` in the model specification rather than given a particular value.
```
knn_spec <- nearest_neighbor(weight_func = "rectangular",
neighbors = tune()) |>
set_engine("kknn") |>
set_mode("classification")
```
Then instead of using `fit` or `fit_resamples`, we will use the `tune_grid` function
to fit the model for each value in a range of parameter values.
In particular, we first create a data frame with a `neighbors`
variable that contains the sequence of values of \\(K\\) to try; below we create the `k_vals`
data frame with the `neighbors` variable containing values from 1 to 100 (stepping by 5\) using
the `seq` function.
Then we pass that data frame to the `grid` argument of `tune_grid`.
```
k_vals <- tibble(neighbors = seq(from = 1, to = 100, by = 5))
knn_results <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
tune_grid(resamples = cancer_vfold, grid = k_vals) |>
collect_metrics()
accuracies <- knn_results |>
filter(.metric == "accuracy")
accuracies
```
```
## # A tibble: 20 × 7
## neighbors .metric .estimator mean n std_err .config
## <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 1 accuracy binary 0.866 10 0.0165 Preprocessor1_Model01
## 2 6 accuracy binary 0.890 10 0.0153 Preprocessor1_Model02
## 3 11 accuracy binary 0.887 10 0.0173 Preprocessor1_Model03
## 4 16 accuracy binary 0.887 10 0.0142 Preprocessor1_Model04
## 5 21 accuracy binary 0.887 10 0.0143 Preprocessor1_Model05
## 6 26 accuracy binary 0.887 10 0.0170 Preprocessor1_Model06
## 7 31 accuracy binary 0.897 10 0.0145 Preprocessor1_Model07
## 8 36 accuracy binary 0.899 10 0.0144 Preprocessor1_Model08
## 9 41 accuracy binary 0.892 10 0.0135 Preprocessor1_Model09
## 10 46 accuracy binary 0.892 10 0.0156 Preprocessor1_Model10
## 11 51 accuracy binary 0.890 10 0.0155 Preprocessor1_Model11
## 12 56 accuracy binary 0.873 10 0.0156 Preprocessor1_Model12
## 13 61 accuracy binary 0.876 10 0.0104 Preprocessor1_Model13
## 14 66 accuracy binary 0.871 10 0.0139 Preprocessor1_Model14
## 15 71 accuracy binary 0.876 10 0.0104 Preprocessor1_Model15
## 16 76 accuracy binary 0.873 10 0.0127 Preprocessor1_Model16
## 17 81 accuracy binary 0.876 10 0.0135 Preprocessor1_Model17
## 18 86 accuracy binary 0.873 10 0.0131 Preprocessor1_Model18
## 19 91 accuracy binary 0.873 10 0.0140 Preprocessor1_Model19
## 20 96 accuracy binary 0.866 10 0.0126 Preprocessor1_Model20
```
We can decide which number of neighbors is best by plotting the accuracy versus \\(K\\),
as shown in Figure [6\.5](classification2.html#fig:06-find-k).
```
accuracy_vs_k <- ggplot(accuracies, aes(x = neighbors, y = mean)) +
geom_point() +
geom_line() +
labs(x = "Neighbors", y = "Accuracy Estimate") +
theme(text = element_text(size = 12))
accuracy_vs_k
```
Figure 6\.5: Plot of estimated accuracy versus the number of neighbors.
We can also obtain the number of neighbors with the highest accuracy
programmatically by accessing the `neighbors` variable in the `accuracies` data
frame where the `mean` variable is highest.
Note that it is still useful to visualize the results as
we did above since this provides additional information on how the model
performance varies.
```
best_k <- accuracies |>
arrange(desc(mean)) |>
head(1) |>
pull(neighbors)
best_k
```
```
## [1] 36
```
Setting the number of
neighbors to \\(K \=\\) 36
provides the highest cross\-validation accuracy estimate (89\.89%). But there is no exact or perfect answer here;
any selection between \\(K \= 30\\) and \\(60\\) would be reasonably justified, as all
of these differ in classifier accuracy by a small amount. Remember: the
values you see on this plot are *estimates* of the true accuracy of our
classifier. Although the \\(K \=\\) 36 value is higher than the others on this plot,
that doesn’t mean the classifier is actually more accurate with this parameter
value! Generally, when selecting \\(K\\) (and other parameters for other predictive
models), we are looking for a value where:
* we get roughly optimal accuracy, so that our model will likely be accurate;
* changing the value to a nearby one (e.g., adding or subtracting a small number) doesn’t decrease accuracy too much, so that our choice is reliable in the presence of uncertainty;
* the cost of training the model is not prohibitive (e.g., in our situation, if \\(K\\) is too large, predicting becomes expensive!).
We know that \\(K \=\\) 36
provides the highest estimated accuracy. Further, Figure [6\.5](classification2.html#fig:06-find-k) shows that the estimated accuracy
changes by only a small amount if we increase or decrease \\(K\\) near \\(K \=\\) 36\.
And finally, \\(K \=\\) 36 does not create a prohibitively expensive
computational cost of training. Considering these three points, we would indeed select
\\(K \=\\) 36 for the classifier.
### 6\.6\.3 Under/Overfitting
To build a bit more intuition, what happens if we keep increasing the number of
neighbors \\(K\\)? In fact, the accuracy actually starts to decrease!
Let’s specify a much larger range of values of \\(K\\) to try in the `grid`
argument of `tune_grid`. Figure [6\.6](classification2.html#fig:06-lots-of-ks) shows a plot of estimated accuracy as
we vary \\(K\\) from 1 to almost the number of observations in the training set.
```
k_lots <- tibble(neighbors = seq(from = 1, to = 385, by = 10))
knn_results <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
tune_grid(resamples = cancer_vfold, grid = k_lots) |>
collect_metrics()
accuracies_lots <- knn_results |>
filter(.metric == "accuracy")
accuracy_vs_k_lots <- ggplot(accuracies_lots, aes(x = neighbors, y = mean)) +
geom_point() +
geom_line() +
labs(x = "Neighbors", y = "Accuracy Estimate") +
theme(text = element_text(size = 12))
accuracy_vs_k_lots
```
Figure 6\.6: Plot of accuracy estimate versus number of neighbors for many K values.
**Underfitting:** What is actually happening to our classifier that causes
this? As we increase the number of neighbors, more and more of the training
observations (and those that are farther and farther away from the point) get a
“say” in what the class of a new observation is. This causes a sort of
“averaging effect” to take place, making the boundary between where our
classifier would predict a tumor to be malignant versus benign smooth out
and become *simpler.* If you take this to the extreme, setting \\(K\\) to the total
training data set size, then the classifier will always predict the same label
regardless of what the new observation looks like. In general, if the model
*isn’t influenced enough* by the training data, it is said to **underfit** the
data.
**Overfitting:** In contrast, when we decrease the number of neighbors, each
individual data point has a stronger and stronger vote regarding nearby points.
Since the data themselves are noisy, this causes a more “jagged” boundary
corresponding to a *less simple* model. If you take this case to the extreme,
setting \\(K \= 1\\), then the classifier is essentially just matching each new
observation to its closest neighbor in the training data set. This is just as
problematic as the large \\(K\\) case, because the classifier becomes unreliable on
new data: if we had a different training set, the predictions would be
completely different. In general, if the model *is influenced too much* by the
training data, it is said to **overfit** the data.
Figure 6\.7: Effect of K in overfitting and underfitting.
Both overfitting and underfitting are problematic and will lead to a model
that does not generalize well to new data. When fitting a model, we need to strike
a balance between the two. You can see these two effects in Figure
[6\.7](classification2.html#fig:06-decision-grid-K), which shows how the classifier changes as
we set the number of neighbors \\(K\\) to 1, 7, 20, and 300\.
### 6\.6\.4 Evaluating on the test set
Now that we have tuned the K\-NN classifier and set \\(K \=\\) 36,
we are done building the model and it is time to evaluate the quality of its predictions on the held out
test data, as we did earlier in Section [6\.5\.5](classification2.html#eval-performance-cls2).
We first need to retrain the K\-NN classifier
on the entire training data set using the selected number of neighbors.
```
cancer_recipe <- recipe(Class ~ Smoothness + Concavity, data = cancer_train) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = best_k) |>
set_engine("kknn") |>
set_mode("classification")
knn_fit <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
fit(data = cancer_train)
knn_fit
```
```
## ══ Workflow [trained] ══════════════════════════════════════════════════════════
## Preprocessor: Recipe
## Model: nearest_neighbor()
##
## ── Preprocessor ────────────────────────────────────────────────────────────────
## 2 Recipe Steps
##
## • step_scale()
## • step_center()
##
## ── Model ───────────────────────────────────────────────────────────────────────
##
## Call:
## kknn::train.kknn(formula = ..y ~ ., data = data, ks = min_rows(36, data, 5), kernel = ~"rectangular")
##
## Type of response variable: nominal
## Minimal misclassification: 0.1150235
## Best kernel: rectangular
## Best k: 36
```
Then to make predictions and assess the estimated accuracy of the best model on the test data, we use the
`predict` and `metrics` functions as we did earlier in the chapter. We can then pass those predictions to
the `precision`, `recall`, and `conf_mat` functions to assess the estimated precision and recall, and print a confusion matrix.
```
cancer_test_predictions <- predict(knn_fit, cancer_test) |>
bind_cols(cancer_test)
cancer_test_predictions |>
metrics(truth = Class, estimate = .pred_class) |>
filter(.metric == "accuracy")
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 accuracy binary 0.860
```
```
cancer_test_predictions |>
precision(truth = Class, estimate = .pred_class, event_level="first")
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 precision binary 0.8
```
```
cancer_test_predictions |>
recall(truth = Class, estimate = .pred_class, event_level="first")
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 recall binary 0.830
```
```
confusion <- cancer_test_predictions |>
conf_mat(truth = Class, estimate = .pred_class)
confusion
```
```
## Truth
## Prediction Malignant Benign
## Malignant 44 11
## Benign 9 79
```
At first glance, this is a bit surprising: the accuracy of the classifier
has only changed a small amount despite tuning the number of neighbors! Our first model
with \\(K \=\\) 3 (before we knew how to tune) had an estimated accuracy of 85%,
while the tuned model with \\(K \=\\) 36 had an estimated accuracy
of 86%.
Upon examining Figure [6\.5](classification2.html#fig:06-find-k) again to see the
cross\-validation accuracy estimates for a range of neighbors, this result
becomes much less surprising. From 1 to around 96 neighbors, the cross\-validation
accuracy estimate varies only by around 3%, with
each estimate having a standard error around 1%.
Since the cross\-validation accuracy estimates the test set accuracy,
the fact that the test set accuracy also doesn’t change much is expected.
Also note that the \\(K \=\\) 3 model had a
precision of 77% and recall of 87%,
while the tuned model had
a precision of 80% and recall of 83%.
Given that the recall decreased—remember, in this application, recall
is critical to making sure we find all the patients with malignant tumors—the tuned model may actually be *less* preferred
in this setting. In any case, it is important to think critically about the result of tuning. Models tuned to
maximize accuracy are not necessarily better for a given application.
6\.7 Summary
------------
Classification algorithms use one or more quantitative variables to predict the
value of another categorical variable. In particular, the K\-nearest neighbors algorithm
does this by first finding the \\(K\\) points in the training data nearest
to the new observation, and then returning the majority class vote from those
training observations. We can tune and evaluate a classifier by splitting the data randomly into a
training and test data set. The training set is used to build the classifier,
and we can tune the classifier (e.g., select the number of neighbors in K\-NN)
by maximizing estimated accuracy via cross\-validation. After we have tuned the
model we can use the test set to estimate its accuracy.
The overall process is summarized in Figure [6\.8](classification2.html#fig:06-overview).
Figure 6\.8: Overview of K\-NN classification.
The overall workflow for performing K\-nearest neighbors classification using `tidymodels` is as follows:
1. Use the `initial_split` function to split the data into a training and test set. Set the `strata` argument to the class label variable. Put the test set aside for now.
2. Use the `vfold_cv` function to split up the training data for cross\-validation.
3. Create a `recipe` that specifies the class label and predictors, as well as preprocessing steps for all variables. Pass the training data as the `data` argument of the recipe.
4. Create a `nearest_neighbors` model specification, with `neighbors = tune()`.
5. Add the recipe and model specification to a `workflow()`, and use the `tune_grid` function on the train/validation splits to estimate the classifier accuracy for a range of \\(K\\) values.
6. Pick a value of \\(K\\) that yields a high accuracy estimate that doesn’t change much if you change \\(K\\) to a nearby value.
7. Make a new model specification for the best parameter value (i.e., \\(K\\)), and retrain the classifier using the `fit` function.
8. Evaluate the estimated accuracy of the classifier on the test set using the `predict` function.
In these last two chapters, we focused on the K\-nearest neighbors algorithm,
but there are many other methods we could have used to predict a categorical label.
All algorithms have their strengths and weaknesses, and we summarize these for
the K\-NN here.
**Strengths:** K\-nearest neighbors classification
1. is a simple, intuitive algorithm,
2. requires few assumptions about what the data must look like, and
3. works for binary (two\-class) and multi\-class (more than 2 classes) classification problems.
**Weaknesses:** K\-nearest neighbors classification
1. becomes very slow as the training data gets larger,
2. may not perform well with a large number of predictors, and
3. may not perform well when classes are imbalanced.
6\.8 Predictor variable selection
---------------------------------
> **Note:** This section is not required reading for the remainder of the textbook. It is included for those readers
> interested in learning how irrelevant variables can influence the performance of a classifier, and how to
> pick a subset of useful variables to include as predictors.
Another potentially important part of tuning your classifier is to choose which
variables from your data will be treated as predictor variables. Technically, you can choose
anything from using a single predictor variable to using every variable in your
data; the K\-nearest neighbors algorithm accepts any number of
predictors. However, it is **not** the case that using more predictors always
yields better predictions! In fact, sometimes including irrelevant predictors can
actually negatively affect classifier performance.
### 6\.8\.1 The effect of irrelevant predictors
Let’s take a look at an example where K\-nearest neighbors performs
worse when given more predictors to work with. In this example, we modified
the breast cancer data to have only the `Smoothness`, `Concavity`, and
`Perimeter` variables from the original data. Then, we added irrelevant
variables that we created ourselves using a random number generator.
The irrelevant variables each take a value of 0 or 1 with equal probability for each observation, regardless
of what value the `Class` variable takes. In other words, the irrelevant variables have
no meaningful relationship with the `Class` variable.
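For reference, here is a sketch of how such irrelevant variables could be generated. The exact construction used to build `cancer_irrelevant` may differ, and the object name `cancer_irrelevant_sketch` is just illustrative; `rbinom` is simply one way to draw 0 or 1 with equal probability.
```
# add two irrelevant 0/1 predictors, each drawn with probability 0.5,
# to a few columns of the cancer data
cancer_irrelevant_sketch <- cancer |>
  select(Class, Smoothness, Concavity, Perimeter) |>
  mutate(Irrelevant1 = rbinom(n(), size = 1, prob = 0.5),
         Irrelevant2 = rbinom(n(), size = 1, prob = 0.5))
```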
```
cancer_irrelevant |>
select(Class, Smoothness, Concavity, Perimeter, Irrelevant1, Irrelevant2)
```
```
## # A tibble: 569 × 6
## Class Smoothness Concavity Perimeter Irrelevant1 Irrelevant2
## <fct> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Malignant 0.118 0.300 123. 1 0
## 2 Malignant 0.0847 0.0869 133. 0 0
## 3 Malignant 0.110 0.197 130 0 0
## 4 Malignant 0.142 0.241 77.6 0 1
## 5 Malignant 0.100 0.198 135. 0 0
## 6 Malignant 0.128 0.158 82.6 1 0
## 7 Malignant 0.0946 0.113 120. 0 1
## 8 Malignant 0.119 0.0937 90.2 1 0
## 9 Malignant 0.127 0.186 87.5 0 0
## 10 Malignant 0.119 0.227 84.0 1 1
## # ℹ 559 more rows
```
Next, we build a sequence of K\-NN classifiers that include `Smoothness`,
`Concavity`, and `Perimeter` as predictor variables, but also increasingly many irrelevant
variables. In particular, we create 6 data sets with 0, 5, 10, 15, 20, and 40 irrelevant predictors.
Then we build a model, tuned via 5\-fold cross\-validation, for each data set.
Figure [6\.9](classification2.html#fig:06-performance-irrelevant-features) shows
the estimated cross\-validation accuracy versus the number of irrelevant predictors. As
we add more irrelevant predictor variables, the estimated accuracy of our
classifier decreases. This is because the irrelevant variables add a random
amount to the distance between each pair of observations; the more irrelevant
variables there are, the more (random) influence they have, and the more they
corrupt the set of nearest neighbors that vote on the class of the new
observation to predict.
Figure 6\.9: Effect of inclusion of irrelevant predictors.
Although the accuracy decreases as expected, one surprising thing about
Figure [6\.9](classification2.html#fig:06-performance-irrelevant-features) is that it shows that the method
still outperforms the baseline majority classifier (with about 63% accuracy)
even with 40 irrelevant variables.
How could that be? Figure [6\.10](classification2.html#fig:06-neighbors-irrelevant-features) provides the answer:
the tuning procedure for the K\-nearest neighbors classifier combats the extra randomness from the irrelevant variables
by increasing the number of neighbors. Of course, because of all the extra noise in the data from the irrelevant
variables, the number of neighbors does not increase smoothly; but the general trend is increasing.
Figure [6\.11](classification2.html#fig:06-fixed-irrelevant-features) corroborates
this evidence; if we fix the number of neighbors to \\(K\=3\\), the accuracy falls off more quickly.
Figure 6\.10: Tuned number of neighbors for varying number of irrelevant predictors.
Figure 6\.11: Accuracy versus number of irrelevant predictors for tuned and untuned number of neighbors.
### 6\.8\.2 Finding a good subset of predictors
So then, if it is not ideal to use all of our variables as predictors without consideration, how
do we choose which variables we *should* use? A simple method is to rely on your scientific understanding
of the data to tell you which variables are not likely to be useful predictors. For example, in the cancer
data that we have been studying, the `ID` variable is just a unique identifier for the observation.
As it is not related to any measured property of the cells, the `ID` variable should therefore not be used
as a predictor. That is, of course, a very clear\-cut case. But the decision for the remaining variables
is less obvious, as all seem like reasonable candidates. It
is not clear which subset of them will create the best classifier. One could use visualizations and
other exploratory analyses to try to help understand which variables are potentially relevant, but
this process is both time\-consuming and error\-prone when there are many variables to consider.
Therefore we need a more systematic and programmatic way of choosing variables.
This is a very difficult problem to solve in
general, and there are a number of methods that have been developed that apply
in particular cases of interest. Here we will discuss two basic
selection methods as an introduction to the topic. See the additional resources at the end of
this chapter to find out where you can learn more about variable selection, including more advanced methods.
The first idea you might think of for a systematic way to select predictors
is to try all possible subsets of predictors and then pick the set that results in the “best” classifier.
This procedure is indeed a well\-known variable selection method referred to
as *best subset selection* ([Beale, Kendall, and Mann 1967](#ref-bealesubset); [Hocking and Leslie 1967](#ref-hockingsubset)).
In particular, you
1. create a separate model for every possible subset of predictors,
2. tune each one using cross\-validation, and
3. pick the subset of predictors that gives you the highest cross\-validation accuracy.
Best subset selection is applicable to any classification method (K\-NN or otherwise).
However, it becomes very slow when you have even a moderate
number of predictors to choose from (say, around 10\). This is because the number of possible predictor subsets
grows very quickly with the number of predictors, and you have to train the model (itself
a slow process!) for each one. For example, if we have 2 predictors—let’s call
them A and B—then we have 3 variable sets to try: A alone, B alone, and finally A
and B together. If we have 3 predictors—A, B, and C—then we have 7
to try: A, B, C, AB, BC, AC, and ABC. In general, the number of models
we have to train for \\(m\\) predictors is \\(2^m\-1\\); in other words, when we
get to 10 predictors we have over *one thousand* models to train, and
at 20 predictors we have over *one million* models to train!
So although it is a simple method, best subset selection is usually too computationally
expensive to use in practice.
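A quick check of these counts takes only a line of base R arithmetic:
```
# number of candidate models best subset selection trains for m predictors
m <- c(2, 3, 10, 20)
2^m - 1
```
This evaluates to 3, 7, 1023, and 1,048,575, matching the counts described above.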
Another idea is to iteratively build up a model by adding one predictor variable
at a time. This method—known as *forward selection* ([Efroymson 1966](#ref-forwardefroymson); [Draper and Smith 1966](#ref-forwarddraper))—is also widely
applicable and fairly straightforward. It involves the following steps:
1. Start with a model having no predictors.
2. Run the following 3 steps until you run out of predictors:
1. For each unused predictor, add it to the model to form a *candidate model*.
2. Tune all of the candidate models.
3. Update the model to be the candidate model with the highest cross\-validation accuracy.
3. Select the model that provides the best trade\-off between accuracy and simplicity.
Say you have \\(m\\) total predictors to work with. In the first iteration, you have to make
\\(m\\) candidate models, each with 1 predictor. Then in the second iteration, you have
to make \\(m\-1\\) candidate models, each with 2 predictors (the one you chose before and a new one).
This pattern continues for as many iterations as you want. If you run the method
all the way until you run out of predictors to choose, you will end up training
\\(\\frac{1}{2}m(m\+1\)\\) separate models. This is a *big* improvement from the \\(2^m\-1\\)
models that best subset selection requires you to train! For example, while best subset selection requires
training over 1000 candidate models with 10 predictors, forward selection requires training only 55 candidate models.
Therefore we will continue the rest of this section using forward selection.
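Before moving on, we can double\-check the forward selection counts the same way:
```
# number of candidate models forward selection trains for m predictors
m <- c(2, 3, 10, 20)
m * (m + 1) / 2
```
This gives 3, 6, 55, and 210 candidate models, matching the 55 quoted above for 10 predictors.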
> **Note:** One word of caution before we move on. Every additional model that you train
> increases the likelihood that you will get unlucky and stumble
> on a model that has a high cross\-validation accuracy estimate, but a low true
> accuracy on the test data and other future observations.
> Since forward selection involves training a lot of models, you run a fairly
> high risk of this happening. To keep this risk low, only use forward selection
> when you have a large amount of data and a relatively small total number of
> predictors. More advanced methods do not suffer from this
> problem as much; see the additional resources at the end of this chapter for
> where to learn more about advanced predictor selection methods.
### 6\.8\.3 Forward selection in R
We now turn to implementing forward selection in R.
Unfortunately there is no built\-in way to do this using the `tidymodels` framework,
so we will have to code it ourselves. First we will use the `select` function to extract a smaller set of predictors
to work with in this illustrative example—`Smoothness`, `Concavity`, `Perimeter`, `Irrelevant1`, `Irrelevant2`, and `Irrelevant3`—as
well as the `Class` variable as the label. We will also extract the column names for the full set of predictors.
```
cancer_subset <- cancer_irrelevant |>
select(Class,
Smoothness,
Concavity,
Perimeter,
Irrelevant1,
Irrelevant2,
Irrelevant3)
names <- colnames(cancer_subset |> select(-Class))
cancer_subset
```
```
## # A tibble: 569 × 7
## Class Smoothness Concavity Perimeter Irrelevant1 Irrelevant2 Irrelevant3
## <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Malignant 0.118 0.300 123. 1 0 1
## 2 Malignant 0.0847 0.0869 133. 0 0 0
## 3 Malignant 0.110 0.197 130 0 0 0
## 4 Malignant 0.142 0.241 77.6 0 1 0
## 5 Malignant 0.100 0.198 135. 0 0 0
## 6 Malignant 0.128 0.158 82.6 1 0 1
## 7 Malignant 0.0946 0.113 120. 0 1 1
## 8 Malignant 0.119 0.0937 90.2 1 0 0
## 9 Malignant 0.127 0.186 87.5 0 0 1
## 10 Malignant 0.119 0.227 84.0 1 1 0
## # ℹ 559 more rows
```
The key idea of the forward selection code is to use the `paste` function (which concatenates strings
separated by spaces) to create a model formula for each subset of predictors for which we want to build a model.
The `collapse` argument tells `paste` what to put between the items in the list;
to make a formula, we need to put a `+` symbol between each variable.
As an example, let’s make a model formula for all the predictors,
which should output something like
`Class ~ Smoothness + Concavity + Perimeter + Irrelevant1 + Irrelevant2 + Irrelevant3`:
```
example_formula <- paste("Class", "~", paste(names, collapse="+"))
example_formula
```
```
## [1] "Class ~ Smoothness+Concavity+Perimeter+Irrelevant1+Irrelevant2+Irrelevant3"
```
Finally, we need to write some code that performs the task of sequentially
finding the best predictor to add to the model.
If you recall the end of the wrangling chapter, we mentioned
that sometimes one needs more flexible forms of iteration than what
we have used earlier, and in these cases one typically resorts to
a *for loop*; see [the chapter on iteration](https://r4ds.had.co.nz/iteration.html) in *R for Data Science* ([Wickham and Grolemund 2016](#ref-wickham2016r)).
Here we will use two for loops:
one over increasing predictor set sizes
(where you see `for (i in 1:n_total)` below),
and another to check which predictor to add in each round (where you see `for (j in 1:length(names))` below).
For each set of predictors to try, we construct a model formula,
pass it into a `recipe`, build a `workflow` that tunes
a K\-NN classifier using 5\-fold cross\-validation,
and finally record the estimated accuracy.
```
# create an empty tibble to store the results
accuracies <- tibble(size = integer(),
model_string = character(),
accuracy = numeric())
# create a model specification
knn_spec <- nearest_neighbor(weight_func = "rectangular",
neighbors = tune()) |>
set_engine("kknn") |>
set_mode("classification")
# create a 5-fold cross-validation object
cancer_vfold <- vfold_cv(cancer_subset, v = 5, strata = Class)
# store the total number of predictors
n_total <- length(names)
# stores selected predictors
selected <- c()
# for every size from 1 to the total number of predictors
for (i in 1:n_total) {
# for every predictor still not added yet
accs <- list()
models <- list()
for (j in 1:length(names)) {
# create a model string for this combination of predictors
preds_new <- c(selected, names[[j]])
model_string <- paste("Class", "~", paste(preds_new, collapse="+"))
# create a recipe from the model string
cancer_recipe <- recipe(as.formula(model_string),
data = cancer_subset) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
# tune the K-NN classifier with these predictors,
# and collect the accuracy for the best K
acc <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
tune_grid(resamples = cancer_vfold, grid = 10) |>
collect_metrics() |>
filter(.metric == "accuracy") |>
summarize(mx = max(mean))
acc <- acc$mx |> unlist()
# add this result to the dataframe
accs[[j]] <- acc
models[[j]] <- model_string
}
jstar <- which.max(unlist(accs))
accuracies <- accuracies |>
add_row(size = i,
model_string = models[[jstar]],
accuracy = accs[[jstar]])
selected <- c(selected, names[[jstar]])
names <- names[-jstar]
}
accuracies
```
```
## # A tibble: 6 × 3
## size model_string accuracy
## <int> <chr> <dbl>
## 1 1 Class ~ Perimeter 0.896
## 2 2 Class ~ Perimeter+Concavity 0.916
## 3 3 Class ~ Perimeter+Concavity+Smoothness 0.931
## 4 4 Class ~ Perimeter+Concavity+Smoothness+Irrelevant1 0.928
## 5 5 Class ~ Perimeter+Concavity+Smoothness+Irrelevant1+Irrelevant3 0.924
## 6 6 Class ~ Perimeter+Concavity+Smoothness+Irrelevant1+Irrelevant3… 0.902
```
Interesting! The forward selection procedure first added the three meaningful variables `Perimeter`,
`Concavity`, and `Smoothness`, followed by the irrelevant variables. Figure [6\.12](classification2.html#fig:06-fwdsel-3)
visualizes the accuracy versus the number of predictors in the model. You can see that
as meaningful predictors are added, the estimated accuracy increases substantially; and as you add irrelevant
variables, the accuracy either exhibits small fluctuations or decreases as the model attempts to tune the number
of neighbors to account for the extra noise. In order to pick the right model from the sequence, you have
to balance high accuracy and model simplicity (i.e., having fewer predictors and a lower chance of overfitting). The
way to find that balance is to look for the *elbow*
in Figure [6\.12](classification2.html#fig:06-fwdsel-3), i.e., the place on the plot where the accuracy stops increasing dramatically and
levels off or begins to decrease. The elbow in Figure [6\.12](classification2.html#fig:06-fwdsel-3) appears to occur at the model with
3 predictors; after that point the accuracy levels off. So here the right trade\-off of accuracy and number of predictors
occurs with 3 variables: `Class ~ Perimeter + Concavity + Smoothness`. In other words, we have successfully removed irrelevant
predictors from the model! It is always worth remembering, however, that what cross\-validation gives you
is an *estimate* of the true accuracy; you have to use your judgement when looking at this plot to decide
where the elbow occurs, and whether adding a variable provides a meaningful increase in accuracy.
Figure 6\.12: Estimated accuracy versus the number of predictors for the sequence of models built using forward selection.
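The general shape of Figure [6\.12](classification2.html#fig:06-fwdsel-3) can be recreated from the `accuracies` tibble computed above with a short `ggplot2` sketch like the one below (the styling of the figure itself may differ).
```
# plot estimated accuracy versus the number of predictors
ggplot(accuracies, aes(x = size, y = accuracy)) +
  geom_line() +
  geom_point() +
  labs(x = "Number of predictors", y = "Estimated accuracy") +
  theme(text = element_text(size = 12))
```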
> **Note:** Since the choice of which variables to include as predictors is
> part of tuning your classifier, you *cannot use your test data* for this
> process!
6\.9 Exercises
--------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Classification II: evaluation and tuning” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
6\.10 Additional resources
--------------------------
* The [`tidymodels` website](https://tidymodels.org/packages) is an excellent
reference for more details on, and advanced usage of, the functions and
packages in the past two chapters. Aside from that, it also has a [nice
beginner’s tutorial](https://www.tidymodels.org/start/) and [an extensive list
of more advanced examples](https://www.tidymodels.org/learn/) that you can use
to continue learning beyond the scope of this book. It’s worth noting that the
`tidymodels` package does a lot more than just classification, and so the
examples on the website similarly go beyond classification as well. In the next
two chapters, you’ll learn about another kind of predictive modeling setting,
so it might be worth visiting the website only after reading through those
chapters.
* *An Introduction to Statistical Learning* ([James et al. 2013](#ref-james2013introduction)) provides
a great next stop in the process of
learning about classification. Chapter 4 discusses additional basic techniques
for classification that we do not cover, such as logistic regression, linear
discriminant analysis, and naive Bayes. Chapter 5 goes into much more detail
about cross\-validation. Chapters 8 and 9 cover decision trees and support
vector machines, two very popular but more advanced classification methods.
Finally, Chapter 6 covers a number of methods for selecting predictor
variables. Note that while this book is still a very accessible introductory
text, it assumes a bit more mathematical background than we do here.
6\.1 Overview
-------------
This chapter continues the introduction to predictive modeling through
classification. While the previous chapter covered training and data
preprocessing, this chapter focuses on how to evaluate the performance of
a classifier, as well as how to improve the classifier (where possible)
to maximize its accuracy.
6\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Describe what training, validation, and test data sets are and how they are used in classification.
* Split data into training, validation, and test data sets.
* Describe what a random seed is and its importance in reproducible data analysis.
* Set the random seed in R using the `set.seed` function.
* Describe and interpret accuracy, precision, recall, and confusion matrices.
* Evaluate classification accuracy, precision, and recall in R using a test set, a single validation set, and cross\-validation.
* Produce a confusion matrix in R.
* Choose the number of neighbors in a K\-nearest neighbors classifier by maximizing estimated cross\-validation accuracy.
* Describe underfitting and overfitting, and relate it to the number of neighbors in K\-nearest neighbors classification.
* Describe the advantages and disadvantages of the K\-nearest neighbors classification algorithm.
6\.3 Evaluating performance
---------------------------
Sometimes our classifier might make the wrong prediction. A classifier does not
need to be right 100% of the time to be useful, though we don’t want the
classifier to make too many wrong predictions. How do we measure how “good” our
classifier is? Let’s revisit the
[breast cancer images data](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29) ([Street, Wolberg, and Mangasarian 1993](#ref-streetbreastcancer))
and think about how our classifier will be used in practice. A biopsy will be
performed on a *new* patient’s tumor, the resulting image will be analyzed,
and the classifier will be asked to decide whether the tumor is benign or
malignant. The key word here is *new*: our classifier is “good” if it provides
accurate predictions on data *not seen during training*, as this implies that
it has actually learned about the relationship between the predictor variables and response variable,
as opposed to simply memorizing the labels of individual training data examples.
But then, how can we evaluate our classifier without visiting the hospital to collect more
tumor images?
The trick is to split the data into a **training set** and **test set** (Figure [6\.1](classification2.html#fig:06-training-test))
and use only the **training set** when building the classifier.
Then, to evaluate the performance of the classifier, we first set aside the labels from the **test set**,
and then use the classifier to predict the labels in the **test set**. If our predictions match the actual
labels for the observations in the **test set**, then we have some
confidence that our classifier might also accurately predict the class
labels for new observations without known class labels.
> **Note:** If there were a golden rule of machine learning, it might be this:
> *you cannot use the test data to build the model!* If you do, the model gets to
> “see” the test data in advance, making it look more accurate than it really
> is. Imagine how bad it would be to overestimate your classifier’s accuracy
> when predicting whether a patient’s tumor is malignant or benign!
Figure 6\.1: Splitting the data into training and testing sets.
How exactly can we assess how well our predictions match the actual labels for
the observations in the test set? One way we can do this is to calculate the
prediction **accuracy**. This is the fraction of examples for which the
classifier made the correct prediction. To calculate this, we divide the number
of correct predictions by the number of predictions made.
The process for assessing if our predictions match the actual labels in the
test set is illustrated in Figure [6\.2](classification2.html#fig:06-ML-paradigm-test).
\\\[\\mathrm{accuracy} \= \\frac{\\mathrm{number \\; of \\; correct \\; predictions}}{\\mathrm{total \\; number \\; of \\; predictions}}\\]
Figure 6\.2: Process for splitting the data and finding the prediction accuracy.
Accuracy is a convenient, general\-purpose way to summarize the performance of a classifier with
a single number. But prediction accuracy by itself does not tell the whole
story. In particular, accuracy alone only tells us how often the classifier
makes mistakes in general, but does not tell us anything about the *kinds* of
mistakes the classifier makes. A more comprehensive view of performance can be
obtained by additionally examining the **confusion matrix**. The confusion
matrix shows how many test set labels of each type are predicted correctly and
incorrectly, which gives us more detail about the kinds of mistakes the
classifier tends to make. Table [6\.1](classification2.html#tab:confusion-matrix) shows an example
of what a confusion matrix might look like for the tumor image data with
a test set of 65 observations.
Table 6\.1: An example confusion matrix for the tumor image data.
| | Actually Malignant | Actually Benign |
| --- | --- | --- |
| **Predicted Malignant** | 1 | 4 |
| **Predicted Benign** | 3 | 57 |
In the example in Table [6\.1](classification2.html#tab:confusion-matrix), we see that there was
1 malignant observation that was correctly classified as malignant (top left corner),
and 57 benign observations that were correctly classified as benign (bottom right corner).
However, we can also see that the classifier made some mistakes:
it classified 3 malignant observations as benign, and 4 benign observations as
malignant. The accuracy of this classifier is roughly
89%, given by the formula
\\\[\\mathrm{accuracy} \= \\frac{\\mathrm{number \\; of \\; correct \\; predictions}}{\\mathrm{total \\; number \\; of \\; predictions}} \= \\frac{1\+57}{1\+57\+4\+3} \= 0\.892\.\\]
But we can also see that the classifier only identified 1 out of 4 total malignant
tumors; in other words, it misclassified 75% of the malignant cases present in the
data set! In this example, misclassifying a malignant tumor is a potentially
disastrous error, since it may lead to a patient who requires treatment not receiving it.
Since we are particularly interested in identifying malignant cases, this
classifier would likely be unacceptable even with an accuracy of 89%.
Focusing more on one label than the other is common in classification problems.
In such cases, we typically refer to the label we are more
interested in identifying as the *positive* label, and the other as the
*negative* label. In the tumor example, we would refer to malignant
observations as *positive*, and benign observations as *negative*. We can then
use the following terms to talk about the four kinds of prediction that the
classifier can make, corresponding to the four entries in the confusion matrix:
* **True Positive:** A malignant observation that was classified as malignant (top left in Table [6\.1](classification2.html#tab:confusion-matrix)).
* **False Positive:** A benign observation that was classified as malignant (top right in Table [6\.1](classification2.html#tab:confusion-matrix)).
* **True Negative:** A benign observation that was classified as benign (bottom right in Table [6\.1](classification2.html#tab:confusion-matrix)).
* **False Negative:** A malignant observation that was classified as benign (bottom left in Table [6\.1](classification2.html#tab:confusion-matrix)).
A perfect classifier would have zero false negatives and false positives (and
therefore, 100% accuracy). However, classifiers in practice will almost always
make some errors. So you should think about which kinds of error are most
important in your application, and use the confusion matrix to quantify and
report them. Two commonly used metrics that we can compute using the confusion
matrix are the **precision** and **recall** of the classifier. These are often
reported together with accuracy. *Precision* quantifies how many of the
positive predictions the classifier made were actually positive. Intuitively,
we would like a classifier to have a *high* precision: for a classifier with
high precision, if the classifier reports that a new observation is positive,
we can trust that the new observation is indeed positive. We can compute the
precision of a classifier using the entries in the confusion matrix, with the
formula
\\\[\\mathrm{precision} \= \\frac{\\mathrm{number \\; of \\; correct \\; positive \\; predictions}}{\\mathrm{total \\; number \\; of \\; positive \\; predictions}}.\\]
*Recall* quantifies how many of the positive observations in the test set were
identified as positive. Intuitively, we would like a classifier to have a
*high* recall: for a classifier with high recall, if there is a positive
observation in the test data, we can trust that the classifier will find it.
We can also compute the recall of the classifier using the entries in the
confusion matrix, with the formula
\\\[\\mathrm{recall} \= \\frac{\\mathrm{number \\; of \\; correct \\; positive \\; predictions}}{\\mathrm{total \\; number \\; of \\; positive \\; test \\; set \\; observations}}.\\]
In the example presented in Table [6\.1](classification2.html#tab:confusion-matrix), we have that the precision and recall are
\\\[\\mathrm{precision} \= \\frac{1}{1\+4} \= 0\.20, \\quad \\mathrm{recall} \= \\frac{1}{1\+3} \= 0\.25\.\\]
So even with an accuracy of 89%, the precision and recall of the classifier
were both relatively low. For this data analysis context, recall is
particularly important: if someone has a malignant tumor, we certainly want to
identify it. A recall of just 25% would likely be unacceptable!
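As a quick arithmetic check of these formulas, we can compute the accuracy, precision, and recall directly from the four entries of Table [6\.1](classification2.html#tab:confusion-matrix); the base R sketch below simply restates the calculations above.
```
# entries of the example confusion matrix in Table 6.1
tp <- 1   # malignant observations predicted as malignant (true positives)
fp <- 4   # benign observations predicted as malignant (false positives)
fn <- 3   # malignant observations predicted as benign (false negatives)
tn <- 57  # benign observations predicted as benign (true negatives)

accuracy <- (tp + tn) / (tp + tn + fp + fn)
precision <- tp / (tp + fp)
recall <- tp / (tp + fn)
round(c(accuracy = accuracy, precision = precision, recall = recall), 3)
```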
> **Note:** It is difficult to achieve both high precision and high recall at
> the same time; models with high precision tend to have low recall and vice
> versa. As an example, we can easily make a classifier that has *perfect
> recall*: just *always* guess positive! This classifier will of course find
> every positive observation in the test set, but it will make lots of false
> positive predictions along the way and have low precision. Similarly, we can
> easily make a classifier that has *perfect precision*: *never* guess
> positive! This classifier will never incorrectly identify an observation as
> positive, but it will make a lot of false negative predictions along the way.
> In fact, this classifier will have 0% recall! Of course, most real
> classifiers fall somewhere in between these two extremes. But these examples
> serve to show that in settings where one of the classes is of interest (i.e.,
> there is a *positive* label), there is a trade\-off between precision and recall that one has to
> make when designing a classifier.
6\.4 Randomness and seeds
-------------------------
Beginning in this chapter, our data analyses will often involve the use
of *randomness*. We use randomness any time we need to make a decision in our
analysis that needs to be fair, unbiased, and not influenced by human input.
For example, in this chapter, we need to split
a data set into a training set and test set to evaluate our classifier. We
certainly do not want to choose how to split
the data ourselves by hand, as we want to avoid accidentally influencing the result
of the evaluation. So instead, we let R *randomly* split the data.
In future chapters we will use randomness
in many other ways, e.g., to help us select a small subset of data from a larger data set,
to pick groupings of data, and more.
However, the use of randomness runs counter to one of the main
tenets of good data analysis practice: *reproducibility*. Recall that a reproducible
analysis produces the same result each time it is run; if we include randomness
in the analysis, would we not get a different result each time?
The trick is that in R—and other programming languages—randomness
is not actually random! Instead, R uses a *random number generator* that
produces a sequence of numbers that
are completely determined by a
*seed value*. Once you set the seed value
using the `set.seed` function, everything after that point may *look* random,
but is actually totally reproducible. As long as you pick the same seed
value, you get the same result!
Let’s use an example to investigate how seeds work in R. Say we want
to randomly pick 10 numbers from 0 to 9 in R using the `sample` function,
but we want it to be reproducible. Before using the sample function,
we call `set.seed`, and pass it any integer as an argument.
Here, we pass in the number `1`.
```
set.seed(1)
random_numbers1 <- sample(0:9, 10, replace = TRUE)
random_numbers1
```
```
## [1] 8 3 6 0 1 6 1 2 0 4
```
You can see that `random_numbers1` is a vector of 10 numbers
from 0 to 9 that, from all appearances, looks random. If
we run the `sample` function again, we will
get a fresh batch of 10 numbers that also look random.
```
random_numbers2 <- sample(0:9, 10, replace = TRUE)
random_numbers2
```
```
## [1] 4 9 5 9 6 8 4 4 8 8
```
If we want to force R to produce the same sequences of random numbers,
we can simply call the `set.seed` function again with the same argument
value.
```
set.seed(1)
random_numbers1_again <- sample(0:9, 10, replace = TRUE)
random_numbers1_again
```
```
## [1] 8 3 6 0 1 6 1 2 0 4
```
```
random_numbers2_again <- sample(0:9, 10, replace = TRUE)
random_numbers2_again
```
```
## [1] 4 9 5 9 6 8 4 4 8 8
```
Notice that after setting the seed, we get the same two sequences of numbers in the same order. `random_numbers1` and `random_numbers1_again` produce the same sequence of numbers, and the same can be said about `random_numbers2` and `random_numbers2_again`. And if we choose
a different value for the seed—say, 4235—we
obtain a different sequence of random numbers.
```
set.seed(4235)
random_numbers1_different <- sample(0:9, 10, replace = TRUE)
random_numbers1_different
```
```
## [1] 8 3 1 4 6 8 8 4 1 7
```
```
random_numbers2_different <- sample(0:9, 10, replace = TRUE)
random_numbers2_different
```
```
## [1] 3 7 8 2 8 8 6 3 3 8
```
In other words, even though the sequences of numbers that R is generating *look*
random, they are totally determined when we set a seed value!
So what does this mean for data analysis? Well, `sample` is certainly
not the only function that uses randomness in R. Many of the functions
that we use in `tidymodels`, `tidyverse`, and beyond use randomness—some of them
without even telling you about it. So at the beginning of every data analysis you
do, right after loading packages, you should call the `set.seed` function and
pass it an integer that you pick.
Also note that when R starts up, it creates its own seed to use. So if you do not
explicitly call the `set.seed` function in your code, your results will
likely not be reproducible.
And finally, be careful to set the seed *only once* at the beginning of a data
analysis. Each time you set the seed, you are inserting your own human input,
thereby influencing the analysis. If you use `set.seed` many times
throughout your analysis, the randomness that R uses will not look
as random as it should.
In summary: if you want your analysis to be reproducible, i.e., produce *the same result* each time you
run it, make sure to use `set.seed` exactly once at the beginning of the analysis.
Different argument values in `set.seed` lead to different patterns of randomness, but as long as
you pick the same argument value your result will be the same.
In the remainder of the textbook, we will set the seed once at the beginning of each chapter.
6\.5 Evaluating performance with `tidymodels`
---------------------------------------------
Back to evaluating classifiers now!
In R, we can use the `tidymodels` package not only to perform K\-nearest neighbors
classification, but also to assess how well our classification worked.
Let’s work through an example of how to use tools from `tidymodels` to evaluate a classifier
using the breast cancer data set from the previous chapter.
We begin the analysis by loading the packages we require,
reading in the breast cancer data,
and then making a quick scatter plot visualization of
tumor cell concavity versus smoothness colored by diagnosis in Figure [6\.3](classification2.html#fig:06-precode).
You will also notice that we set the random seed here at the beginning of the analysis
using the `set.seed` function, as described in Section [6\.4](classification2.html#randomseeds).
```
# load packages
library(tidyverse)
library(tidymodels)
# set the seed
set.seed(1)
# load data
cancer <- read_csv("data/wdbc_unscaled.csv") |>
# convert the character Class variable to the factor datatype
mutate(Class = as_factor(Class)) |>
# rename the factor values to be more readable
mutate(Class = fct_recode(Class, "Malignant" = "M", "Benign" = "B"))
# create scatter plot of tumor cell concavity versus smoothness,
# labeling the points by diagnosis class
perim_concav <- cancer |>
ggplot(aes(x = Smoothness, y = Concavity, color = Class)) +
geom_point(alpha = 0.5) +
labs(color = "Diagnosis") +
scale_color_manual(values = c("darkorange", "steelblue")) +
theme(text = element_text(size = 12))
perim_concav
```
Figure 6\.3: Scatter plot of tumor cell concavity versus smoothness colored by diagnosis label.
### 6\.5\.1 Create the train / test split
Once we have decided on a predictive question to answer and done some
preliminary exploration, the very next thing to do is to split the data into
the training and test sets. Typically, the training set is between 50% and 95% of
the data, while the test set is the remaining 5% to 50%; the intuition is that
you want to trade off between training an accurate model (by using a larger
training data set) and getting an accurate evaluation of its performance (by
using a larger test data set). Here, we will use 75% of the data for training,
and 25% for testing.
The `initial_split` function from `tidymodels` handles the procedure of splitting
the data for us. It also applies two very important steps when splitting to ensure
that the accuracy estimates from the test data are reasonable. First, it
**shuffles** the data before splitting, which ensures that any ordering present
in the data does not influence the data that ends up in the training and testing sets.
Second, it **stratifies** the data by the class label, to ensure that roughly
the same proportion of each class ends up in both the training and testing sets. For example,
in our data set, roughly 63% of the
observations are from the benign class, and 37% are from the malignant class,
so `initial_split` ensures that roughly 63% of the training data are benign,
37% of the training data are malignant,
and the same proportions exist in the testing data.
Let’s use the `initial_split` function to create the training and testing sets.
We will specify that `prop = 0.75` so that 75% of our original data set ends up
in the training set. We will also set the `strata` argument to the categorical label variable
(here, `Class`) to ensure that the training and testing subsets contain the
right proportions of each category of observation.
The `training` and `testing` functions then extract the training and testing
data sets into two separate data frames.
Note that the `initial_split` function uses randomness, but since we set the
seed earlier in the chapter, the split will be reproducible.
```
cancer_split <- initial_split(cancer, prop = 0.75, strata = Class)
cancer_train <- training(cancer_split)
cancer_test <- testing(cancer_split)
```
```
glimpse(cancer_train)
```
```
## Rows: 426
## Columns: 12
## $ ID <dbl> 8510426, 8510653, 8510824, 854941, 85713702, 857155,…
## $ Class <fct> Benign, Benign, Benign, Benign, Benign, Benign, Beni…
## $ Radius <dbl> 13.540, 13.080, 9.504, 13.030, 8.196, 12.050, 13.490…
## $ Texture <dbl> 14.36, 15.71, 12.44, 18.42, 16.84, 14.63, 22.30, 21.…
## $ Perimeter <dbl> 87.46, 85.63, 60.34, 82.61, 51.71, 78.04, 86.91, 74.…
## $ Area <dbl> 566.3, 520.0, 273.9, 523.8, 201.9, 449.3, 561.0, 427…
## $ Smoothness <dbl> 0.09779, 0.10750, 0.10240, 0.08983, 0.08600, 0.10310…
## $ Compactness <dbl> 0.08129, 0.12700, 0.06492, 0.03766, 0.05943, 0.09092…
## $ Concavity <dbl> 0.066640, 0.045680, 0.029560, 0.025620, 0.015880, 0.…
## $ Concave_Points <dbl> 0.047810, 0.031100, 0.020760, 0.029230, 0.005917, 0.…
## $ Symmetry <dbl> 0.1885, 0.1967, 0.1815, 0.1467, 0.1769, 0.1675, 0.18…
## $ Fractal_Dimension <dbl> 0.05766, 0.06811, 0.06905, 0.05863, 0.06503, 0.06043…
```
```
glimpse(cancer_test)
```
```
## Rows: 143
## Columns: 12
## $ ID <dbl> 842517, 84300903, 84501001, 84610002, 848406, 848620…
## $ Class <fct> Malignant, Malignant, Malignant, Malignant, Malignan…
## $ Radius <dbl> 20.570, 19.690, 12.460, 15.780, 14.680, 16.130, 19.8…
## $ Texture <dbl> 17.77, 21.25, 24.04, 17.89, 20.13, 20.68, 22.15, 14.…
## $ Perimeter <dbl> 132.90, 130.00, 83.97, 103.60, 94.74, 108.10, 130.00…
## $ Area <dbl> 1326.0, 1203.0, 475.9, 781.0, 684.5, 798.8, 1260.0, …
## $ Smoothness <dbl> 0.08474, 0.10960, 0.11860, 0.09710, 0.09867, 0.11700…
## $ Compactness <dbl> 0.07864, 0.15990, 0.23960, 0.12920, 0.07200, 0.20220…
## $ Concavity <dbl> 0.08690, 0.19740, 0.22730, 0.09954, 0.07395, 0.17220…
## $ Concave_Points <dbl> 0.070170, 0.127900, 0.085430, 0.066060, 0.052590, 0.…
## $ Symmetry <dbl> 0.1812, 0.2069, 0.2030, 0.1842, 0.1586, 0.2164, 0.15…
## $ Fractal_Dimension <dbl> 0.05667, 0.05999, 0.08243, 0.06082, 0.05922, 0.07356…
```
We can see from `glimpse` in the code above that the training set contains 426
observations, while the test set contains 143 observations. This corresponds to
a train / test split of 75% / 25%, as desired. Recall from Chapter [5](classification1.html#classification1)
that we use the `glimpse` function to view data with a large number of columns,
as it prints the data such that the columns go down the page (instead of across).
We can use `group_by` and `summarize` to find the percentage of malignant and benign classes
in `cancer_train`, and we see that about 63% of the training
data are benign and 37%
are malignant, indicating that our class proportions were roughly preserved when we split the data.
```
cancer_proportions <- cancer_train |>
group_by(Class) |>
summarize(n = n()) |>
mutate(percent = 100*n/nrow(cancer_train))
cancer_proportions
```
```
## # A tibble: 2 × 3
## Class n percent
## <fct> <int> <dbl>
## 1 Malignant 159 37.3
## 2 Benign 267 62.7
```
### 6\.5\.2 Preprocess the data
As we mentioned in the last chapter, K\-nearest neighbors is sensitive to the scale of the predictors,
so we should perform some preprocessing to standardize them. An
additional consideration to keep in mind when doing this is that we should
create the standardization preprocessor using **only the training data**. This ensures that
our test data does not influence any aspect of our model training. Once we have
created the standardization preprocessor, we can then apply it separately to both the
training and test data sets.
Fortunately, the `recipe` framework from `tidymodels` helps us handle
this properly. Below we construct and prepare the recipe using only the training
data (due to `data = cancer_train` in the first line).
```
cancer_recipe <- recipe(Class ~ Smoothness + Concavity, data = cancer_train) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
```
### 6\.5\.3 Train the classifier
Now that we have split our original data set into training and test sets, we
can create our K\-nearest neighbors classifier with only the training set using
the technique we learned in the previous chapter. For now, we will just choose
the number \\(K\\) of neighbors to be 3, and use concavity and smoothness as the
predictors. As before we need to create a model specification, combine
the model specification and recipe into a workflow, and then finally
use `fit` with the training data `cancer_train` to build the classifier.
```
knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = 3) |>
set_engine("kknn") |>
set_mode("classification")
knn_fit <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
fit(data = cancer_train)
knn_fit
```
```
## ══ Workflow [trained] ══════════
## Preprocessor: Recipe
## Model: nearest_neighbor()
##
## ── Preprocessor ──────────
## 2 Recipe Steps
##
## • step_scale()
## • step_center()
##
## ── Model ──────────
##
## Call:
## kknn::train.kknn(formula = ..y ~ ., data = data, ks = min_rows(3, data, 5),
## kernel = ~"rectangular")
##
## Type of response variable: nominal
## Minimal misclassification: 0.1126761
## Best kernel: rectangular
## Best k: 3
```
### 6\.5\.4 Predict the labels in the test set
Now that we have a K\-nearest neighbors classifier object, we can use it to
predict the class labels for our test set. We use the `bind_cols` function to add the
column of predictions to the original test data, creating the
`cancer_test_predictions` data frame. The `Class` variable contains the actual
diagnoses, while the `.pred_class` column contains the predicted diagnoses from the
classifier.
```
cancer_test_predictions <- predict(knn_fit, cancer_test) |>
bind_cols(cancer_test)
cancer_test_predictions
```
```
## # A tibble: 143 × 13
## .pred_class ID Class Radius Texture Perimeter Area Smoothness
## <fct> <dbl> <fct> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Benign 842517 Malignant 20.6 17.8 133. 1326 0.0847
## 2 Malignant 84300903 Malignant 19.7 21.2 130 1203 0.110
## 3 Malignant 84501001 Malignant 12.5 24.0 84.0 476. 0.119
## 4 Malignant 84610002 Malignant 15.8 17.9 104. 781 0.0971
## 5 Benign 848406 Malignant 14.7 20.1 94.7 684. 0.0987
## 6 Malignant 84862001 Malignant 16.1 20.7 108. 799. 0.117
## 7 Malignant 849014 Malignant 19.8 22.2 130 1260 0.0983
## 8 Malignant 8511133 Malignant 15.3 14.3 102. 704. 0.107
## 9 Malignant 852552 Malignant 16.6 21.4 110 905. 0.112
## 10 Malignant 853612 Malignant 11.8 18.7 77.9 441. 0.111
## # ℹ 133 more rows
## # ℹ 5 more variables: Compactness <dbl>, Concavity <dbl>, Concave_Points <dbl>,
## # Symmetry <dbl>, Fractal_Dimension <dbl>
```
### 6\.5\.5 Evaluate performance
Finally, we can assess our classifier’s performance. First, we will examine
accuracy. To do this we use the
`metrics` function from `tidymodels`,
specifying the `truth` and `estimate` arguments:
```
cancer_test_predictions |>
metrics(truth = Class, estimate = .pred_class) |>
filter(.metric == "accuracy")
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 accuracy binary 0.853
```
In the metrics data frame, we filtered the `.metric` column since we are
interested in the `accuracy` row. Other entries involve other metrics that
are beyond the scope of this book. Looking at the value of the `.estimate` variable
shows that the estimated accuracy of the classifier on the test data
was 85%.
To compute the precision and recall, we can use the `precision` and `recall` functions
from `tidymodels`. We first check the order of the
labels in the `Class` variable using the `levels` function:
```
cancer_test_predictions |> pull(Class) |> levels()
```
```
## [1] "Malignant" "Benign"
```
This shows that `"Malignant"` is the first level. Therefore we will set
the `truth` and `estimate` arguments to `Class` and `.pred_class` as before,
but also specify that the “positive” class corresponds to the first factor level via `event_level="first"`.
If the labels were in the other order, we would instead use `event_level="second"`.
```
cancer_test_predictions |>
precision(truth = Class, estimate = .pred_class, event_level = "first")
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 precision binary 0.767
```
```
cancer_test_predictions |>
recall(truth = Class, estimate = .pred_class, event_level = "first")
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 recall binary 0.868
```
The output shows that the estimated precision and recall of the classifier on the test data were
77% and 87%, respectively.
Finally, we can look at the *confusion matrix* for the classifier using the `conf_mat` function.
```
confusion <- cancer_test_predictions |>
conf_mat(truth = Class, estimate = .pred_class)
confusion
```
```
## Truth
## Prediction Malignant Benign
## Malignant 46 14
## Benign 7 76
```
The confusion matrix shows 46 observations were correctly predicted
as malignant, and 76 were correctly predicted as benign.
It also shows that the classifier made some mistakes; in particular,
it classified 7 observations as benign when they were actually malignant,
and 14 observations as malignant when they were actually benign.
Using our formulas from earlier, we see that the accuracy, precision, and recall agree with what R reported.
\\\[\\mathrm{accuracy} \= \\frac{\\mathrm{number \\; of \\; correct \\; predictions}}{\\mathrm{total \\; number \\; of \\; predictions}} \= \\frac{46\+76}{46\+76\+14\+7} \= 0\.853\\]
\\\[\\mathrm{precision} \= \\frac{\\mathrm{number \\; of \\; correct \\; positive \\; predictions}}{\\mathrm{total \\; number \\; of \\; positive \\; predictions}} \= \\frac{46}{46 \+ 14} \= 0\.767\\]
\\\[\\mathrm{recall} \= \\frac{\\mathrm{number \\; of \\; correct \\; positive \\; predictions}}{\\mathrm{total \\; number \\; of \\; positive \\; test \\; set \\; observations}} \= \\frac{46}{46\+7} \= 0\.868\\]
### 6\.5\.6 Critically analyze performance
We now know that the classifier was 85% accurate
on the test data set, and had a precision of 77% and a recall of 87%.
That sounds pretty good! Wait, *is* it good? Or do we need something higher?
In general, a *good* value for accuracy (as well as precision and recall, if applicable)
depends on the application; you must critically analyze your accuracy in the context of the problem
you are solving. For example, if we were building a classifier for a kind of tumor that is benign 99%
of the time, a classifier with 99% accuracy is not terribly impressive (just always guess benign!).
And beyond just accuracy, we need to consider the precision and recall: as mentioned
earlier, the *kind* of mistake the classifier makes is
important in many applications as well. In the previous example with 99% benign observations, it might be very bad for the
classifier to predict “benign” when the actual class is “malignant” (a false negative), as this
might result in a patient not receiving appropriate medical attention. In other
words, in this context, we need the classifier to have a *high recall*. On the
other hand, it might be less bad for the classifier to guess “malignant” when
the actual class is “benign” (a false positive), as the patient will then likely see a doctor who
can provide an expert diagnosis. In other words, we are fine with sacrificing
some precision in the interest of achieving high recall. This is why it is
important not only to look at accuracy, but also the confusion matrix.
However, there is always an easy baseline that you can compare to for any
classification problem: the *majority classifier*. The majority classifier
*always* guesses the majority class label from the training data, regardless of
the predictor variables’ values. It helps to give you a sense of
scale when considering accuracies. If the majority classifier obtains a 90%
accuracy on a problem, then you might hope for your K\-nearest neighbors
classifier to do better than that. If your classifier provides a significant
improvement upon the majority classifier, this means that at least your method
is extracting some useful information from your predictor variables. Be
careful though: improving on the majority classifier does not *necessarily*
mean the classifier is working well enough for your application.
As an example, in the breast cancer data, recall the proportions of benign and malignant
observations in the training data are as follows:
```
cancer_proportions
```
```
## # A tibble: 2 × 3
## Class n percent
## <fct> <int> <dbl>
## 1 Malignant 159 37.3
## 2 Benign 267 62.7
```
Since the benign class represents the majority of the training data,
the majority classifier would *always* predict that a new observation
is benign. The estimated accuracy of the majority classifier is usually
fairly close to the majority class proportion in the training data.
In this case, we would suspect that the majority classifier will have
an accuracy of around 63%.
The K\-nearest neighbors classifier we built does quite a bit better than this,
with an accuracy of 85%.
This means that from the perspective of accuracy,
the K\-nearest neighbors classifier improved quite a bit on the basic
majority classifier. Hooray! But we still need to be cautious; in
this application, it is likely very important not to misdiagnose any malignant tumors to avoid missing
patients who actually need medical care. The confusion matrix above shows
that the classifier does, indeed, misdiagnose a significant number of malignant tumors as benign (7
out of 53 malignant tumors, or 13%!).
Therefore, even though the accuracy improved upon the majority classifier,
our critical analysis suggests that this classifier may not have appropriate performance
for the application.
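Before moving on, we can verify the majority classifier baseline directly. The sketch below uses the `cancer_proportions` and `cancer_test` objects from earlier to predict the majority class for every test observation and compute the resulting accuracy (`majority_class` is just an illustrative name introduced here); we would expect a value near the benign proportion of roughly 63%.
```
# find the majority class label using the training proportions from above
majority_class <- cancer_proportions |>
  slice_max(percent) |>
  pull(Class)

# accuracy of always predicting the majority class on the test set
mean(cancer_test$Class == majority_class)
```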
6\.6 Tuning the classifier
--------------------------
The vast majority of predictive models in statistics and machine learning have
*parameters*. A *parameter*
is a number you have to pick in advance that determines
some aspect of how the model behaves. For example, in the K\-nearest neighbors
classification algorithm, \\(K\\) is a parameter that we have to pick
that determines how many neighbors participate in the class vote.
By picking different values of \\(K\\), we create different classifiers
that make different predictions.
So then, how do we pick the *best* value of \\(K\\), i.e., *tune* the model?
And is it possible to make this selection in a principled way? In this book,
we will focus on maximizing the accuracy of the classifier. Ideally,
we want somehow to maximize the accuracy of our classifier on data *it
hasn’t seen yet*. But we cannot use our test data set in the process of building
our model. So we will play the same trick we did before when evaluating
our classifier: we’ll split our *training data itself* into two subsets,
use one to train the model, and then use the other to evaluate it.
In this section, we will cover the details of this procedure, as well as
how to use it to help you pick a good parameter value for your classifier.
**And remember:** don’t touch the test set during the tuning process. Tuning is a part of model training!
### 6\.6\.1 Cross\-validation
The first step in choosing the parameter \\(K\\) is to be able to evaluate the
classifier using only the training data. If this is possible, then we can compare
the classifier’s performance for different values of \\(K\\)—and pick the best—using
only the training data. As suggested at the beginning of this section, we will
accomplish this by splitting the training data, training on one subset, and evaluating
on the other. The subset of training data used for evaluation is often called the **validation set**.
There is, however, one key difference from the train/test split
that we performed earlier. In particular, we were forced to make only a *single split*
of the data. This is because at the end of the day, we have to produce a single classifier.
If we had multiple different splits of the data into training and testing data,
we would produce multiple different classifiers.
But while we are tuning the classifier, we are free to create multiple classifiers
based on multiple splits of the training data, evaluate them, and then choose a parameter
value based on ***all*** of the different results. If we just split our overall training
data *once*, our best parameter choice will depend strongly on whatever data
was lucky enough to end up in the validation set. Perhaps using multiple
different train/validation splits, we’ll get a better estimate of accuracy,
which will lead to a better choice of the number of neighbors \\(K\\) for the
overall set of training data.
Let’s investigate this idea in R! In particular, we will generate five different train/validation
splits of our overall training data, train five different K\-nearest neighbors
models, and evaluate their accuracy. We will start with just a single
split.
```
# create the 75/25 split of the training data into training and validation
cancer_split <- initial_split(cancer_train, prop = 0.75, strata = Class)
cancer_subtrain <- training(cancer_split)
cancer_validation <- testing(cancer_split)
# recreate the standardization recipe from before
# (since it must be based on the training data)
cancer_recipe <- recipe(Class ~ Smoothness + Concavity,
data = cancer_subtrain) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
# fit the knn model (we can reuse the old knn_spec model from before)
knn_fit <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
fit(data = cancer_subtrain)
# get predictions on the validation data
validation_predicted <- predict(knn_fit, cancer_validation) |>
bind_cols(cancer_validation)
# compute the accuracy
acc <- validation_predicted |>
metrics(truth = Class, estimate = .pred_class) |>
filter(.metric == "accuracy") |>
select(.estimate) |>
pull()
acc
```
```
## [1] 0.8598131
```
The accuracy estimate using this split is 86%.
Now we repeat the above code 4 more times, which generates 4 more splits.
This gives us five different shuffles of the data, and therefore five different values for
accuracy: 86\.0%, 89\.7%, 88\.8%, 86\.0%, 86\.9%. None of these values are
necessarily “more correct” than any other; they’re
just five estimates of the true, underlying accuracy of our classifier built
using our overall training data. We can combine the estimates by taking their
average (here 87%) to try to get a single assessment of our
classifier’s accuracy; this has the effect of reducing the influence of any one
(un)lucky validation set on the estimate.
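For completeness, here is one way you might automate the "repeat the split" idea. This is a sketch rather than the exact code used to produce the five numbers above; your accuracy values will differ depending on the random splits, but the averaging step is the same.
```
# sketch: repeat the train/validation split several times and average
# the resulting accuracy estimates
accs <- c()
for (i in 1:5) {
  split_i <- initial_split(cancer_train, prop = 0.75, strata = Class)
  subtrain_i <- training(split_i)
  validation_i <- testing(split_i)
  recipe_i <- recipe(Class ~ Smoothness + Concavity, data = subtrain_i) |>
    step_scale(all_predictors()) |>
    step_center(all_predictors())
  fit_i <- workflow() |>
    add_recipe(recipe_i) |>
    add_model(knn_spec) |>
    fit(data = subtrain_i)
  acc_i <- predict(fit_i, validation_i) |>
    bind_cols(validation_i) |>
    metrics(truth = Class, estimate = .pred_class) |>
    filter(.metric == "accuracy") |>
    pull(.estimate)
  accs <- c(accs, acc_i)
}
accs
mean(accs)
```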
In practice, we don’t use random splits, but rather use a more structured
splitting procedure so that each observation in the data set is used in a
validation set only a single time. The name for this strategy is
**cross\-validation**. In **cross\-validation**, we split our **overall training
data** into \\(C\\) evenly sized chunks. Then, we iteratively use \\(1\\) chunk as the
**validation set** and combine the remaining \\(C\-1\\) chunks
as the **training set**.
This procedure is shown in Figure [6\.4](classification2.html#fig:06-cv-image).
Here, \\(C\=5\\) different chunks of the data set are used,
resulting in 5 different choices for the **validation set**; we call this
*5\-fold* cross\-validation.
Figure 6\.4: 5\-fold cross\-validation.
To perform 5\-fold cross\-validation in R with `tidymodels`, we use another
function: `vfold_cv`. This function splits our training data into `v` folds
automatically. We set the `strata` argument to the categorical label variable
(here, `Class`) to ensure that the training and validation subsets contain the
right proportions of each category of observation.
```
cancer_vfold <- vfold_cv(cancer_train, v = 5, strata = Class)
cancer_vfold
```
```
## # 5-fold cross-validation using stratification
## # A tibble: 5 × 2
## splits id
## <list> <chr>
## 1 <split [340/86]> Fold1
## 2 <split [340/86]> Fold2
## 3 <split [341/85]> Fold3
## 4 <split [341/85]> Fold4
## 5 <split [342/84]> Fold5
```
Then, when we create our data analysis workflow, we use the `fit_resamples` function
instead of the `fit` function for training. This runs cross\-validation on each
train/validation split.
```
# recreate the standardization recipe from before
# (since it must be based on the training data)
cancer_recipe <- recipe(Class ~ Smoothness + Concavity,
data = cancer_train) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
# fit the knn model (we can reuse the old knn_spec model from before)
knn_fit <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
fit_resamples(resamples = cancer_vfold)
knn_fit
```
```
## # Resampling results
## # 5-fold cross-validation using stratification
## # A tibble: 5 × 4
## splits id .metrics .notes
## <list> <chr> <list> <list>
## 1 <split [340/86]> Fold1 <tibble [2 × 4]> <tibble [0 × 3]>
## 2 <split [340/86]> Fold2 <tibble [2 × 4]> <tibble [0 × 3]>
## 3 <split [341/85]> Fold3 <tibble [2 × 4]> <tibble [0 × 3]>
## 4 <split [341/85]> Fold4 <tibble [2 × 4]> <tibble [0 × 3]>
## 5 <split [342/84]> Fold5 <tibble [2 × 4]> <tibble [0 × 3]>
```
The `collect_metrics` function is used to aggregate the *mean* and *standard error*
of the classifier’s validation accuracy across the folds. You will find results
related to the accuracy in the row with `accuracy` listed under the `.metric` column.
You should consider the mean (`mean`) to be the estimated accuracy, while the standard
error (`std_err`) is a measure of how uncertain we are in the mean value. A detailed treatment of this
is beyond the scope of this chapter; but roughly, if your estimated mean is 0\.89 and standard
error is 0\.02, you can expect the *true* average accuracy of the
classifier to be somewhere roughly between 87% and 91% (although it may
fall outside this range). You may ignore the other columns in the metrics data frame,
as they do not provide any additional insight.
You can also ignore the entire second row with `roc_auc` in the `.metric` column,
as it is beyond the scope of this book.
```
knn_fit |>
collect_metrics()
```
```
## # A tibble: 2 × 6
## .metric .estimator mean n std_err .config
## <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 accuracy binary 0.890 5 0.0180 Preprocessor1_Model1
## 2 roc_auc binary 0.925 5 0.0151 Preprocessor1_Model1
```
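If you want that rough "mean plus or minus one standard error" range computed directly from the collected metrics, a small sketch is:
```
# sketch: a rough (mean +/- 1 standard error) range for the
# cross-validation accuracy estimate
knn_fit |>
  collect_metrics() |>
  filter(.metric == "accuracy") |>
  mutate(lower = mean - std_err, upper = mean + std_err) |>
  select(mean, std_err, lower, upper)
```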
We can choose any number of folds, and typically the more we use the better our
accuracy estimate will be (lower standard error). However, we are limited
by computational power: the
more folds we choose, the more computation it takes, and hence the more time
it takes to run the analysis. So when you do cross\-validation, you need to
consider the size of the data, the speed of the algorithm (e.g., K\-nearest
neighbors), and the speed of your computer. In practice, this is a
trial\-and\-error process, but typically \\(C\\) is chosen to be either 5 or 10\. Here
we will try 10\-fold cross\-validation to see if we get a lower standard error:
```
cancer_vfold <- vfold_cv(cancer_train, v = 10, strata = Class)
vfold_metrics <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
fit_resamples(resamples = cancer_vfold) |>
collect_metrics()
vfold_metrics
```
```
## # A tibble: 2 × 6
## .metric .estimator mean n std_err .config
## <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 accuracy binary 0.890 10 0.0127 Preprocessor1_Model1
## 2 roc_auc binary 0.913 10 0.0150 Preprocessor1_Model1
```
In this case, using 10\-fold instead of 5\-fold cross\-validation did reduce the standard error, although
only by a small amount. In fact, due to the randomness in how the data are split, sometimes
you might even end up with a *higher* standard error when increasing the number of folds!
We can make the reduction in standard error more dramatic by increasing the number of folds
by a large amount. In the following code we show the result when \\(C \= 50\\);
picking such a large number of folds often takes a long time to run in practice,
so we usually stick to 5 or 10\.
```
cancer_vfold_50 <- vfold_cv(cancer_train, v = 50, strata = Class)
vfold_metrics_50 <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
fit_resamples(resamples = cancer_vfold_50) |>
collect_metrics()
vfold_metrics_50
```
```
## # A tibble: 2 × 6
## .metric .estimator mean n std_err .config
## <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 accuracy binary 0.884 50 0.00568 Preprocessor1_Model1
## 2 roc_auc binary 0.926 50 0.0148 Preprocessor1_Model1
```
### 6\.6\.2 Parameter value selection
Using 5\- and 10\-fold cross\-validation, we have estimated that the prediction
accuracy of our classifier is somewhere around 89%.
Whether that is good or not
depends entirely on the downstream application of the data analysis. In the
present situation, we are trying to predict a tumor diagnosis, with expensive,
damaging chemo/radiation therapy or patient death as potential consequences of
misprediction. Hence, we might like to
do better than 89% for this application.
In order to improve our classifier, we have one choice of parameter: the number of
neighbors, \\(K\\). Since cross\-validation helps us evaluate the accuracy of our
classifier, we can use cross\-validation to calculate an accuracy for each value
of \\(K\\) in a reasonable range, and then pick the value of \\(K\\) that gives us the
best accuracy. The `tidymodels` package collection provides a very simple
syntax for tuning models: each parameter in the model to be tuned should be specified
as `tune()` in the model specification rather than given a particular value.
```
knn_spec <- nearest_neighbor(weight_func = "rectangular",
neighbors = tune()) |>
set_engine("kknn") |>
set_mode("classification")
```
Then instead of using `fit` or `fit_resamples`, we will use the `tune_grid` function
to fit the model for each value in a range of parameter values.
In particular, we first create a data frame with a `neighbors`
variable that contains the sequence of values of \\(K\\) to try; below we create the `k_vals`
data frame with the `neighbors` variable containing values from 1 to 100 (stepping by 5\) using
the `seq` function.
Then we pass that data frame to the `grid` argument of `tune_grid`.
```
k_vals <- tibble(neighbors = seq(from = 1, to = 100, by = 5))
knn_results <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
tune_grid(resamples = cancer_vfold, grid = k_vals) |>
collect_metrics()
accuracies <- knn_results |>
filter(.metric == "accuracy")
accuracies
```
```
## # A tibble: 20 × 7
## neighbors .metric .estimator mean n std_err .config
## <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 1 accuracy binary 0.866 10 0.0165 Preprocessor1_Model01
## 2 6 accuracy binary 0.890 10 0.0153 Preprocessor1_Model02
## 3 11 accuracy binary 0.887 10 0.0173 Preprocessor1_Model03
## 4 16 accuracy binary 0.887 10 0.0142 Preprocessor1_Model04
## 5 21 accuracy binary 0.887 10 0.0143 Preprocessor1_Model05
## 6 26 accuracy binary 0.887 10 0.0170 Preprocessor1_Model06
## 7 31 accuracy binary 0.897 10 0.0145 Preprocessor1_Model07
## 8 36 accuracy binary 0.899 10 0.0144 Preprocessor1_Model08
## 9 41 accuracy binary 0.892 10 0.0135 Preprocessor1_Model09
## 10 46 accuracy binary 0.892 10 0.0156 Preprocessor1_Model10
## 11 51 accuracy binary 0.890 10 0.0155 Preprocessor1_Model11
## 12 56 accuracy binary 0.873 10 0.0156 Preprocessor1_Model12
## 13 61 accuracy binary 0.876 10 0.0104 Preprocessor1_Model13
## 14 66 accuracy binary 0.871 10 0.0139 Preprocessor1_Model14
## 15 71 accuracy binary 0.876 10 0.0104 Preprocessor1_Model15
## 16 76 accuracy binary 0.873 10 0.0127 Preprocessor1_Model16
## 17 81 accuracy binary 0.876 10 0.0135 Preprocessor1_Model17
## 18 86 accuracy binary 0.873 10 0.0131 Preprocessor1_Model18
## 19 91 accuracy binary 0.873 10 0.0140 Preprocessor1_Model19
## 20 96 accuracy binary 0.866 10 0.0126 Preprocessor1_Model20
```
We can decide which number of neighbors is best by plotting the accuracy versus \\(K\\),
as shown in Figure [6\.5](classification2.html#fig:06-find-k).
```
accuracy_vs_k <- ggplot(accuracies, aes(x = neighbors, y = mean)) +
geom_point() +
geom_line() +
labs(x = "Neighbors", y = "Accuracy Estimate") +
theme(text = element_text(size = 12))
accuracy_vs_k
```
Figure 6\.5: Plot of estimated accuracy versus the number of neighbors.
We can also obtain the number of neighbors with the highest accuracy
programmatically by accessing the `neighbors` variable in the `accuracies` data
frame where the `mean` variable is highest.
Note that it is still useful to visualize the results as
we did above since this provides additional information on how the model
performance varies.
```
best_k <- accuracies |>
arrange(desc(mean)) |>
head(1) |>
pull(neighbors)
best_k
```
```
## [1] 36
```
Setting the number of
neighbors to \\(K \=\\) 36
provides the highest cross\-validation accuracy estimate (89\.89%). But there is no exact or perfect answer here;
any selection between \\(K \= 30\\) and \\(60\\) would be reasonably justified, as all
of these differ in classifier accuracy by a small amount. Remember: the
values you see on this plot are *estimates* of the true accuracy of our
classifier. Although the \\(K \=\\) 36 value is higher than the others on this plot,
that doesn’t mean the classifier is actually more accurate with this parameter
value! Generally, when selecting \\(K\\) (and other parameters for other predictive
models), we are looking for a value where:
* we get roughly optimal accuracy, so that our model will likely be accurate;
* changing the value to a nearby one (e.g., adding or subtracting a small number) doesn’t decrease accuracy too much, so that our choice is reliable in the presence of uncertainty;
* the cost of training the model is not prohibitive (e.g., in our situation, if \\(K\\) is too large, predicting becomes expensive!).
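To illustrate the second criterion concretely, we can peek at the estimated accuracies for values of \\(K\\) near the chosen one. The short sketch below reuses the `accuracies` and `best_k` objects from above.
```
# sketch: estimated accuracy for values of K near best_k
accuracies |>
  filter(abs(neighbors - best_k) <= 10) |>
  select(neighbors, mean, std_err)
```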
We know that \\(K \=\\) 36
provides the highest estimated accuracy. Further, Figure [6\.5](classification2.html#fig:06-find-k) shows that the estimated accuracy
changes by only a small amount if we increase or decrease \\(K\\) near \\(K \=\\) 36\.
And finally, \\(K \=\\) 36 does not create a prohibitively expensive
computational cost of training. Considering these three points, we would indeed select
\\(K \=\\) 36 for the classifier.
### 6\.6\.3 Under/Overfitting
To build a bit more intuition, what happens if we keep increasing the number of
neighbors \\(K\\)? In fact, the accuracy actually starts to decrease!
Let’s specify a much larger range of values of \\(K\\) to try in the `grid`
argument of `tune_grid`. Figure [6\.6](classification2.html#fig:06-lots-of-ks) shows a plot of estimated accuracy as
we vary \\(K\\) from 1 to almost the number of observations in the training set.
```
k_lots <- tibble(neighbors = seq(from = 1, to = 385, by = 10))
knn_results <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
tune_grid(resamples = cancer_vfold, grid = k_lots) |>
collect_metrics()
accuracies_lots <- knn_results |>
filter(.metric == "accuracy")
accuracy_vs_k_lots <- ggplot(accuracies_lots, aes(x = neighbors, y = mean)) +
geom_point() +
geom_line() +
labs(x = "Neighbors", y = "Accuracy Estimate") +
theme(text = element_text(size = 12))
accuracy_vs_k_lots
```
Figure 6\.6: Plot of accuracy estimate versus number of neighbors for many K values.
**Underfitting:** What is actually happening to our classifier that causes
this? As we increase the number of neighbors, more and more of the training
observations (and those that are farther and farther away from the point) get a
“say” in what the class of a new observation is. This causes a sort of
“averaging effect” to take place, causing the boundary between where our
classifier would predict a tumor to be malignant versus benign to smooth out
and become *simpler.* If you take this to the extreme, setting \\(K\\) to the total
training data set size, then the classifier will always predict the same label
regardless of what the new observation looks like. In general, if the model
*isn’t influenced enough* by the training data, it is said to **underfit** the
data.
**Overfitting:** In contrast, when we decrease the number of neighbors, each
individual data point has a stronger and stronger vote regarding nearby points.
Since the data themselves are noisy, this causes a more “jagged” boundary
corresponding to a *less simple* model. If you take this case to the extreme,
setting \\(K \= 1\\), then the classifier is essentially just matching each new
observation to its closest neighbor in the training data set. This is just as
problematic as the large \\(K\\) case, because the classifier becomes unreliable on
new data: if we had a different training set, the predictions would be
completely different. In general, if the model *is influenced too much* by the
training data, it is said to **overfit** the data.
Figure 6\.7: Effect of K in overfitting and underfitting.
Both overfitting and underfitting are problematic and will lead to a model
that does not generalize well to new data. When fitting a model, we need to strike
a balance between the two. You can see these two effects in Figure
[6\.7](classification2.html#fig:06-decision-grid-K), which shows how the classifier changes as
we set the number of neighbors \\(K\\) to 1, 7, 20, and 300\.
### 6\.6\.4 Evaluating on the test set
Now that we have tuned the K\-NN classifier and set \\(K \=\\) 36,
we are done building the model and it is time to evaluate the quality of its predictions on the held out
test data, as we did earlier in Section [6\.5\.5](classification2.html#eval-performance-cls2).
We first need to retrain the K\-NN classifier
on the entire training data set using the selected number of neighbors.
```
cancer_recipe <- recipe(Class ~ Smoothness + Concavity, data = cancer_train) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = best_k) |>
set_engine("kknn") |>
set_mode("classification")
knn_fit <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
fit(data = cancer_train)
knn_fit
```
```
## ══ Workflow [trained] ══════════════════════════════════════════════════════════
## Preprocessor: Recipe
## Model: nearest_neighbor()
##
## ── Preprocessor ────────────────────────────────────────────────────────────────
## 2 Recipe Steps
##
## • step_scale()
## • step_center()
##
## ── Model ───────────────────────────────────────────────────────────────────────
##
## Call:
## kknn::train.kknn(formula = ..y ~ ., data = data, ks = min_rows(36, data, 5), kernel = ~"rectangular")
##
## Type of response variable: nominal
## Minimal misclassification: 0.1150235
## Best kernel: rectangular
## Best k: 36
```
Then to make predictions and assess the estimated accuracy of the best model on the test data, we use the
`predict` and `metrics` functions as we did earlier in the chapter. We can then pass those predictions to
the `precision`, `recall`, and `conf_mat` functions to assess the estimated precision and recall, and print a confusion matrix.
```
cancer_test_predictions <- predict(knn_fit, cancer_test) |>
bind_cols(cancer_test)
cancer_test_predictions |>
metrics(truth = Class, estimate = .pred_class) |>
filter(.metric == "accuracy")
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 accuracy binary 0.860
```
```
cancer_test_predictions |>
precision(truth = Class, estimate = .pred_class, event_level="first")
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 precision binary 0.8
```
```
cancer_test_predictions |>
recall(truth = Class, estimate = .pred_class, event_level="first")
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 recall binary 0.830
```
```
confusion <- cancer_test_predictions |>
conf_mat(truth = Class, estimate = .pred_class)
confusion
```
```
## Truth
## Prediction Malignant Benign
## Malignant 44 11
## Benign 9 79
```
At first glance, this is a bit surprising: the accuracy of the classifier
has only changed a small amount despite tuning the number of neighbors! Our first model
with \\(K \=\\) 3 (before we knew how to tune) had an estimated accuracy of 85%,
while the tuned model with \\(K \=\\) 36 had an estimated accuracy
of 86%.
Upon examining Figure [6\.5](classification2.html#fig:06-find-k) again to see the
cross validation accuracy estimates for a range of neighbors, this result
becomes much less surprising. From 1 to around 96 neighbors, the cross
validation accuracy estimate varies only by around 3%, with
each estimate having a standard error around 1%.
Since the cross\-validation accuracy estimates the test set accuracy,
the fact that the test set accuracy also doesn’t change much is expected.
Also note that the \\(K \=\\) 3 model had a
precision of 77% and recall of 87%,
while the tuned model had
a precision of 80% and recall of 83%.
Given that the recall decreased—remember, in this application, recall
is critical to making sure we find all the patients with malignant tumors—the tuned model may actually be *less* preferred
in this setting. In any case, it is important to think critically about the result of tuning. Models tuned to
maximize accuracy are not necessarily better for a given application.
6\.7 Summary
------------
Classification algorithms use one or more quantitative variables to predict the
value of another categorical variable. In particular, the K\-nearest neighbors algorithm
does this by first finding the \\(K\\) points in the training data nearest
to the new observation, and then returning the majority class vote from those
training observations. We can tune and evaluate a classifier by splitting the data randomly into a
training and test data set. The training set is used to build the classifier,
and we can tune the classifier (e.g., select the number of neighbors in K\-NN)
by maximizing estimated accuracy via cross\-validation. After we have tuned the
model we can use the test set to estimate its accuracy.
The overall process is summarized in Figure [6\.8](classification2.html#fig:06-overview).
Figure 6\.8: Overview of K\-NN classification.
The overall workflow for performing K\-nearest neighbors classification using `tidymodels` is as follows:
1. Use the `initial_split` function to split the data into a training and test set. Set the `strata` argument to the class label variable. Put the test set aside for now.
2. Use the `vfold_cv` function to split up the training data for cross\-validation.
3. Create a `recipe` that specifies the class label and predictors, as well as preprocessing steps for all variables. Pass the training data as the `data` argument of the recipe.
4. Create a `nearest_neighbor` model specification, with `neighbors = tune()`.
5. Add the recipe and model specification to a `workflow()`, and use the `tune_grid` function on the train/validation splits to estimate the classifier accuracy for a range of \\(K\\) values.
6. Pick a value of \\(K\\) that yields a high accuracy estimate that doesn’t change much if you change \\(K\\) to a nearby value.
7. Make a new model specification for the best parameter value (i.e., \\(K\\)), and retrain the classifier using the `fit` function.
8. Evaluate the estimated accuracy of the classifier on the test set using the `predict` function.
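Putting these steps together, a condensed sketch of the whole pipeline might look like the following. This is only a sketch of the steps above, assuming a `cancer` data frame containing the `Class` label and the predictors used in this chapter; it is not new analysis.
```
# condensed sketch of the K-NN classification workflow with tidymodels
library(tidymodels)

# 1. train/test split (put the test set aside)
cancer_split <- initial_split(cancer, prop = 0.75, strata = Class)
cancer_train <- training(cancer_split)
cancer_test <- testing(cancer_split)

# 2. folds for cross-validation
cancer_vfold <- vfold_cv(cancer_train, v = 10, strata = Class)

# 3. recipe with standardization of the predictors
cancer_recipe <- recipe(Class ~ Smoothness + Concavity, data = cancer_train) |>
  step_scale(all_predictors()) |>
  step_center(all_predictors())

# 4. tunable model specification
knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = tune()) |>
  set_engine("kknn") |>
  set_mode("classification")

# 5. estimate accuracy over a grid of K values
knn_results <- workflow() |>
  add_recipe(cancer_recipe) |>
  add_model(knn_spec) |>
  tune_grid(resamples = cancer_vfold,
            grid = tibble(neighbors = seq(1, 100, by = 5))) |>
  collect_metrics()

# 6. pick a K with a high, stable accuracy estimate
best_k <- knn_results |>
  filter(.metric == "accuracy") |>
  arrange(desc(mean)) |>
  slice(1) |>
  pull(neighbors)

# 7. retrain on the full training set with the chosen K
knn_final_spec <- nearest_neighbor(weight_func = "rectangular",
                                   neighbors = best_k) |>
  set_engine("kknn") |>
  set_mode("classification")
knn_fit <- workflow() |>
  add_recipe(cancer_recipe) |>
  add_model(knn_final_spec) |>
  fit(data = cancer_train)

# 8. evaluate on the held-out test set
predict(knn_fit, cancer_test) |>
  bind_cols(cancer_test) |>
  metrics(truth = Class, estimate = .pred_class)
```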
In these last two chapters, we focused on the K\-nearest neighbors algorithm,
but there are many other methods we could have used to predict a categorical label.
All algorithms have their strengths and weaknesses, and we summarize these for
the K\-NN here.
**Strengths:** K\-nearest neighbors classification
1. is a simple, intuitive algorithm,
2. requires few assumptions about what the data must look like, and
3. works for binary (two\-class) and multi\-class (more than 2 classes) classification problems.
**Weaknesses:** K\-nearest neighbors classification
1. becomes very slow as the training data gets larger,
2. may not perform well with a large number of predictors, and
3. may not perform well when classes are imbalanced.
6\.8 Predictor variable selection
---------------------------------
> **Note:** This section is not required reading for the remainder of the textbook. It is included for those readers
> interested in learning how irrelevant variables can influence the performance of a classifier, and how to
> pick a subset of useful variables to include as predictors.
Another potentially important part of tuning your classifier is to choose which
variables from your data will be treated as predictor variables. Technically, you can choose
anything from using a single predictor variable to using every variable in your
data; the K\-nearest neighbors algorithm accepts any number of
predictors. However, it is **not** the case that using more predictors always
yields better predictions! In fact, sometimes including irrelevant predictors can
actually negatively affect classifier performance.
### 6\.8\.1 The effect of irrelevant predictors
Let’s take a look at an example where K\-nearest neighbors performs
worse when given more predictors to work with. In this example, we modified
the breast cancer data to have only the `Smoothness`, `Concavity`, and
`Perimeter` variables from the original data. Then, we added irrelevant
variables that we created ourselves using a random number generator.
The irrelevant variables each take a value of 0 or 1 with equal probability for each observation, regardless
of what value the `Class` variable takes. In other words, the irrelevant variables have
no meaningful relationship with the `Class` variable.
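The exact code used to build this modified data set is not shown here, but a minimal sketch of the idea, assuming the `cancer` data frame from earlier in the chapter, is an independent fair coin flip per observation (the name `cancer_irrelevant_sketch` is ours, not the book's):
```
# sketch: add irrelevant 0/1 predictors that are independent of Class
set.seed(1)
cancer_irrelevant_sketch <- cancer |>
  select(Class, Smoothness, Concavity, Perimeter) |>
  mutate(Irrelevant1 = rbinom(n(), size = 1, prob = 0.5),
         Irrelevant2 = rbinom(n(), size = 1, prob = 0.5))
```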
```
cancer_irrelevant |>
select(Class, Smoothness, Concavity, Perimeter, Irrelevant1, Irrelevant2)
```
```
## # A tibble: 569 × 6
## Class Smoothness Concavity Perimeter Irrelevant1 Irrelevant2
## <fct> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Malignant 0.118 0.300 123. 1 0
## 2 Malignant 0.0847 0.0869 133. 0 0
## 3 Malignant 0.110 0.197 130 0 0
## 4 Malignant 0.142 0.241 77.6 0 1
## 5 Malignant 0.100 0.198 135. 0 0
## 6 Malignant 0.128 0.158 82.6 1 0
## 7 Malignant 0.0946 0.113 120. 0 1
## 8 Malignant 0.119 0.0937 90.2 1 0
## 9 Malignant 0.127 0.186 87.5 0 0
## 10 Malignant 0.119 0.227 84.0 1 1
## # ℹ 559 more rows
```
Next, we build a sequence of K\-NN classifiers that include `Smoothness`,
`Concavity`, and `Perimeter` as predictor variables, but also increasingly many irrelevant
variables. In particular, we create 6 data sets with 0, 5, 10, 15, 20, and 40 irrelevant predictors.
Then we build a model, tuned via 5\-fold cross\-validation, for each data set.
Figure [6\.9](classification2.html#fig:06-performance-irrelevant-features) shows
the estimated cross\-validation accuracy versus the number of irrelevant predictors. As
we add more irrelevant predictor variables, the estimated accuracy of our
classifier decreases. This is because the irrelevant variables add a random
amount to the distance between each pair of observations; the more irrelevant
variables there are, the more (random) influence they have, and the more they
corrupt the set of nearest neighbors that vote on the class of the new
observation to predict.
Figure 6\.9: Effect of inclusion of irrelevant predictors.
Although the accuracy decreases as expected, one surprising thing about
Figure [6\.9](classification2.html#fig:06-performance-irrelevant-features) is that it shows that the method
still outperforms the baseline majority classifier (with about 63% accuracy)
even with 40 irrelevant variables.
How could that be? Figure [6\.10](classification2.html#fig:06-neighbors-irrelevant-features) provides the answer:
the tuning procedure for the K\-nearest neighbors classifier combats the extra randomness from the irrelevant variables
by increasing the number of neighbors. Of course, because of all the extra noise in the data from the irrelevant
variables, the number of neighbors does not increase smoothly; but the general trend is increasing.
Figure [6\.11](classification2.html#fig:06-fixed-irrelevant-features) corroborates
this evidence; if we fix the number of neighbors to \\(K\=3\\), the accuracy falls off more quickly.
Figure 6\.10: Tuned number of neighbors for varying number of irrelevant predictors.
Figure 6\.11: Accuracy versus number of irrelevant predictors for tuned and untuned number of neighbors.
### 6\.8\.2 Finding a good subset of predictors
So then, if it is not ideal to use all of our variables as predictors without consideration, how
do we choose which variables we *should* use? A simple method is to rely on your scientific understanding
of the data to tell you which variables are not likely to be useful predictors. For example, in the cancer
data that we have been studying, the `ID` variable is just a unique identifier for the observation.
As it is not related to any measured property of the cells, the `ID` variable should therefore not be used
as a predictor. That is, of course, a very clear\-cut case. But the decision for the remaining variables
is less obvious, as all seem like reasonable candidates. It
is not clear which subset of them will create the best classifier. One could use visualizations and
other exploratory analyses to try to help understand which variables are potentially relevant, but
this process is both time\-consuming and error\-prone when there are many variables to consider.
Therefore we need a more systematic and programmatic way of choosing variables.
This is a very difficult problem to solve in
general, and there are a number of methods that have been developed that apply
in particular cases of interest. Here we will discuss two basic
selection methods as an introduction to the topic. See the additional resources at the end of
this chapter to find out where you can learn more about variable selection, including more advanced methods.
The first idea you might think of for a systematic way to select predictors
is to try all possible subsets of predictors and then pick the set that results in the “best” classifier.
This procedure is indeed a well\-known variable selection method referred to
as *best subset selection* ([Beale, Kendall, and Mann 1967](#ref-bealesubset); [Hocking and Leslie 1967](#ref-hockingsubset)).
In particular, you
1. create a separate model for every possible subset of predictors,
2. tune each one using cross\-validation, and
3. pick the subset of predictors that gives you the highest cross\-validation accuracy.
Best subset selection is applicable to any classification method (K\-NN or otherwise).
However, it becomes very slow when you have even a moderate
number of predictors to choose from (say, around 10\). This is because the number of possible predictor subsets
grows very quickly with the number of predictors, and you have to train the model (itself
a slow process!) for each one. For example, if we have 2 predictors—let’s call
them A and B—then we have 3 variable sets to try: A alone, B alone, and finally A
and B together. If we have 3 predictors—A, B, and C—then we have 7
to try: A, B, C, AB, BC, AC, and ABC. In general, the number of models
we have to train for \\(m\\) predictors is \\(2^m\-1\\); in other words, when we
get to 10 predictors we have over *one thousand* models to train, and
at 20 predictors we have over *one million* models to train!
So although it is a simple method, best subset selection is usually too computationally
expensive to use in practice.
Another idea is to iteratively build up a model by adding one predictor variable
at a time. This method—known as *forward selection* ([Efroymson 1966](#ref-forwardefroymson); [Draper and Smith 1966](#ref-forwarddraper))—is also widely
applicable and fairly straightforward. It involves the following steps:
1. Start with a model having no predictors.
2. Run the following 3 steps until you run out of predictors:
1. For each unused predictor, add it to the model to form a *candidate model*.
2. Tune all of the candidate models.
3. Update the model to be the candidate model with the highest cross\-validation accuracy.
3. Select the model that provides the best trade\-off between accuracy and simplicity.
Say you have \\(m\\) total predictors to work with. In the first iteration, you have to make
\\(m\\) candidate models, each with 1 predictor. Then in the second iteration, you have
to make \\(m\-1\\) candidate models, each with 2 predictors (the one you chose before and a new one).
This pattern continues for as many iterations as you want. If you run the method
all the way until you run out of predictors to choose, you will end up training
\\(\\frac{1}{2}m(m\+1\)\\) separate models. This is a *big* improvement from the \\(2^m\-1\\)
models that best subset selection requires you to train! For example, while best subset selection requires
training over 1000 candidate models with 10 predictors, forward selection requires training only 55 candidate models.
Therefore we will continue the rest of this section using forward selection.
> **Note:** One word of caution before we move on. Every additional model that you train
> increases the likelihood that you will get unlucky and stumble
> on a model that has a high cross\-validation accuracy estimate, but a low true
> accuracy on the test data and other future observations.
> Since forward selection involves training a lot of models, you run a fairly
> high risk of this happening. To keep this risk low, only use forward selection
> when you have a large amount of data and a relatively small total number of
> predictors. More advanced methods do not suffer from this
> problem as much; see the additional resources at the end of this chapter for
> where to learn more about advanced predictor selection methods.
### 6\.8\.3 Forward selection in R
We now turn to implementing forward selection in R.
Unfortunately there is no built\-in way to do this using the `tidymodels` framework,
so we will have to code it ourselves. First we will use the `select` function to extract a smaller set of predictors
to work with in this illustrative example—`Smoothness`, `Concavity`, `Perimeter`, `Irrelevant1`, `Irrelevant2`, and `Irrelevant3`—as
well as the `Class` variable as the label. We will also extract the column names for the full set of predictors.
```
cancer_subset <- cancer_irrelevant |>
select(Class,
Smoothness,
Concavity,
Perimeter,
Irrelevant1,
Irrelevant2,
Irrelevant3)
names <- colnames(cancer_subset |> select(-Class))
cancer_subset
```
```
## # A tibble: 569 × 7
## Class Smoothness Concavity Perimeter Irrelevant1 Irrelevant2 Irrelevant3
## <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Malignant 0.118 0.300 123. 1 0 1
## 2 Malignant 0.0847 0.0869 133. 0 0 0
## 3 Malignant 0.110 0.197 130 0 0 0
## 4 Malignant 0.142 0.241 77.6 0 1 0
## 5 Malignant 0.100 0.198 135. 0 0 0
## 6 Malignant 0.128 0.158 82.6 1 0 1
## 7 Malignant 0.0946 0.113 120. 0 1 1
## 8 Malignant 0.119 0.0937 90.2 1 0 0
## 9 Malignant 0.127 0.186 87.5 0 0 1
## 10 Malignant 0.119 0.227 84.0 1 1 0
## # ℹ 559 more rows
```
The key idea of the forward selection code is to use the `paste` function (which concatenates strings
separated by spaces) to create a model formula for each subset of predictors for which we want to build a model.
The `collapse` argument tells `paste` what to put between the items in the list;
to make a formula, we need to put a `+` symbol between each variable.
As an example, let’s make a model formula for all the predictors,
which should output something like
`Class ~ Smoothness + Concavity + Perimeter + Irrelevant1 + Irrelevant2 + Irrelevant3`:
```
example_formula <- paste("Class", "~", paste(names, collapse="+"))
example_formula
```
```
## [1] "Class ~ Smoothness+Concavity+Perimeter+Irrelevant1+Irrelevant2+Irrelevant3"
```
Finally, we need to write some code that performs the task of sequentially
finding the best predictor to add to the model.
If you recall the end of the wrangling chapter, we mentioned
that sometimes one needs more flexible forms of iteration than what
we have used earlier, and in these cases one typically resorts to
a *for loop*; see [the chapter on iteration](https://r4ds.had.co.nz/iteration.html) in *R for Data Science* ([Wickham and Grolemund 2016](#ref-wickham2016r)).
Here we will use two for loops:
one over increasing predictor set sizes
(where you see `for (i in 1:length(names))` below),
and another to check which predictor to add in each round (where you see `for (j in 1:length(names))` below).
For each set of predictors to try, we construct a model formula,
pass it into a `recipe`, build a `workflow` that tunes
a K\-NN classifier using 5\-fold cross\-validation,
and finally records the estimated accuracy.
```
# create an empty tibble to store the results
accuracies <- tibble(size = integer(),
model_string = character(),
accuracy = numeric())
# create a model specification
knn_spec <- nearest_neighbor(weight_func = "rectangular",
neighbors = tune()) |>
set_engine("kknn") |>
set_mode("classification")
# create a 5-fold cross-validation object
cancer_vfold <- vfold_cv(cancer_subset, v = 5, strata = Class)
# store the total number of predictors
n_total <- length(names)
# stores selected predictors
selected <- c()
# for every size from 1 to the total number of predictors
for (i in 1:n_total) {
# for every predictor still not added yet
accs <- list()
models <- list()
for (j in 1:length(names)) {
# create a model string for this combination of predictors
preds_new <- c(selected, names[[j]])
model_string <- paste("Class", "~", paste(preds_new, collapse="+"))
# create a recipe from the model string
cancer_recipe <- recipe(as.formula(model_string),
data = cancer_subset) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
# tune the K-NN classifier with these predictors,
# and collect the accuracy for the best K
acc <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
tune_grid(resamples = cancer_vfold, grid = 10) |>
collect_metrics() |>
filter(.metric == "accuracy") |>
summarize(mx = max(mean))
acc <- acc$mx |> unlist()
# add this result to the dataframe
accs[[j]] <- acc
models[[j]] <- model_string
}
jstar <- which.max(unlist(accs))
accuracies <- accuracies |>
add_row(size = i,
model_string = models[[jstar]],
accuracy = accs[[jstar]])
selected <- c(selected, names[[jstar]])
names <- names[-jstar]
}
accuracies
```
```
## # A tibble: 6 × 3
## size model_string accuracy
## <int> <chr> <dbl>
## 1 1 Class ~ Perimeter 0.896
## 2 2 Class ~ Perimeter+Concavity 0.916
## 3 3 Class ~ Perimeter+Concavity+Smoothness 0.931
## 4 4 Class ~ Perimeter+Concavity+Smoothness+Irrelevant1 0.928
## 5 5 Class ~ Perimeter+Concavity+Smoothness+Irrelevant1+Irrelevant3 0.924
## 6 6 Class ~ Perimeter+Concavity+Smoothness+Irrelevant1+Irrelevant3… 0.902
```
Interesting! The forward selection procedure first added the three meaningful variables `Perimeter`,
`Concavity`, and `Smoothness`, followed by the irrelevant variables. Figure [6\.12](classification2.html#fig:06-fwdsel-3)
visualizes the accuracy versus the number of predictors in the model. You can see that
as meaningful predictors are added, the estimated accuracy increases substantially; and as you add irrelevant
variables, the accuracy either exhibits small fluctuations or decreases as the model attempts to tune the number
of neighbors to account for the extra noise. In order to pick the right model from the sequence, you have
to balance high accuracy and model simplicity (i.e., having fewer predictors and a lower chance of overfitting). The
way to find that balance is to look for the *elbow*
in Figure [6\.12](classification2.html#fig:06-fwdsel-3), i.e., the place on the plot where the accuracy stops increasing dramatically and
levels off or begins to decrease. The elbow in Figure [6\.12](classification2.html#fig:06-fwdsel-3) appears to occur at the model with
3 predictors; after that point the accuracy levels off. So here the right trade\-off of accuracy and number of predictors
occurs with 3 variables: `Class ~ Perimeter + Concavity + Smoothness`. In other words, we have successfully removed irrelevant
predictors from the model! It is always worth remembering, however, that what cross\-validation gives you
is an *estimate* of the true accuracy; you have to use your judgement when looking at this plot to decide
where the elbow occurs, and whether adding a variable provides a meaningful increase in accuracy.
Figure 6\.12: Estimated accuracy versus the number of predictors for the sequence of models built using forward selection.
> **Note:** Since the choice of which variables to include as predictors is
> part of tuning your classifier, you *cannot use your test data* for this
> process!
### 6\.8\.1 The effect of irrelevant predictors
Let’s take a look at an example where K\-nearest neighbors performs
worse when given more predictors to work with. In this example, we modified
the breast cancer data to have only the `Smoothness`, `Concavity`, and
`Perimeter` variables from the original data. Then, we added irrelevant
variables that we created ourselves using a random number generator.
The irrelevant variables each take a value of 0 or 1 with equal probability for each observation, regardless
of what value the `Class` variable takes. In other words, the irrelevant variables have
no meaningful relationship with the `Class` variable.
```
cancer_irrelevant |>
select(Class, Smoothness, Concavity, Perimeter, Irrelevant1, Irrelevant2)
```
```
## # A tibble: 569 × 6
## Class Smoothness Concavity Perimeter Irrelevant1 Irrelevant2
## <fct> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Malignant 0.118 0.300 123. 1 0
## 2 Malignant 0.0847 0.0869 133. 0 0
## 3 Malignant 0.110 0.197 130 0 0
## 4 Malignant 0.142 0.241 77.6 0 1
## 5 Malignant 0.100 0.198 135. 0 0
## 6 Malignant 0.128 0.158 82.6 1 0
## 7 Malignant 0.0946 0.113 120. 0 1
## 8 Malignant 0.119 0.0937 90.2 1 0
## 9 Malignant 0.127 0.186 87.5 0 0
## 10 Malignant 0.119 0.227 84.0 1 1
## # ℹ 559 more rows
```
Next, we build a sequence of K\-NN classifiers that include `Smoothness`,
`Concavity`, and `Perimeter` as predictor variables, but also increasingly many irrelevant
variables. In particular, we create 6 data sets with 0, 5, 10, 15, 20, and 40 irrelevant predictors.
Then we build a model, tuned via 5\-fold cross\-validation, for each data set.
Figure [6\.9](classification2.html#fig:06-performance-irrelevant-features) shows
the estimated cross\-validation accuracy versus the number of irrelevant predictors. As
we add more irrelevant predictor variables, the estimated accuracy of our
classifier decreases. This is because the irrelevant variables add a random
amount to the distance between each pair of observations; the more irrelevant
variables there are, the more (random) influence they have, and the more they
corrupt the set of nearest neighbors that vote on the class of the new
observation to predict.
Figure 6\.9: Effect of inclusion of irrelevant predictors.
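The random contribution to the distance can be seen with a tiny, self\-contained computation (a hypothetical example, not part of the book's analysis): two observations that agree exactly on a meaningful predictor still end up a nonzero distance apart once differing 0/1 values for irrelevant predictors are appended.

```
# hypothetical illustration: appending 0/1 "irrelevant" values pushes two otherwise
# identical observations apart in Euclidean distance
meaningful <- c(0.5, 0.5)        # both observations share the same meaningful value
irrelevant_a <- c(1, 0, 1, 1, 0) # stand-ins for random 0/1 draws for observation A
irrelevant_b <- c(0, 0, 1, 0, 1) # stand-ins for random 0/1 draws for observation B
sqrt(sum((c(meaningful[1], irrelevant_a) - c(meaningful[2], irrelevant_b))^2))
```

Here the distance is \\(\\sqrt{3}\\) even though the meaningful predictor values are identical; with more irrelevant predictors, this spurious distance tends to grow.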
Although the accuracy decreases as expected, one surprising thing about
Figure [6\.9](classification2.html#fig:06-performance-irrelevant-features) is that it shows that the method
still outperforms the baseline majority classifier (with about 63% accuracy)
even with 40 irrelevant variables.
How could that be? Figure [6\.10](classification2.html#fig:06-neighbors-irrelevant-features) provides the answer:
the tuning procedure for the K\-nearest neighbors classifier combats the extra randomness from the irrelevant variables
by increasing the number of neighbors. Of course, because of all the extra noise in the data from the irrelevant
variables, the number of neighbors does not increase smoothly; but the general trend is increasing.
Figure [6\.11](classification2.html#fig:06-fixed-irrelevant-features) corroborates
this evidence; if we fix the number of neighbors to \\(K\=3\\), the accuracy falls off more quickly.
Figure 6\.10: Tuned number of neighbors for varying number of irrelevant predictors.
Figure 6\.11: Accuracy versus number of irrelevant predictors for tuned and untuned number of neighbors.
### 6\.8\.2 Finding a good subset of predictors
So then, if it is not ideal to use all of our variables as predictors without consideration, how
do we choose which variables we *should* use? A simple method is to rely on your scientific understanding
of the data to tell you which variables are not likely to be useful predictors. For example, in the cancer
data that we have been studying, the `ID` variable is just a unique identifier for the observation.
As it is not related to any measured property of the cells, the `ID` variable should therefore not be used
as a predictor. That is, of course, a very clear\-cut case. But the decision for the remaining variables
is less obvious, as all seem like reasonable candidates. It
is not clear which subset of them will create the best classifier. One could use visualizations and
other exploratory analyses to try to help understand which variables are potentially relevant, but
this process is both time\-consuming and error\-prone when there are many variables to consider.
Therefore we need a more systematic and programmatic way of choosing variables.
This is a very difficult problem to solve in
general, and there are a number of methods that have been developed that apply
in particular cases of interest. Here we will discuss two basic
selection methods as an introduction to the topic. See the additional resources at the end of
this chapter to find out where you can learn more about variable selection, including more advanced methods.
The first idea you might think of for a systematic way to select predictors
is to try all possible subsets of predictors and then pick the set that results in the “best” classifier.
This procedure is indeed a well\-known variable selection method referred to
as *best subset selection* ([Beale, Kendall, and Mann 1967](#ref-bealesubset); [Hocking and Leslie 1967](#ref-hockingsubset)).
In particular, you
1. create a separate model for every possible subset of predictors,
2. tune each one using cross\-validation, and
3. pick the subset of predictors that gives you the highest cross\-validation accuracy.
Best subset selection is applicable to any classification method (K\-NN or otherwise).
However, it becomes very slow when you have even a moderate
number of predictors to choose from (say, around 10\). This is because the number of possible predictor subsets
grows very quickly with the number of predictors, and you have to train the model (itself
a slow process!) for each one. For example, if we have 2 predictors—let’s call
them A and B—then we have 3 variable sets to try: A alone, B alone, and finally A
and B together. If we have 3 predictors—A, B, and C—then we have 7
to try: A, B, C, AB, BC, AC, and ABC. In general, the number of models
we have to train for \\(m\\) predictors is \\(2^m\-1\\); in other words, when we
get to 10 predictors we have over *one thousand* models to train, and
at 20 predictors we have over *one million* models to train!
So although it is a simple method, best subset selection is usually too computationally
expensive to use in practice.
Another idea is to iteratively build up a model by adding one predictor variable
at a time. This method—known as *forward selection* ([Efroymson 1966](#ref-forwardefroymson); [Draper and Smith 1966](#ref-forwarddraper))—is also widely
applicable and fairly straightforward. It involves the following steps:
1. Start with a model having no predictors.
2. Run the following 3 steps until you run out of predictors:
1. For each unused predictor, add it to the model to form a *candidate model*.
2. Tune all of the candidate models.
3. Update the model to be the candidate model with the highest cross\-validation accuracy.
3. Select the model that provides the best trade\-off between accuracy and simplicity.
Say you have \\(m\\) total predictors to work with. In the first iteration, you have to make
\\(m\\) candidate models, each with 1 predictor. Then in the second iteration, you have
to make \\(m\-1\\) candidate models, each with 2 predictors (the one you chose before and a new one).
This pattern continues for as many iterations as you want. If you run the method
all the way until you run out of predictors to choose, you will end up training
\\(\\frac{1}{2}m(m\+1\)\\) separate models. This is a *big* improvement from the \\(2^m\-1\\)
models that best subset selection requires you to train! For example, while best subset selection requires
training over 1000 candidate models with 10 predictors, forward selection requires training only 55 candidate models.
Therefore we will continue the rest of this section using forward selection.
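To make these counts concrete, here is a quick back\-of\-the\-envelope calculation (an illustration only, assuming the `tidyverse` is loaded as elsewhere in this chapter) of how many models each method trains for a few values of \\(m\\):

```
# number of models trained by best subset selection (2^m - 1) versus
# forward selection run to completion (m * (m + 1) / 2)
m <- c(2, 3, 10, 20)
tibble(m = m,
       best_subset = 2^m - 1,
       forward = m * (m + 1) / 2)
```

For \\(m \= 10\\) this reproduces the numbers quoted above: 1023 models for best subset selection versus 55 for forward selection.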
> **Note:** One word of caution before we move on. Every additional model that you train
> increases the likelihood that you will get unlucky and stumble
> on a model that has a high cross\-validation accuracy estimate, but a low true
> accuracy on the test data and other future observations.
> Since forward selection involves training a lot of models, you run a fairly
> high risk of this happening. To keep this risk low, only use forward selection
> when you have a large amount of data and a relatively small total number of
> predictors. More advanced methods do not suffer from this
> problem as much; see the additional resources at the end of this chapter for
> where to learn more about advanced predictor selection methods.
### 6\.8\.3 Forward selection in R
We now turn to implementing forward selection in R.
Unfortunately there is no built\-in way to do this using the `tidymodels` framework,
so we will have to code it ourselves. First we will use the `select` function to extract a smaller set of predictors
to work with in this illustrative example—`Smoothness`, `Concavity`, `Perimeter`, `Irrelevant1`, `Irrelevant2`, and `Irrelevant3`—as
well as the `Class` variable as the label. We will also extract the column names for the full set of predictors.
```
cancer_subset <- cancer_irrelevant |>
select(Class,
Smoothness,
Concavity,
Perimeter,
Irrelevant1,
Irrelevant2,
Irrelevant3)
names <- colnames(cancer_subset |> select(-Class))
cancer_subset
```
```
## # A tibble: 569 × 7
## Class Smoothness Concavity Perimeter Irrelevant1 Irrelevant2 Irrelevant3
## <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Malignant 0.118 0.300 123. 1 0 1
## 2 Malignant 0.0847 0.0869 133. 0 0 0
## 3 Malignant 0.110 0.197 130 0 0 0
## 4 Malignant 0.142 0.241 77.6 0 1 0
## 5 Malignant 0.100 0.198 135. 0 0 0
## 6 Malignant 0.128 0.158 82.6 1 0 1
## 7 Malignant 0.0946 0.113 120. 0 1 1
## 8 Malignant 0.119 0.0937 90.2 1 0 0
## 9 Malignant 0.127 0.186 87.5 0 0 1
## 10 Malignant 0.119 0.227 84.0 1 1 0
## # ℹ 559 more rows
```
The key idea of the forward selection code is to use the `paste` function (which concatenates strings
separated by spaces) to create a model formula for each subset of predictors for which we want to build a model.
The `collapse` argument tells `paste` what to put between the items in the list;
to make a formula, we need to put a `+` symbol between each variable.
As an example, let’s make a model formula for all the predictors,
which should output something like
`Class ~ Smoothness + Concavity + Perimeter + Irrelevant1 + Irrelevant2 + Irrelevant3`:
```
example_formula <- paste("Class", "~", paste(names, collapse="+"))
example_formula
```
```
## [1] "Class ~ Smoothness+Concavity+Perimeter+Irrelevant1+Irrelevant2+Irrelevant3"
```
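One small detail before the loop: `recipe` expects a formula object rather than a character string, which is why the code below wraps each model string in `as.formula`. For example, converting the string we just built gives back the formula `Class ~ Smoothness + Concavity + Perimeter + Irrelevant1 + Irrelevant2 + Irrelevant3`:

```
as.formula(example_formula)
```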
Finally, we need to write some code that performs the task of sequentially
finding the best predictor to add to the model.
Recall that at the end of the wrangling chapter, we mentioned
that sometimes one needs more flexible forms of iteration than what
we have used earlier, and in these cases one typically resorts to
a *for loop*; see [the chapter on iteration](https://r4ds.had.co.nz/iteration.html) in *R for Data Science* ([Wickham and Grolemund 2016](#ref-wickham2016r)).
Here we will use two for loops:
one over increasing predictor set sizes
(where you see `for (i in 1:n_total)` below),
and another to check which predictor to add in each round (where you see `for (j in 1:length(names))` below).
For each set of predictors to try, we construct a model formula,
pass it into a `recipe`, build a `workflow` that tunes
a K\-NN classifier using 5\-fold cross\-validation,
and finally record the estimated accuracy.
```
# create an empty tibble to store the results
accuracies <- tibble(size = integer(),
model_string = character(),
accuracy = numeric())
# create a model specification
knn_spec <- nearest_neighbor(weight_func = "rectangular",
neighbors = tune()) |>
set_engine("kknn") |>
set_mode("classification")
# create a 5-fold cross-validation object
cancer_vfold <- vfold_cv(cancer_subset, v = 5, strata = Class)
# store the total number of predictors
n_total <- length(names)
# stores selected predictors
selected <- c()
# for every size from 1 to the total number of predictors
for (i in 1:n_total) {
# for every predictor still not added yet
accs <- list()
models <- list()
for (j in 1:length(names)) {
# create a model string for this combination of predictors
preds_new <- c(selected, names[[j]])
model_string <- paste("Class", "~", paste(preds_new, collapse="+"))
# create a recipe from the model string
cancer_recipe <- recipe(as.formula(model_string),
data = cancer_subset) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
# tune the K-NN classifier with these predictors,
# and collect the accuracy for the best K
acc <- workflow() |>
add_recipe(cancer_recipe) |>
add_model(knn_spec) |>
tune_grid(resamples = cancer_vfold, grid = 10) |>
collect_metrics() |>
filter(.metric == "accuracy") |>
summarize(mx = max(mean))
acc <- acc$mx |> unlist()
# add this result to the dataframe
accs[[j]] <- acc
models[[j]] <- model_string
}
jstar <- which.max(unlist(accs))
accuracies <- accuracies |>
add_row(size = i,
model_string = models[[jstar]],
accuracy = accs[[jstar]])
selected <- c(selected, names[[jstar]])
names <- names[-jstar]
}
accuracies
```
```
## # A tibble: 6 × 3
## size model_string accuracy
## <int> <chr> <dbl>
## 1 1 Class ~ Perimeter 0.896
## 2 2 Class ~ Perimeter+Concavity 0.916
## 3 3 Class ~ Perimeter+Concavity+Smoothness 0.931
## 4 4 Class ~ Perimeter+Concavity+Smoothness+Irrelevant1 0.928
## 5 5 Class ~ Perimeter+Concavity+Smoothness+Irrelevant1+Irrelevant3 0.924
## 6 6 Class ~ Perimeter+Concavity+Smoothness+Irrelevant1+Irrelevant3… 0.902
```
Interesting! The forward selection procedure first added the three meaningful variables `Perimeter`,
`Concavity`, and `Smoothness`, followed by the irrelevant variables. Figure [6\.12](classification2.html#fig:06-fwdsel-3)
visualizes the accuracy versus the number of predictors in the model. You can see that
as meaningful predictors are added, the estimated accuracy increases substantially; and as you add irrelevant
variables, the accuracy either exhibits small fluctuations or decreases as the model attempts to tune the number
of neighbors to account for the extra noise. In order to pick the right model from the sequence, you have
to balance high accuracy and model simplicity (i.e., having fewer predictors and a lower chance of overfitting). The
way to find that balance is to look for the *elbow*
in Figure [6\.12](classification2.html#fig:06-fwdsel-3), i.e., the place on the plot where the accuracy stops increasing dramatically and
levels off or begins to decrease. The elbow in Figure [6\.12](classification2.html#fig:06-fwdsel-3) appears to occur at the model with
3 predictors; after that point the accuracy levels off. So here the right trade\-off of accuracy and number of predictors
occurs with 3 variables: `Class ~ Perimeter + Concavity + Smoothness`. In other words, we have successfully removed irrelevant
predictors from the model! It is always worth remembering, however, that what cross\-validation gives you
is an *estimate* of the true accuracy; you have to use your judgement when looking at this plot to decide
where the elbow occurs, and whether adding a variable provides a meaningful increase in accuracy.
Figure 6\.12: Estimated accuracy versus the number of predictors for the sequence of models built using forward selection.
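A plot like Figure [6\.12](classification2.html#fig:06-fwdsel-3) can be produced directly from the `accuracies` data frame; a minimal sketch (an illustration, not the book's figure code, assuming the `tidyverse` is loaded) is:

```
# a minimal sketch of an accuracy-versus-size plot like Figure 6.12
ggplot(accuracies, aes(x = size, y = accuracy)) +
  geom_point() +
  geom_line() +
  labs(x = "Number of predictors", y = "Estimated accuracy")
```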
> **Note:** Since the choice of which variables to include as predictors is
> part of tuning your classifier, you *cannot use your test data* for this
> process!
6\.9 Exercises
--------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Classification II: evaluation and tuning” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
6\.10 Additional resources
--------------------------
* The [`tidymodels` website](https://tidymodels.org/packages) is an excellent
reference for more details on, and advanced usage of, the functions and
packages in the past two chapters. Aside from that, it also has a [nice
beginner’s tutorial](https://www.tidymodels.org/start/) and [an extensive list
of more advanced examples](https://www.tidymodels.org/learn/) that you can use
to continue learning beyond the scope of this book. It’s worth noting that the
`tidymodels` package does a lot more than just classification, and so the
examples on the website similarly go beyond classification as well. In the next
two chapters, you’ll learn about another kind of predictive modeling setting,
so it might be worth visiting the website only after reading through those
chapters.
* *An Introduction to Statistical Learning* ([James et al. 2013](#ref-james2013introduction)) provides
a great next stop in the process of
learning about classification. Chapter 4 discusses additional basic techniques
for classification that we do not cover, such as logistic regression, linear
discriminant analysis, and naive Bayes. Chapter 5 goes into much more detail
about cross\-validation. Chapters 8 and 9 cover decision trees and support
vector machines, two very popular but more advanced classification methods.
Finally, Chapter 6 covers a number of methods for selecting predictor
variables. Note that while this book is still a very accessible introductory
text, it requires a bit more mathematical background than we assume in this book.
Chapter 7 Regression I: K\-nearest neighbors
============================================
7\.1 Overview
-------------
This chapter continues our foray into answering predictive questions.
Here we will focus on predicting *numerical* variables
and will use *regression* to perform this task.
This is unlike the past two chapters, which focused on predicting categorical
variables via classification. However, regression does have many similarities
to classification: for example, just as in the case of classification,
we will split our data into training, validation, and test sets, we will
use `tidymodels` workflows, we will use a K\-nearest neighbors (K\-NN)
approach to make predictions, and we will use cross\-validation to choose K.
Because of how similar these procedures are, make sure to read Chapters
[5](classification1.html#classification1) and [6](classification2.html#classification2) before reading
this one—we will move a little bit faster here with the
concepts that have already been covered.
This chapter will primarily focus on the case where there is a single predictor,
but the end of the chapter shows how to perform
regression with more than one predictor variable, i.e., *multivariable regression*.
It is important to note that regression
can also be used to answer inferential and causal questions;
however, that is beyond the scope of this book.
7\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Recognize situations where a regression analysis would be appropriate for making predictions.
* Explain the K\-nearest neighbors (K\-NN) regression algorithm and describe how it differs from K\-NN classification.
* Interpret the output of a K\-NN regression.
* In a data set with two or more variables, perform K\-nearest neighbors regression in R.
* Evaluate K\-NN regression prediction quality in R using the root mean squared prediction error (RMSPE).
* Estimate the RMSPE in R using cross\-validation or a test set.
* Choose the number of neighbors in K\-nearest neighbors regression by minimizing estimated cross\-validation RMSPE.
* Describe underfitting and overfitting, and relate them to the number of neighbors in K\-nearest neighbors regression.
* Describe the advantages and disadvantages of K\-nearest neighbors regression.
7\.3 The regression problem
---------------------------
Regression, like classification, is a predictive problem setting where we want
to use past information to predict future observations. But in the case of
regression, the goal is to predict *numerical* values instead of *categorical* values.
The variable that you want to predict is often called the *response variable*.
For example, we could try to use the number of hours a person spends on
exercise each week to predict their race time in the annual Boston marathon. As
another example, we could try to use the size of a house to
predict its sale price. Both of these response variables—race time and sale price—are
numerical, and so predicting them given past data is considered a regression problem.
Just like in the
classification setting, there are many possible methods that we can use
to predict numerical response variables. In this chapter we will
focus on the **K\-nearest neighbors** algorithm ([Fix and Hodges 1951](#ref-knnfix); [Cover and Hart 1967](#ref-knncover)), and in the next chapter
we will study **linear regression**.
In your future studies, you might encounter regression trees, splines,
and general local regression methods; see the additional resources
section at the end of the next chapter for where to begin learning more about
these other methods.
Many of the concepts from classification map over to the setting of regression. For example,
a regression model predicts a new observation’s response variable based on the response variables
for similar observations in the data set of past observations. When building a regression model,
we first split the data into training and test sets, in order to ensure that we assess the performance
of our method on observations not seen during training. And finally, we can use cross\-validation to evaluate different
choices of model parameters (e.g., K in a K\-nearest neighbors model). The major difference
is that we are now predicting numerical variables instead of categorical variables.
> **Note:** You can usually tell whether a variable is numerical or
> categorical—and therefore whether you need to perform regression or
> classification—by taking the response variable for two observations X and Y from your data,
> and asking the question, “is response variable X *more* than response
> variable Y?” If the variable is categorical, the question will make no sense.
> (Is blue more than red? Is benign more than malignant?) If the variable is
> numerical, it will make sense. (Is 1\.5 hours more than 2\.25 hours? Is
> $500,000 more than $400,000?) Be careful when applying this heuristic,
> though: sometimes categorical variables will be encoded as numbers in your
> data (e.g., “1” represents “benign”, and “0” represents “malignant”). In
> these cases you have to ask the question about the *meaning* of the labels
> (“benign” and “malignant”), not their values (“1” and “0”).
7\.4 Exploring a data set
-------------------------
In this chapter and the next, we will study
a data set of
[932 real estate transactions in Sacramento, California](https://support.spatialkey.com/spatialkey-sample-csv-data/)
originally reported in the *Sacramento Bee* newspaper.
We first need to formulate a precise question that
we want to answer. In this example, our question is again predictive:
Can we use the size of a house in the Sacramento, CA area to predict
its sale price? A rigorous, quantitative answer to this question might help
a realtor advise a client as to whether the price of a particular listing
is fair, or perhaps how to set the price of a new listing.
We begin the analysis by loading and examining the data, and setting the seed value.
```
library(tidyverse)
library(tidymodels)
library(gridExtra)
set.seed(5)
sacramento <- read_csv("data/sacramento.csv")
sacramento
```
```
## # A tibble: 932 × 9
## city zip beds baths sqft type price latitude longitude
## <chr> <chr> <dbl> <dbl> <dbl> <chr> <dbl> <dbl> <dbl>
## 1 SACRAMENTO z95838 2 1 836 Residential 59222 38.6 -121.
## 2 SACRAMENTO z95823 3 1 1167 Residential 68212 38.5 -121.
## 3 SACRAMENTO z95815 2 1 796 Residential 68880 38.6 -121.
## 4 SACRAMENTO z95815 2 1 852 Residential 69307 38.6 -121.
## 5 SACRAMENTO z95824 2 1 797 Residential 81900 38.5 -121.
## 6 SACRAMENTO z95841 3 1 1122 Condo 89921 38.7 -121.
## 7 SACRAMENTO z95842 3 2 1104 Residential 90895 38.7 -121.
## 8 SACRAMENTO z95820 3 1 1177 Residential 91002 38.5 -121.
## 9 RANCHO_CORDOVA z95670 2 2 941 Condo 94905 38.6 -121.
## 10 RIO_LINDA z95673 3 2 1146 Residential 98937 38.7 -121.
## # ℹ 922 more rows
```
The scientific question guides our initial exploration: the columns in the
data that we are interested in are `sqft` (house size, in livable square feet)
and `price` (house sale price, in US dollars (USD)). The first step is to visualize
the data as a scatter plot where we place the predictor variable
(house size) on the x\-axis, and we place the response variable that we
want to predict (sale price) on the y\-axis.
> **Note:** Given that the y\-axis unit is dollars in Figure [7\.1](regression1.html#fig:07-edaRegr),
> we format the axis labels to put dollar signs in front of the house prices,
> as well as commas to increase the readability of the larger numbers.
> We can do this in R by passing the `dollar_format` function
> (from the `scales` package)
> to the `labels` argument of the `scale_y_continuous` function.
```
eda <- ggplot(sacramento, aes(x = sqft, y = price)) +
geom_point(alpha = 0.4) +
xlab("House size (square feet)") +
ylab("Price (USD)") +
scale_y_continuous(labels = dollar_format()) +
theme(text = element_text(size = 12))
eda
```
Figure 7\.1: Scatter plot of price (USD) versus house size (square feet).
The plot is shown in Figure [7\.1](regression1.html#fig:07-edaRegr).
We can see that in Sacramento, CA, as the
size of a house increases, so does its sale price. Thus, we can reason that we
may be able to use the size of a not\-yet\-sold house (for which we don’t know
the sale price) to predict its final sale price. Note that we do not suggest here
that a larger house size *causes* a higher sale price; just that house price
tends to increase with house size, and that we may be able to use the latter to
predict the former.
7\.5 K\-nearest neighbors regression
------------------------------------
Much like in the case of classification,
we can use a K\-nearest neighbors\-based
approach in regression to make predictions.
Let’s take a small sample of the data in Figure [7\.1](regression1.html#fig:07-edaRegr)
and walk through how K\-nearest neighbors (K\-NN) works
in a regression context before we dive in to creating our model and assessing
how well it predicts house sale price. This subsample is taken to allow us to
illustrate the mechanics of K\-NN regression with a few data points; later in
this chapter we will use all the data.
To take a small random sample of size 30, we’ll use the function
`slice_sample`, and input the data frame to sample from and the number of rows
to randomly select.
```
small_sacramento <- slice_sample(sacramento, n = 30)
```
Next let’s say we come across a 2,000 square\-foot house in Sacramento we are
interested in purchasing, with an advertised list price of $350,000\. Should we
offer to pay the asking price for this house, or is it overpriced and we should
offer less? Absent any other information, we can get a sense for a good answer
to this question by using the data we have to predict the sale price given the
sale prices we have already observed. But in Figure [7\.2](regression1.html#fig:07-small-eda-regr),
you can see that we have no
observations of a house of size *exactly* 2,000 square feet. How can we predict
the sale price?
```
small_plot <- ggplot(small_sacramento, aes(x = sqft, y = price)) +
geom_point() +
xlab("House size (square feet)") +
ylab("Price (USD)") +
scale_y_continuous(labels = dollar_format()) +
geom_vline(xintercept = 2000, linetype = "dashed") +
theme(text = element_text(size = 12))
small_plot
```
Figure 7\.2: Scatter plot of price (USD) versus house size (square feet) with vertical line indicating 2,000 square feet on x\-axis.
We will employ the same intuition from the classification chapter, and use the
neighboring points to the new point of interest to suggest/predict what its
sale price might be.
For the example shown in Figure [7\.2](regression1.html#fig:07-small-eda-regr),
we find and label the 5 nearest neighbors to our observation
of a house that is 2,000 square feet.
```
nearest_neighbors <- small_sacramento |>
mutate(diff = abs(2000 - sqft)) |>
slice_min(diff, n = 5)
nearest_neighbors
```
```
## # A tibble: 5 × 10
## city zip beds baths sqft type price latitude longitude diff
## <chr> <chr> <dbl> <dbl> <dbl> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 ROSEVILLE z95661 3 2 2049 Residenti… 395500 38.7 -121. 49
## 2 ANTELOPE z95843 4 3 2085 Residenti… 408431 38.7 -121. 85
## 3 SACRAMENTO z95823 4 2 1876 Residenti… 299940 38.5 -121. 124
## 4 ROSEVILLE z95747 3 2.5 1829 Residenti… 306500 38.8 -121. 171
## 5 SACRAMENTO z95825 4 2 1776 Multi_Fam… 221250 38.6 -121. 224
```
Figure 7\.3: Scatter plot of price (USD) versus house size (square feet) with lines to 5 nearest neighbors (highlighted in orange).
Figure [7\.3](regression1.html#fig:07-knn3-example) illustrates the difference between the house sizes
of the 5 nearest neighbors (in terms of house size) to our new
2,000 square\-foot house of interest. Now that we have obtained these nearest neighbors,
we can use their values to predict the
sale price for the new home. Specifically, we can take the mean (or
average) of these 5 values as our predicted value, as illustrated by
the red point in Figure [7\.4](regression1.html#fig:07-predictedViz-knn).
```
prediction <- nearest_neighbors |>
summarise(predicted = mean(price))
prediction
```
```
## # A tibble: 1 × 1
## predicted
## <dbl>
## 1 326324.
```
Figure 7\.4: Scatter plot of price (USD) versus house size (square feet) with predicted price for a 2,000 square\-foot house based on 5 nearest neighbors represented as a red dot.
Our predicted price is $326,324
(shown as a red point in Figure [7\.4](regression1.html#fig:07-predictedViz-knn)), which is much less than $350,000; perhaps we
might want to offer less than the list price at which the house is advertised.
But this is only the very beginning of the story. We still have all the same
unanswered questions here with K\-NN regression that we had with K\-NN
classification: which \\(K\\) do we choose, and is our model any good at making
predictions? In the next few sections, we will address these questions in the
context of K\-NN regression.
One strength of the K\-NN regression algorithm
that we would like to draw attention to at this point
is its ability to work well with non\-linear relationships
(i.e., if the relationship is not a straight line).
This stems from the use of nearest neighbors to predict values.
The algorithm really has very few assumptions
about what the data must look like for it to work.
7\.6 Training, evaluating, and tuning the model
-----------------------------------------------
As usual,
we must start by putting some test data away in a lock box
that we will come back to only after we choose our final model.
Let’s take care of that now.
Note that for the remainder of the chapter
we’ll be working with the entire Sacramento data set,
as opposed to the smaller sample of 30 points
that we used earlier in the chapter (Figure [7\.2](regression1.html#fig:07-small-eda-regr)).
```
sacramento_split <- initial_split(sacramento, prop = 0.75, strata = price)
sacramento_train <- training(sacramento_split)
sacramento_test <- testing(sacramento_split)
```
Next, we’ll use cross\-validation to choose \\(K\\). In K\-NN classification, we used
accuracy to see how well our predictions matched the true labels. We cannot use
the same metric in the regression setting, since our predictions will almost never
*exactly* match the true response variable values. Therefore in the
context of K\-NN regression we will use root mean square prediction error
(RMSPE) instead. The mathematical formula for calculating RMSPE is:
\\\[\\text{RMSPE} \= \\sqrt{\\frac{1}{n}\\sum\\limits\_{i\=1}^{n}(y\_i \- \\hat{y}\_i)^2}\\]
where:
* \\(n\\) is the number of observations,
* \\(y\_i\\) is the observed value for the \\(i^\\text{th}\\) observation, and
* \\(\\hat{y}\_i\\) is the forecasted/predicted value for the \\(i^\\text{th}\\) observation.
In other words, we compute the *squared* difference between the predicted and true response
value for each observation in our test (or validation) set, compute the average, and then finally
take the square root. The reason we use the *squared* difference (and not just the difference)
is that the differences can be positive or negative, i.e., we can overshoot or undershoot the true
response value. Figure [7\.5](regression1.html#fig:07-verticalerrors) illustrates both positive and negative differences
between predicted and true response values.
So if we want to measure error—a notion of distance between our predicted and true response values—we
want to make sure that we are only adding up positive values, with larger positive values representing larger
mistakes.
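To make the RMSPE formula concrete, here is a small helper function (an illustration only; the `rmspe` helper is not part of the book's analysis or of `tidymodels`) that computes it from a vector of observed values and a vector of predictions:

```
# illustration only: RMSPE for observed values y and predictions y_hat
rmspe <- function(y, y_hat) {
  sqrt(mean((y - y_hat)^2))
}
# predictions that overshoot and undershoot by different amounts
rmspe(y = c(10, 20, 30), y_hat = c(12, 19, 27)) # roughly 2.16
```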
If the predictions are very close to the true values, then
RMSPE will be small. If, on the other hand, the predictions are very
different from the true values, then RMSPE will be quite large. When we
use cross\-validation, we will choose the \\(K\\) that gives
us the smallest RMSPE.
Figure 7\.5: Scatter plot of price (USD) versus house size (square feet) with example predictions (blue line) and the error in those predictions compared with true response values (vertical lines).
> **Note:** When using many code packages (`tidymodels` included), the evaluation output
> we will get to assess the prediction quality of
> our K\-NN regression models is labeled “RMSE”, or “root mean squared
> error”. Why is this so, and why not RMSPE?
> In statistics, we try to be very precise with our
> language to indicate whether we are calculating the prediction error on the
> training data (*in\-sample* prediction) versus on the testing data
> (*out\-of\-sample* prediction). When predicting and evaluating prediction quality on the training data, we
> say RMSE. By contrast, when predicting and evaluating prediction quality
> on the testing or validation data, we say RMSPE.
> The equation for calculating RMSE and RMSPE is exactly the same; all that changes is whether the \\(y\\)s are
> training or testing data. But many people just use RMSE for both,
> and rely on context to denote which data the root mean squared error is being calculated on.
Now that we know how we can assess how well our model predicts a numerical
value, let’s use R to perform cross\-validation and to choose the optimal \\(K\\).
First, we will create a recipe for preprocessing our data.
Note that we include standardization
in our preprocessing to build good habits, but since we only have one
predictor, it is technically not necessary; there is no risk of comparing two predictors
of different scales.
Next we create a model specification for K\-nearest neighbors regression. Note
that we use `set_mode("regression")`
now in the model specification to denote a regression problem, as opposed to the classification
problems from the previous chapters.
The use of `set_mode("regression")` essentially
tells `tidymodels` that we need to use different metrics (RMSPE, not accuracy)
for tuning and evaluation.
Then we create a 5\-fold cross\-validation object, and put the recipe and model specification together
in a workflow.
```
sacr_recipe <- recipe(price ~ sqft, data = sacramento_train) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
sacr_spec <- nearest_neighbor(weight_func = "rectangular",
neighbors = tune()) |>
set_engine("kknn") |>
set_mode("regression")
sacr_vfold <- vfold_cv(sacramento_train, v = 5, strata = price)
sacr_wkflw <- workflow() |>
add_recipe(sacr_recipe) |>
add_model(sacr_spec)
sacr_wkflw
```
```
## ══ Workflow ══════════
## Preprocessor: Recipe
## Model: nearest_neighbor()
##
## ── Preprocessor ──────────
## 2 Recipe Steps
##
## • step_scale()
## • step_center()
##
## ── Model ──────────
## K-Nearest Neighbor Model Specification (regression)
##
## Main Arguments:
## neighbors = tune()
## weight_func = rectangular
##
## Computational engine: kknn
```
Next we run cross\-validation for a grid of numbers of neighbors ranging from 1 to 200\.
The following code tunes
the model and returns the RMSPE for each number of neighbors. In the `sacr_results`
data frame, we see that the `neighbors` variable contains the value of \\(K\\),
the mean (`mean`) contains the value of the RMSPE estimated via cross\-validation,
and the standard error (`std_err`) contains a value corresponding to a measure of how uncertain we are in the mean value. A detailed treatment of this
is beyond the scope of this chapter; but roughly, if your estimated mean RMSPE is $100,000 and standard
error is $1,000, you can expect the *true* RMSPE to be somewhere roughly between $99,000 and $101,000 (although it may
fall outside this range). You may ignore the other columns in the metrics data frame,
as they do not provide any additional insight.
```
gridvals <- tibble(neighbors = seq(from = 1, to = 200, by = 3))
sacr_results <- sacr_wkflw |>
tune_grid(resamples = sacr_vfold, grid = gridvals) |>
collect_metrics() |>
filter(.metric == "rmse")
# show the results
sacr_results
```
```
## # A tibble: 67 × 7
## neighbors .metric .estimator mean n std_err .config
## <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 1 rmse standard 107206. 5 4102. Preprocessor1_Model01
## 2 4 rmse standard 90469. 5 3312. Preprocessor1_Model02
## 3 7 rmse standard 86580. 5 3062. Preprocessor1_Model03
## 4 10 rmse standard 85321. 5 3395. Preprocessor1_Model04
## 5 13 rmse standard 85045. 5 3641. Preprocessor1_Model05
## 6 16 rmse standard 84675. 5 3679. Preprocessor1_Model06
## 7 19 rmse standard 84776. 5 3984. Preprocessor1_Model07
## 8 22 rmse standard 84617. 5 3952. Preprocessor1_Model08
## 9 25 rmse standard 84953. 5 3929. Preprocessor1_Model09
## 10 28 rmse standard 84612. 5 3917. Preprocessor1_Model10
## # ℹ 57 more rows
```
Figure 7\.6: Effect of the number of neighbors on the RMSPE.
Figure [7\.6](regression1.html#fig:07-choose-k-knn-plot) visualizes how the RMSPE varies with the number of neighbors \\(K\\).
We take the *minimum* RMSPE to find the best setting for the number of neighbors:
```
# show only the row of minimum RMSPE
sacr_min <- sacr_results |>
filter(mean == min(mean))
sacr_min
```
```
## # A tibble: 1 × 7
## neighbors .metric .estimator mean n std_err .config
## <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 52 rmse standard 84561. 5 4470. Preprocessor1_Model18
```
The smallest RMSPE occurs when \\(K \=\\) 52\.
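The curve of RMSPE versus \\(K\\) in Figure [7\.6](regression1.html#fig:07-choose-k-knn-plot) can be reproduced directly from `sacr_results`; a minimal sketch (an illustration, not the book's figure code) is:

```
# a minimal sketch of an RMSPE-versus-K plot like Figure 7.6
ggplot(sacr_results, aes(x = neighbors, y = mean)) +
  geom_point() +
  geom_line() +
  labs(x = "Neighbors", y = "Cross-validation RMSPE estimate")
```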
7\.7 Underfitting and overfitting
---------------------------------
Similar to the setting of classification, by setting the number of neighbors
to be too small or too large, we cause the RMSPE to increase, as shown in
Figure [7\.6](regression1.html#fig:07-choose-k-knn-plot). What is happening here?
Figure [7\.7](regression1.html#fig:07-howK) visualizes the effect of different settings of \\(K\\) on the
regression model. Each plot shows the predicted values for house sale price from
our K\-NN regression model on the training data for 6 different values for \\(K\\): 1, 3, 25, 52, 250, and 680 (almost the entire training set).
For each model, we predict prices for the range of possible home sizes we
observed in the data set (here 500 to 5,000 square feet) and we plot the
predicted prices as a blue line.
Figure 7\.7: Predicted values for house price (represented as a blue line) from K\-NN regression models for six different values for \\(K\\).
Figure [7\.7](regression1.html#fig:07-howK) shows that when \\(K\\) \= 1, the blue line runs perfectly
through (almost) all of our training observations.
This happens because our
predicted values for a given region (typically) depend on just a single observation.
In general, when \\(K\\) is too small, the line follows the training data quite
closely, even if it does not match it perfectly.
If we used a different training data set of house prices and sizes
from the Sacramento real estate market, we would end up with completely different
predictions. In other words, the model is *influenced too much* by the data.
Because the model follows the training data so closely, it will not make accurate
predictions on new observations which, generally, will not have the same fluctuations
as the original training data.
Recall from the classification
chapters that this behavior—where the model is influenced too much
by the noisy data—is called *overfitting*; we use this same term
in the context of regression.
What about the plots in Figure [7\.7](regression1.html#fig:07-howK) where \\(K\\) is quite large,
say, \\(K\\) \= 250 or 680?
In this case the blue line becomes extremely smooth, and actually becomes flat
once \\(K\\) is equal to the number of datapoints in the training set.
This happens because our predicted values for a given x value (here, home
size) depend on many neighboring observations; in the case where \\(K\\) is equal
to the size of the training set, the prediction is just the mean of the house prices
(completely ignoring the house size). In contrast to the \\(K\=1\\) example,
the smooth, inflexible blue line does not follow the training observations very closely.
In other words, the model is *not influenced enough* by the training data.
Recall from the classification
chapters that this behavior is called *underfitting*; we again use this same
term in the context of regression.
Ideally, what we want is neither of the two situations discussed above. Instead,
we would like a model that (1\) follows the overall “trend” in the training data, so the model
actually uses the training data to learn something useful, and (2\) does not follow
the noisy fluctuations, so that we can be confident that our model will transfer/generalize
well to other new data. If we explore
the other values for \\(K\\), in particular \\(K\\) \= 52
(as suggested by cross\-validation),
we can see it achieves this goal: it follows the increasing trend of house price
versus house size, but is not influenced too much by the idiosyncratic variations
in price. All of this is similar to how
the choice of \\(K\\) affects K\-nearest neighbors classification, as discussed in the previous
chapter.
7\.8 Evaluating on the test set
-------------------------------
To assess how well our model might do at predicting on unseen data, we will
assess its RMSPE on the test data. To do this, we will first
re\-train our K\-NN regression model on the entire training data set,
using \\(K \=\\) 52 neighbors. Then we will
use `predict` to make predictions on the test data, and use the `metrics`
function again to compute the summary of regression quality. Because
we specify that we are performing regression in `set_mode`, the `metrics`
function knows to output a quality summary related to regression, and not, say, classification.
```
kmin <- sacr_min |> pull(neighbors)
sacr_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = kmin) |>
set_engine("kknn") |>
set_mode("regression")
sacr_fit <- workflow() |>
add_recipe(sacr_recipe) |>
add_model(sacr_spec) |>
fit(data = sacramento_train)
sacr_summary <- sacr_fit |>
predict(sacramento_test) |>
bind_cols(sacramento_test) |>
metrics(truth = price, estimate = .pred) |>
filter(.metric == 'rmse')
sacr_summary
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 rmse standard 90529.
```
Our final model’s test error as assessed by RMSPE
is $90,529\.
Note that RMSPE is measured in the same units as the response variable.
In other words, on new observations, we expect the error in our prediction to be
*roughly* $90,529\.
From one perspective, this is good news: this is about the same as the cross\-validation
RMSPE estimate of our tuned model
(which was $84,561\),
so we can say that the model appears to generalize well
to new data that it has never seen before.
However, much like in the case of K\-NN classification, whether this value for RMSPE is *good*—i.e.,
whether an error of around $90,529
is acceptable—depends entirely on the application.
In this application, this error
is not prohibitively large, but it is not negligible either;
$90,529
might represent a substantial fraction of a home buyer’s budget, and
could make or break whether or not they could afford to put an offer on a house.
Finally, Figure [7\.8](regression1.html#fig:07-predict-all) shows the predictions that our final
model makes across the range of house sizes we might encounter in the
Sacramento area.
Note that instead of predicting the house price only for those house sizes that happen to appear in our data,
we predict it for evenly spaced values between the minimum and maximum in the data set
(roughly 500 to 5000 square feet).
We superimpose this prediction line on a scatter
plot of the original housing price data,
so that we can qualitatively assess if the model seems to fit the data well.
You have already seen a
few plots like this in this chapter, but here we also provide the code that
generated it as a learning opportunity.
```
sqft_prediction_grid <- tibble(
sqft = seq(
from = sacramento |> select(sqft) |> min(),
to = sacramento |> select(sqft) |> max(),
by = 10
)
)
sacr_preds <- sacr_fit |>
predict(sqft_prediction_grid) |>
bind_cols(sqft_prediction_grid)
plot_final <- ggplot(sacramento, aes(x = sqft, y = price)) +
geom_point(alpha = 0.4) +
geom_line(data = sacr_preds,
mapping = aes(x = sqft, y = .pred),
color = "steelblue",
linewidth = 1) +
xlab("House size (square feet)") +
ylab("Price (USD)") +
scale_y_continuous(labels = dollar_format()) +
ggtitle(paste0("K = ", kmin)) +
theme(text = element_text(size = 12))
plot_final
```
Figure 7\.8: Predicted values of house price (blue line) for the final K\-NN regression model.
7\.9 Multivariable K\-NN regression
-----------------------------------
As in K\-NN classification, we can use multiple predictors in K\-NN regression.
In this setting, we have the same concerns regarding the scale of the predictors. Once again,
predictions are made by identifying the \\(K\\)
observations that are nearest to the new point we want to predict; any
variables that are on a large scale will have a much larger effect than
variables on a small scale. But since the `recipe` we built above scales and centers
all predictor variables, this is handled for us.
Note that we also have the same concern regarding the selection of predictors
in K\-NN regression as in K\-NN classification: having more predictors is **not** always
better, and the choice of which predictors to use has a potentially large influence
on the quality of predictions. Fortunately, we can use the predictor selection
algorithm from the classification chapter in K\-NN regression as well.
As the algorithm is the same, we will not cover it again in this chapter.
We will now demonstrate a multivariable K\-NN regression analysis of the
Sacramento real estate data using `tidymodels`. This time we will use
house size (measured in square feet) as well as number of bedrooms as our
predictors, and continue to use house sale price as our response variable
that we are trying to predict.
It is always a good practice to do exploratory data analysis, such as
visualizing the data, before we start modeling the data. Figure [7\.9](regression1.html#fig:07-bedscatter)
shows that the number of bedrooms might provide useful information
to help predict the sale price of a house.
```
plot_beds <- sacramento |>
ggplot(aes(x = beds, y = price)) +
geom_point(alpha = 0.4) +
labs(x = 'Number of Bedrooms', y = 'Price (USD)') +
theme(text = element_text(size = 12))
plot_beds
```
Figure 7\.9: Scatter plot of the sale price of houses versus the number of bedrooms.
Figure [7\.9](regression1.html#fig:07-bedscatter) shows that as the number of bedrooms increases,
the house sale price tends to increase as well, but that the relationship
is quite weak. Does adding the number of bedrooms
to our model improve our ability to predict price? To answer that
question, we will have to create a new K\-NN regression
model using house size and number of bedrooms, and then we can compare it to
the model we previously came up with that only used house
size. Let’s do that now!
First we’ll build a new model specification and recipe for the analysis. Note that
we use the formula `price ~ sqft + beds` to denote that we have two predictors,
and set `neighbors = tune()` to tell `tidymodels` to tune the number of neighbors for us.
```
sacr_recipe <- recipe(price ~ sqft + beds, data = sacramento_train) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
sacr_spec <- nearest_neighbor(weight_func = "rectangular",
neighbors = tune()) |>
set_engine("kknn") |>
set_mode("regression")
```
Next, we’ll use 5\-fold cross\-validation to choose the number of neighbors via the minimum RMSPE:
```
gridvals <- tibble(neighbors = seq(1, 200))
sacr_multi <- workflow() |>
add_recipe(sacr_recipe) |>
add_model(sacr_spec) |>
tune_grid(sacr_vfold, grid = gridvals) |>
collect_metrics() |>
filter(.metric == "rmse") |>
filter(mean == min(mean))
sacr_k <- sacr_multi |>
pull(neighbors)
sacr_multi
```
```
## # A tibble: 1 × 7
## neighbors .metric .estimator mean n std_err .config
## <int> <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 11 rmse standard 81839. 5 3108. Preprocessor1_Model011
```
Here we see that the smallest estimated RMSPE from cross\-validation occurs when \\(K \=\\) 11\.
If we want to compare this multivariable K\-NN regression model to the model with only a single
predictor *as part of the model tuning process* (e.g., if we are running forward selection as described
in the chapter on evaluating and tuning classification models),
then we must compare the RMSPE estimated using only the training data via cross\-validation.
Looking back, the estimated cross\-validation RMSPE for the single\-predictor
model was $84,561\.
The estimated cross\-validation RMSPE for the multivariable model is
$81,839\.
Thus in this case, we did not improve the model
by a large amount by adding this additional predictor.
Regardless, let’s continue the analysis to see how we can make predictions with a multivariable K\-NN regression model
and evaluate its performance on test data. We first need to re\-train the model on the entire
training data set with \\(K \=\\) 11, and then use that model to make predictions
on the test data.
```
sacr_spec <- nearest_neighbor(weight_func = "rectangular",
neighbors = sacr_k) |>
set_engine("kknn") |>
set_mode("regression")
knn_mult_fit <- workflow() |>
add_recipe(sacr_recipe) |>
add_model(sacr_spec) |>
fit(data = sacramento_train)
knn_mult_preds <- knn_mult_fit |>
predict(sacramento_test) |>
bind_cols(sacramento_test)
knn_mult_mets <- metrics(knn_mult_preds, truth = price, estimate = .pred) |>
filter(.metric == 'rmse')
knn_mult_mets
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 rmse standard 90862.
```
This time, when we performed K\-NN regression on the same data set, but also
included number of bedrooms as a predictor, we obtained a RMSPE test error
of $90,862\.
Figure [7\.10](regression1.html#fig:07-knn-mult-viz) visualizes the model’s predictions overlaid on top of the data. This
time the predictions are a surface in 3D space, instead of a line in 2D space, as we have 2
predictors instead of 1\.
Figure 7\.10: K\-NN regression model’s predictions represented as a surface in 3D space overlaid on top of the data using three predictors (price, house size, and the number of bedrooms). Note that in general we recommend against using 3D visualizations; here we use a 3D visualization only to illustrate what the surface of predictions looks like for learning purposes.
We can see that the predictions in this case, where we have 2 predictors, form
a surface instead of a line. Because the newly added predictor (number of bedrooms) is
related to price (as price changes, so does number of bedrooms)
and is not totally determined by house size (our other predictor),
we get additional and useful information for making our
predictions. For example, in this model we would predict that the cost of a
house with a size of 2,500 square feet generally increases slightly as the number
of bedrooms increases. Without having the additional predictor of number of
bedrooms, we would predict the same price for these two houses.
7\.10 Strengths and limitations of K\-NN regression
---------------------------------------------------
As with K\-NN classification (or any prediction algorithm for that matter), K\-NN
regression has both strengths and weaknesses. Some are listed here:
**Strengths:** K\-nearest neighbors regression
1. is a simple, intuitive algorithm,
2. requires few assumptions about what the data must look like, and
3. works well with non\-linear relationships (i.e., if the relationship is not a straight line).
**Weaknesses:** K\-nearest neighbors regression
1. becomes very slow as the training data gets larger,
2. may not perform well with a large number of predictors, and
3. may not predict well beyond the range of values input in your training data.
7\.11 Exercises
---------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Regression I: K\-nearest neighbors” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
7\.1 Overview
-------------
This chapter continues our foray into answering predictive questions.
Here we will focus on predicting *numerical* variables
and will use *regression* to perform this task.
This is unlike the past two chapters, which focused on predicting categorical
variables via classification. However, regression does have many similarities
to classification: for example, just as in the case of classification,
we will split our data into training, validation, and test sets, we will
use `tidymodels` workflows, we will use a K\-nearest neighbors (K\-NN)
approach to make predictions, and we will use cross\-validation to choose K.
Because of how similar these procedures are, make sure to read Chapters
[5](classification1.html#classification1) and [6](classification2.html#classification2) before reading
this one—we will move a little bit faster here with the
concepts that have already been covered.
This chapter will primarily focus on the case where there is a single predictor,
but the end of the chapter shows how to perform
regression with more than one predictor variable, i.e., *multivariable regression*.
It is important to note that regression
can also be used to answer inferential and causal questions; however, that is beyond the scope of this book.
7\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Recognize situations where a regression analysis would be appropriate for making predictions.
* Explain the K\-nearest neighbors (K\-NN) regression algorithm and describe how it differs from K\-NN classification.
* Interpret the output of a K\-NN regression.
* In a data set with two or more variables, perform K\-nearest neighbors regression in R.
* Evaluate K\-NN regression prediction quality in R using the root mean squared prediction error (RMSPE).
* Estimate the RMSPE in R using cross\-validation or a test set.
* Choose the number of neighbors in K\-nearest neighbors regression by minimizing estimated cross\-validation RMSPE.
* Describe underfitting and overfitting, and relate them to the number of neighbors in K\-nearest neighbors regression.
* Describe the advantages and disadvantages of K\-nearest neighbors regression.
7\.3 The regression problem
---------------------------
Regression, like classification, is a predictive problem setting where we want
to use past information to predict future observations. But in the case of
regression, the goal is to predict *numerical* values instead of *categorical* values.
The variable that you want to predict is often called the *response variable*.
For example, we could try to use the number of hours a person spends on
exercise each week to predict their race time in the annual Boston marathon. As
another example, we could try to use the size of a house to
predict its sale price. Both of these response variables—race time and sale price—are
numerical, and so predicting them given past data is considered a regression problem.
Just like in the
classification setting, there are many possible methods that we can use
to predict numerical response variables. In this chapter we will
focus on the **K\-nearest neighbors** algorithm ([Fix and Hodges 1951](#ref-knnfix); [Cover and Hart 1967](#ref-knncover)), and in the next chapter
we will study **linear regression**.
In your future studies, you might encounter regression trees, splines,
and general local regression methods; see the additional resources
section at the end of the next chapter for where to begin learning more about
these other methods.
Many of the concepts from classification map over to the setting of regression. For example,
a regression model predicts a new observation’s response variable based on the response variables
for similar observations in the data set of past observations. When building a regression model,
we first split the data into training and test sets, in order to ensure that we assess the performance
of our method on observations not seen during training. And finally, we can use cross\-validation to evaluate different
choices of model parameters (e.g., K in a K\-nearest neighbors model). The major difference
is that we are now predicting numerical variables instead of categorical variables.
> **Note:** You can usually tell whether a variable is numerical or
> categorical—and therefore whether you need to perform regression or
> classification—by taking the response variable for two observations X and Y from your data,
> and asking the question, “is response variable X *more* than response
> variable Y?” If the variable is categorical, the question will make no sense.
> (Is blue more than red? Is benign more than malignant?) If the variable is
> numerical, it will make sense. (Is 1\.5 hours more than 2\.25 hours? Is
> $500,000 more than $400,000?) Be careful when applying this heuristic,
> though: sometimes categorical variables will be encoded as numbers in your
> data (e.g., “1” represents “benign”, and “0” represents “malignant”). In
> these cases you have to ask the question about the *meaning* of the labels
> (“benign” and “malignant”), not their values (“1” and “0”).
7\.4 Exploring a data set
-------------------------
In this chapter and the next, we will study
a data set of
[932 real estate transactions in Sacramento, California](https://support.spatialkey.com/spatialkey-sample-csv-data/)
originally reported in the *Sacramento Bee* newspaper.
We first need to formulate a precise question that
we want to answer. In this example, our question is again predictive:
Can we use the size of a house in the Sacramento, CA area to predict
its sale price? A rigorous, quantitative answer to this question might help
a realtor advise a client as to whether the price of a particular listing
is fair, or perhaps how to set the price of a new listing.
We begin the analysis by loading and examining the data, and setting the seed value.
```
library(tidyverse)
library(tidymodels)
library(gridExtra)
set.seed(5)
sacramento <- read_csv("data/sacramento.csv")
sacramento
```
```
## # A tibble: 932 × 9
## city zip beds baths sqft type price latitude longitude
## <chr> <chr> <dbl> <dbl> <dbl> <chr> <dbl> <dbl> <dbl>
## 1 SACRAMENTO z95838 2 1 836 Residential 59222 38.6 -121.
## 2 SACRAMENTO z95823 3 1 1167 Residential 68212 38.5 -121.
## 3 SACRAMENTO z95815 2 1 796 Residential 68880 38.6 -121.
## 4 SACRAMENTO z95815 2 1 852 Residential 69307 38.6 -121.
## 5 SACRAMENTO z95824 2 1 797 Residential 81900 38.5 -121.
## 6 SACRAMENTO z95841 3 1 1122 Condo 89921 38.7 -121.
## 7 SACRAMENTO z95842 3 2 1104 Residential 90895 38.7 -121.
## 8 SACRAMENTO z95820 3 1 1177 Residential 91002 38.5 -121.
## 9 RANCHO_CORDOVA z95670 2 2 941 Condo 94905 38.6 -121.
## 10 RIO_LINDA z95673 3 2 1146 Residential 98937 38.7 -121.
## # ℹ 922 more rows
```
The scientific question guides our initial exploration: the columns in the
data that we are interested in are `sqft` (house size, in livable square feet)
and `price` (house sale price, in US dollars (USD)). The first step is to visualize
the data as a scatter plot where we place the predictor variable
(house size) on the x\-axis, and we place the response variable that we
want to predict (sale price) on the y\-axis.
> **Note:** Given that the y\-axis unit is dollars in Figure [7\.1](regression1.html#fig:07-edaRegr),
> we format the axis labels to put dollar signs in front of the house prices,
> as well as commas to increase the readability of the larger numbers.
> We can do this in R by passing the `dollar_format` function
> (from the `scales` package)
> to the `labels` argument of the `scale_y_continuous` function.
```
eda <- ggplot(sacramento, aes(x = sqft, y = price)) +
geom_point(alpha = 0.4) +
xlab("House size (square feet)") +
ylab("Price (USD)") +
scale_y_continuous(labels = dollar_format()) +
theme(text = element_text(size = 12))
eda
```
Figure 7\.1: Scatter plot of price (USD) versus house size (square feet).
The plot is shown in Figure [7\.1](regression1.html#fig:07-edaRegr).
We can see that in Sacramento, CA, as the
size of a house increases, so does its sale price. Thus, we can reason that we
may be able to use the size of a not\-yet\-sold house (for which we don’t know
the sale price) to predict its final sale price. Note that we do not suggest here
that a larger house size *causes* a higher sale price; just that house price
tends to increase with house size, and that we may be able to use the latter to
predict the former.
7\.5 K\-nearest neighbors regression
------------------------------------
Much like in the case of classification,
we can use a K\-nearest neighbors\-based
approach in regression to make predictions.
Let’s take a small sample of the data in Figure [7\.1](regression1.html#fig:07-edaRegr)
and walk through how K\-nearest neighbors (K\-NN) works
in a regression context before we dive into creating our model and assessing
how well it predicts house sale price. This subsample is taken to allow us to
illustrate the mechanics of K\-NN regression with a few data points; later in
this chapter we will use all the data.
To take a small random sample of size 30, we’ll use the function
`slice_sample`, and input the data frame to sample from and the number of rows
to randomly select.
```
small_sacramento <- slice_sample(sacramento, n = 30)
```
Next let’s say we come across a 2,000 square\-foot house in Sacramento we are
interested in purchasing, with an advertised list price of $350,000\. Should we
offer to pay the asking price for this house, or is it overpriced and we should
offer less? Absent any other information, we can get a sense for a good answer
to this question by using the data we have to predict the sale price given the
sale prices we have already observed. But in Figure [7\.2](regression1.html#fig:07-small-eda-regr),
you can see that we have no
observations of a house of size *exactly* 2,000 square feet. How can we predict
the sale price?
```
small_plot <- ggplot(small_sacramento, aes(x = sqft, y = price)) +
geom_point() +
xlab("House size (square feet)") +
ylab("Price (USD)") +
scale_y_continuous(labels = dollar_format()) +
geom_vline(xintercept = 2000, linetype = "dashed") +
theme(text = element_text(size = 12))
small_plot
```
Figure 7\.2: Scatter plot of price (USD) versus house size (square feet) with vertical line indicating 2,000 square feet on x\-axis.
We will employ the same intuition from the classification chapter, and use the
neighboring points to the new point of interest to suggest/predict what its
sale price might be.
For the example shown in Figure [7\.2](regression1.html#fig:07-small-eda-regr),
we find and label the 5 nearest neighbors to our observation
of a house that is 2,000 square feet.
```
nearest_neighbors <- small_sacramento |>
mutate(diff = abs(2000 - sqft)) |>
slice_min(diff, n = 5)
nearest_neighbors
```
```
## # A tibble: 5 × 10
## city zip beds baths sqft type price latitude longitude diff
## <chr> <chr> <dbl> <dbl> <dbl> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 ROSEVILLE z95661 3 2 2049 Residenti… 395500 38.7 -121. 49
## 2 ANTELOPE z95843 4 3 2085 Residenti… 408431 38.7 -121. 85
## 3 SACRAMENTO z95823 4 2 1876 Residenti… 299940 38.5 -121. 124
## 4 ROSEVILLE z95747 3 2.5 1829 Residenti… 306500 38.8 -121. 171
## 5 SACRAMENTO z95825 4 2 1776 Multi_Fam… 221250 38.6 -121. 224
```
Figure 7\.3: Scatter plot of price (USD) versus house size (square feet) with lines to 5 nearest neighbors (highlighted in orange).
Figure [7\.3](regression1.html#fig:07-knn3-example) illustrates the difference between the house sizes
of the 5 nearest neighbors (in terms of house size) to our new
2,000 square\-foot house of interest. Now that we have obtained these nearest neighbors,
we can use their values to predict the
sale price for the new home. Specifically, we can take the mean (or
average) of these 5 values as our predicted value, as illustrated by
the red point in Figure [7\.4](regression1.html#fig:07-predictedViz-knn).
```
prediction <- nearest_neighbors |>
summarise(predicted = mean(price))
prediction
```
```
## # A tibble: 1 × 1
## predicted
## <dbl>
## 1 326324.
```
Figure 7\.4: Scatter plot of price (USD) versus house size (square feet) with predicted price for a 2,000 square\-foot house based on 5 nearest neighbors represented as a red dot.
Our predicted price is $326,324
(shown as a red point in Figure [7\.4](regression1.html#fig:07-predictedViz-knn)), which is much less than $350,000; perhaps we
might want to offer less than the list price at which the house is advertised.
But this is only the very beginning of the story. We still have all the same
unanswered questions here with K\-NN regression that we had with K\-NN
classification: which \\(K\\) do we choose, and is our model any good at making
predictions? In the next few sections, we will address these questions in the
context of K\-NN regression.
One strength of the K\-NN regression algorithm
that we would like to draw attention to at this point
is its ability to work well with non\-linear relationships
(i.e., if the relationship is not a straight line).
This stems from the use of nearest neighbors to predict values.
The algorithm really has very few assumptions
about what the data must look like for it to work.
7\.6 Training, evaluating, and tuning the model
-----------------------------------------------
As usual,
we must start by putting some test data away in a lock box
that we will come back to only after we choose our final model.
Let’s take care of that now.
Note that for the remainder of the chapter
we’ll be working with the entire Sacramento data set,
as opposed to the smaller sample of 30 points
that we used earlier in the chapter (Figure [7\.2](regression1.html#fig:07-small-eda-regr)).
```
sacramento_split <- initial_split(sacramento, prop = 0.75, strata = price)
sacramento_train <- training(sacramento_split)
sacramento_test <- testing(sacramento_split)
```
Next, we’ll use cross\-validation to choose \\(K\\). In K\-NN classification, we used
accuracy to see how well our predictions matched the true labels. We cannot use
the same metric in the regression setting, since our predictions will almost never
*exactly* match the true response variable values. Therefore in the
context of K\-NN regression we will use root mean square prediction error
(RMSPE) instead. The mathematical formula for calculating RMSPE is:
\\\[\\text{RMSPE} \= \\sqrt{\\frac{1}{n}\\sum\\limits\_{i\=1}^{n}(y\_i \- \\hat{y}\_i)^2}\\]
where:
* \\(n\\) is the number of observations,
* \\(y\_i\\) is the observed value for the \\(i^\\text{th}\\) observation, and
* \\(\\hat{y}\_i\\) is the forecasted/predicted value for the \\(i^\\text{th}\\) observation.
In other words, we compute the *squared* difference between the predicted and true response
value for each observation in our test (or validation) set, compute the average, and then finally
take the square root. The reason we use the *squared* difference (and not just the difference)
is that the differences can be positive or negative, i.e., we can overshoot or undershoot the true
response value. Figure [7\.5](regression1.html#fig:07-verticalerrors) illustrates both positive and negative differences
between predicted and true response values.
So if we want to measure error—a notion of distance between our predicted and true response values—we
want to make sure that we are only adding up positive values, with larger positive values representing larger
mistakes.
If the predictions are very close to the true values, then
RMSPE will be small. If, on the other hand, the predictions are very
different from the true values, then RMSPE will be quite large. When we
use cross\-validation, we will choose the \\(K\\) that gives
us the smallest RMSPE.
Figure 7\.5: Scatter plot of price (USD) versus house size (square feet) with example predictions (blue line) and the error in those predictions compared with true response values (vertical lines).
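To make this calculation concrete, the following is a minimal sketch (an addition for illustration; the observed and predicted prices are invented rather than taken from the Sacramento data) of computing RMSPE by hand in R. In practice, `tidymodels` computes this quantity for us.
```
# hypothetical observed sale prices and corresponding predictions
observed <- c(350000, 420000, 280000)
predicted <- c(340000, 400000, 310000)

# squared differences, then the average, then the square root
rmspe <- sqrt(mean((observed - predicted)^2))
rmspe
```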
> **Note:** When using many code packages (`tidymodels` included), the evaluation output
> we will get to assess the prediction quality of
> our K\-NN regression models is labeled “RMSE”, or “root mean squared
> error”. Why is this so, and why not RMSPE?
> In statistics, we try to be very precise with our
> language to indicate whether we are calculating the prediction error on the
> training data (*in\-sample* prediction) versus on the testing data
> (*out\-of\-sample* prediction). When predicting and evaluating prediction quality on the training data, we
> say RMSE. By contrast, when predicting and evaluating prediction quality
> on the testing or validation data, we say RMSPE.
> The equation for calculating RMSE and RMSPE is exactly the same; all that changes is whether the \\(y\\)s are
> training or testing data. But many people just use RMSE for both,
> and rely on context to denote which data the root mean squared error is being calculated on.
Now that we know how we can assess how well our model predicts a numerical
value, let’s use R to perform cross\-validation and to choose the optimal \\(K\\).
First, we will create a recipe for preprocessing our data.
Note that we include standardization
in our preprocessing to build good habits, but since we only have one
predictor, it is technically not necessary; there is no risk of comparing two predictors
of different scales.
Next we create a model specification for K\-nearest neighbors regression. Note
that we use `set_mode("regression")`
now in the model specification to denote a regression problem, as opposed to the classification
problems from the previous chapters.
The use of `set_mode("regression")` essentially
tells `tidymodels` that we need to use different metrics (RMSPE, not accuracy)
for tuning and evaluation.
Then we create a 5\-fold cross\-validation object, and put the recipe and model specification together
in a workflow.
```
sacr_recipe <- recipe(price ~ sqft, data = sacramento_train) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
sacr_spec <- nearest_neighbor(weight_func = "rectangular",
neighbors = tune()) |>
set_engine("kknn") |>
set_mode("regression")
sacr_vfold <- vfold_cv(sacramento_train, v = 5, strata = price)
sacr_wkflw <- workflow() |>
add_recipe(sacr_recipe) |>
add_model(sacr_spec)
sacr_wkflw
```
```
## ══ Workflow ══════════
## Preprocessor: Recipe
## Model: nearest_neighbor()
##
## ── Preprocessor ──────────
## 2 Recipe Steps
##
## • step_scale()
## • step_center()
##
## ── Model ──────────
## K-Nearest Neighbor Model Specification (regression)
##
## Main Arguments:
## neighbors = tune()
## weight_func = rectangular
##
## Computational engine: kknn
```
Next we run cross\-validation for a grid of numbers of neighbors ranging from 1 to 200\.
The following code tunes
the model and returns the RMSPE for each number of neighbors. In the output of the `sacr_results`
results data frame, we see that the `neighbors` variable contains the value of \\(K\\),
the mean (`mean`) contains the value of the RMSPE estimated via cross\-validation,
and the standard error (`std_err`) contains a value corresponding to a measure of how uncertain we are in the mean value. A detailed treatment of this
is beyond the scope of this chapter; but roughly, if your estimated mean RMSPE is $100,000 and standard
error is $1,000, you can expect the *true* RMSPE to be somewhere roughly between $99,000 and $101,000 (although it may
fall outside this range). You may ignore the other columns in the metrics data frame,
as they do not provide any additional insight.
```
gridvals <- tibble(neighbors = seq(from = 1, to = 200, by = 3))
sacr_results <- sacr_wkflw |>
tune_grid(resamples = sacr_vfold, grid = gridvals) |>
collect_metrics() |>
filter(.metric == "rmse")
# show the results
sacr_results
```
```
## # A tibble: 67 × 7
## neighbors .metric .estimator mean n std_err .config
## <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 1 rmse standard 107206. 5 4102. Preprocessor1_Model01
## 2 4 rmse standard 90469. 5 3312. Preprocessor1_Model02
## 3 7 rmse standard 86580. 5 3062. Preprocessor1_Model03
## 4 10 rmse standard 85321. 5 3395. Preprocessor1_Model04
## 5 13 rmse standard 85045. 5 3641. Preprocessor1_Model05
## 6 16 rmse standard 84675. 5 3679. Preprocessor1_Model06
## 7 19 rmse standard 84776. 5 3984. Preprocessor1_Model07
## 8 22 rmse standard 84617. 5 3952. Preprocessor1_Model08
## 9 25 rmse standard 84953. 5 3929. Preprocessor1_Model09
## 10 28 rmse standard 84612. 5 3917. Preprocessor1_Model10
## # ℹ 57 more rows
```
Figure 7\.6: Effect of the number of neighbors on the RMSPE.
Figure [7\.6](regression1.html#fig:07-choose-k-knn-plot) visualizes how the RMSPE varies with the number of neighbors \\(K\\).
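The code used to generate Figure 7\.6 is not shown in the text; a sketch along the following lines (the plotting choices here are our own, using the `sacr_results` data frame computed above) would produce a similar plot of the estimated RMSPE against the number of neighbors.
```
# plot the cross-validation RMSPE estimate for each number of neighbors
sacr_tune_plot <- ggplot(sacr_results, aes(x = neighbors, y = mean)) +
  geom_point() +
  geom_line() +
  xlab("Neighbors") +
  ylab("RMSPE estimate") +
  theme(text = element_text(size = 12))
sacr_tune_plot
```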
We take the *minimum* RMSPE to find the best setting for the number of neighbors:
```
# show only the row of minimum RMSPE
sacr_min <- sacr_results |>
filter(mean == min(mean))
sacr_min
```
```
## # A tibble: 1 × 7
## neighbors .metric .estimator mean n std_err .config
## <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 52 rmse standard 84561. 5 4470. Preprocessor1_Model18
```
The smallest RMSPE occurs when \\(K \=\\) 52\.
7\.7 Underfitting and overfitting
---------------------------------
Similar to the setting of classification, by setting the number of neighbors
to be too small or too large, we cause the RMSPE to increase, as shown in
Figure [7\.6](regression1.html#fig:07-choose-k-knn-plot). What is happening here?
Figure [7\.7](regression1.html#fig:07-howK) visualizes the effect of different settings of \\(K\\) on the
regression model. Each plot shows the predicted values for house sale price from
our K\-NN regression model on the training data for 6 different values for \\(K\\): 1, 3, 25, 52, 250, and 680 (almost the entire training set).
For each model, we predict prices for the range of possible home sizes we
observed in the data set (here 500 to 5,000 square feet) and we plot the
predicted prices as a blue line.
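The code used to generate the panels of Figure 7\.7 is likewise not shown; a rough sketch along the following lines (the prediction grid and plotting details are our own assumptions) could produce similar predictions, reusing the `sacr_recipe` and `sacramento_train` objects defined above.
```
ks <- c(1, 3, 25, 52, 250, 680)
sqft_grid <- tibble(sqft = seq(from = 500, to = 5000, by = 10))

# fit one K-NN regression per value of K and predict over the grid of sizes
predictions_by_k <- map_dfr(ks, function(k) {
  spec_k <- nearest_neighbor(weight_func = "rectangular", neighbors = k) |>
    set_engine("kknn") |>
    set_mode("regression")
  fit_k <- workflow() |>
    add_recipe(sacr_recipe) |>
    add_model(spec_k) |>
    fit(data = sacramento_train)
  fit_k |>
    predict(sqft_grid) |>
    bind_cols(sqft_grid) |>
    mutate(K = k)
})

# one panel per value of K, with the training data underneath
ggplot(sacramento_train, aes(x = sqft, y = price)) +
  geom_point(alpha = 0.4) +
  geom_line(data = predictions_by_k,
            mapping = aes(x = sqft, y = .pred),
            color = "steelblue", linewidth = 1) +
  facet_wrap(~K) +
  xlab("House size (square feet)") +
  ylab("Price (USD)") +
  scale_y_continuous(labels = dollar_format()) +
  theme(text = element_text(size = 12))
```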
Figure 7\.7: Predicted values for house price (represented as a blue line) from K\-NN regression models for six different values for \\(K\\).
Figure [7\.7](regression1.html#fig:07-howK) shows that when \\(K\\) \= 1, the blue line runs perfectly
through (almost) all of our training observations.
This happens because our
predicted values for a given region (typically) depend on just a single observation.
In general, when \\(K\\) is too small, the line follows the training data quite
closely, even if it does not match it perfectly.
If we used a different training data set of house prices and sizes
from the Sacramento real estate market, we would end up with completely different
predictions. In other words, the model is *influenced too much* by the data.
Because the model follows the training data so closely, it will not make accurate
predictions on new observations which, generally, will not have the same fluctuations
as the original training data.
Recall from the classification
chapters that this behavior—where the model is influenced too much
by the noisy data—is called *overfitting*; we use this same term
in the context of regression.
What about the plots in Figure [7\.7](regression1.html#fig:07-howK) where \\(K\\) is quite large,
say, \\(K\\) \= 250 or 680?
In this case the blue line becomes extremely smooth, and actually becomes flat
once \\(K\\) is equal to the number of datapoints in the training set.
This happens because our predicted values for a given x value (here, home
size) depend on many neighboring observations; in the case where \(K\) is equal
to the size of the training set, the prediction is just the mean of the house prices
(completely ignoring the house size). In contrast to the \\(K\=1\\) example,
the smooth, inflexible blue line does not follow the training observations very closely.
In other words, the model is *not influenced enough* by the training data.
Recall from the classification
chapters that this behavior is called *underfitting*; we again use this same
term in the context of regression.
Ideally, what we want is neither of the two situations discussed above. Instead,
we would like a model that (1\) follows the overall “trend” in the training data, so the model
actually uses the training data to learn something useful, and (2\) does not follow
the noisy fluctuations, so that we can be confident that our model will transfer/generalize
well to other new data. If we explore
the other values for \\(K\\), in particular \\(K\\) \= 52
(as suggested by cross\-validation),
we can see it achieves this goal: it follows the increasing trend of house price
versus house size, but is not influenced too much by the idiosyncratic variations
in price. All of this is similar to how
the choice of \\(K\\) affects K\-nearest neighbors classification, as discussed in the previous
chapter.
7\.8 Evaluating on the test set
-------------------------------
To assess how well our model might do at predicting on unseen data, we will
assess its RMSPE on the test data. To do this, we will first
re\-train our K\-NN regression model on the entire training data set,
using \\(K \=\\) 52 neighbors. Then we will
use `predict` to make predictions on the test data, and use the `metrics`
function again to compute the summary of regression quality. Because
we specify that we are performing regression in `set_mode`, the `metrics`
function knows to output a quality summary related to regression, and not, say, classification.
```
kmin <- sacr_min |> pull(neighbors)
sacr_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = kmin) |>
set_engine("kknn") |>
set_mode("regression")
sacr_fit <- workflow() |>
add_recipe(sacr_recipe) |>
add_model(sacr_spec) |>
fit(data = sacramento_train)
sacr_summary <- sacr_fit |>
predict(sacramento_test) |>
bind_cols(sacramento_test) |>
metrics(truth = price, estimate = .pred) |>
filter(.metric == 'rmse')
sacr_summary
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 rmse standard 90529.
```
Our final model’s test error as assessed by RMSPE
is $90,529\.
Note that RMSPE is measured in the same units as the response variable.
In other words, on new observations, we expect the error in our prediction to be
*roughly* $90,529\.
From one perspective, this is good news: this is about the same as the cross\-validation
RMSPE estimate of our tuned model
(which was $84,561\),
so we can say that the model appears to generalize well
to new data that it has never seen before.
However, much like in the case of K\-NN classification, whether this value for RMSPE is *good*—i.e.,
whether an error of around $90,529
is acceptable—depends entirely on the application.
In this application, this error
is not prohibitively large, but it is not negligible either;
$90,529
might represent a substantial fraction of a home buyer’s budget, and
could make or break whether or not they could afford to put an offer on a house.
Finally, Figure [7\.8](regression1.html#fig:07-predict-all) shows the predictions that our final
model makes across the range of house sizes we might encounter in the
Sacramento area.
Note that instead of predicting the house price only for those house sizes that happen to appear in our data,
we predict it for evenly spaced values between the minimum and maximum in the data set
(roughly 500 to 5000 square feet).
We superimpose this prediction line on a scatter
plot of the original housing price data,
so that we can qualitatively assess if the model seems to fit the data well.
You have already seen a
few plots like this in this chapter, but here we also provide the code that
generated it as a learning opportunity.
```
sqft_prediction_grid <- tibble(
sqft = seq(
from = sacramento |> select(sqft) |> min(),
to = sacramento |> select(sqft) |> max(),
by = 10
)
)
sacr_preds <- sacr_fit |>
predict(sqft_prediction_grid) |>
bind_cols(sqft_prediction_grid)
plot_final <- ggplot(sacramento, aes(x = sqft, y = price)) +
geom_point(alpha = 0.4) +
geom_line(data = sacr_preds,
mapping = aes(x = sqft, y = .pred),
color = "steelblue",
linewidth = 1) +
xlab("House size (square feet)") +
ylab("Price (USD)") +
scale_y_continuous(labels = dollar_format()) +
ggtitle(paste0("K = ", kmin)) +
theme(text = element_text(size = 12))
plot_final
```
Figure 7\.8: Predicted values of house price (blue line) for the final K\-NN regression model.
7\.9 Multivariable K\-NN regression
-----------------------------------
As in K\-NN classification, we can use multiple predictors in K\-NN regression.
In this setting, we have the same concerns regarding the scale of the predictors. Once again,
predictions are made by identifying the \\(K\\)
observations that are nearest to the new point we want to predict; any
variables that are on a large scale will have a much larger effect than
variables on a small scale. But since the `recipe` we built above scales and centers
all predictor variables, this is handled for us.
Note that we also have the same concern regarding the selection of predictors
in K\-NN regression as in K\-NN classification: having more predictors is **not** always
better, and the choice of which predictors to use has a potentially large influence
on the quality of predictions. Fortunately, we can use the predictor selection
algorithm from the classification chapter in K\-NN regression as well.
As the algorithm is the same, we will not cover it again in this chapter.
We will now demonstrate a multivariable K\-NN regression analysis of the
Sacramento real estate data using `tidymodels`. This time we will use
house size (measured in square feet) as well as number of bedrooms as our
predictors, and continue to use house sale price as our response variable
that we are trying to predict.
It is always a good practice to do exploratory data analysis, such as
visualizing the data, before we start modeling the data. Figure [7\.9](regression1.html#fig:07-bedscatter)
shows that the number of bedrooms might provide useful information
to help predict the sale price of a house.
```
plot_beds <- sacramento |>
ggplot(aes(x = beds, y = price)) +
geom_point(alpha = 0.4) +
labs(x = 'Number of Bedrooms', y = 'Price (USD)') +
theme(text = element_text(size = 12))
plot_beds
```
Figure 7\.9: Scatter plot of the sale price of houses versus the number of bedrooms.
Figure [7\.9](regression1.html#fig:07-bedscatter) shows that as the number of bedrooms increases,
the house sale price tends to increase as well, but that the relationship
is quite weak. Does adding the number of bedrooms
to our model improve our ability to predict price? To answer that
question, we will have to create a new K\-NN regression
model using house size and number of bedrooms, and then we can compare it to
the model we previously came up with that only used house
size. Let’s do that now!
First we’ll build a new model specification and recipe for the analysis. Note that
we use the formula `price ~ sqft + beds` to denote that we have two predictors,
and set `neighbors = tune()` to tell `tidymodels` to tune the number of neighbors for us.
```
sacr_recipe <- recipe(price ~ sqft + beds, data = sacramento_train) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
sacr_spec <- nearest_neighbor(weight_func = "rectangular",
neighbors = tune()) |>
set_engine("kknn") |>
set_mode("regression")
```
Next, we’ll use 5\-fold cross\-validation to choose the number of neighbors via the minimum RMSPE:
```
gridvals <- tibble(neighbors = seq(1, 200))
sacr_multi <- workflow() |>
add_recipe(sacr_recipe) |>
add_model(sacr_spec) |>
tune_grid(sacr_vfold, grid = gridvals) |>
collect_metrics() |>
filter(.metric == "rmse") |>
filter(mean == min(mean))
sacr_k <- sacr_multi |>
pull(neighbors)
sacr_multi
```
```
## # A tibble: 1 × 7
## neighbors .metric .estimator mean n std_err .config
## <int> <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 11 rmse standard 81839. 5 3108. Preprocessor1_Model011
```
Here we see that the smallest estimated RMSPE from cross\-validation occurs when \\(K \=\\) 11\.
If we want to compare this multivariable K\-NN regression model to the model with only a single
predictor *as part of the model tuning process* (e.g., if we are running forward selection as described
in the chapter on evaluating and tuning classification models),
then we must compare the RMSPE estimated using only the training data via cross\-validation.
Looking back, the estimated cross\-validation RMSPE for the single\-predictor
model was $84,561\.
The estimated cross\-validation RMSPE for the multivariable model is
$81,839\.
Thus in this case, we did not improve the model
by a large amount by adding this additional predictor.
Regardless, let’s continue the analysis to see how we can make predictions with a multivariable K\-NN regression model
and evaluate its performance on test data. We first need to re\-train the model on the entire
training data set with \\(K \=\\) 11, and then use that model to make predictions
on the test data.
```
sacr_spec <- nearest_neighbor(weight_func = "rectangular",
neighbors = sacr_k) |>
set_engine("kknn") |>
set_mode("regression")
knn_mult_fit <- workflow() |>
add_recipe(sacr_recipe) |>
add_model(sacr_spec) |>
fit(data = sacramento_train)
knn_mult_preds <- knn_mult_fit |>
predict(sacramento_test) |>
bind_cols(sacramento_test)
knn_mult_mets <- metrics(knn_mult_preds, truth = price, estimate = .pred) |>
filter(.metric == 'rmse')
knn_mult_mets
```
```
## # A tibble: 1 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 rmse standard 90862.
```
This time, when we performed K\-NN regression on the same data set, but also
included number of bedrooms as a predictor, we obtained a RMSPE test error
of $90,862\.
Figure [7\.10](regression1.html#fig:07-knn-mult-viz) visualizes the model’s predictions overlaid on top of the data. This
time the predictions are a surface in 3D space, instead of a line in 2D space, as we have 2
predictors instead of 1\.
Figure 7\.10: K\-NN regression model’s predictions represented as a surface in 3D space overlaid on top of the data using three predictors (price, house size, and the number of bedrooms). Note that in general we recommend against using 3D visualizations; here we use a 3D visualization only to illustrate what the surface of predictions looks like for learning purposes.
We can see that the predictions in this case, where we have 2 predictors, form
a surface instead of a line. Because the newly added predictor (number of bedrooms) is
related to price (as price changes, so does number of bedrooms)
and is not totally determined by house size (our other predictor),
we get additional and useful information for making our
predictions. For example, in this model we would predict that the cost of a
house with a size of 2,500 square feet generally increases slightly as the number
of bedrooms increases. Without the number of bedrooms as an additional predictor,
we would predict the same price for two such houses that differ only in their number of bedrooms.
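As a small check of this claim (an addition, not part of the original analysis), we could ask the fitted multivariable model for predictions on two hypothetical 2,500 square\-foot houses that differ only in the number of bedrooms:
```
two_houses <- tibble(sqft = c(2500, 2500), beds = c(2, 4))

knn_mult_fit |>
  predict(two_houses) |>
  bind_cols(two_houses)
```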
7\.10 Strengths and limitations of K\-NN regression
---------------------------------------------------
As with K\-NN classification (or any prediction algorithm for that matter), K\-NN
regression has both strengths and weaknesses. Some are listed here:
**Strengths:** K\-nearest neighbors regression
1. is a simple, intuitive algorithm,
2. requires few assumptions about what the data must look like, and
3. works well with non\-linear relationships (i.e., if the relationship is not a straight line).
**Weaknesses:** K\-nearest neighbors regression
1. becomes very slow as the training data gets larger,
2. may not perform well with a large number of predictors, and
3. may not predict well beyond the range of values input in your training data.
7\.11 Exercises
---------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Regression I: K\-nearest neighbors” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
| Data Science |
ubc-dsci.github.io | https://ubc-dsci.github.io/introduction-to-datascience/regression2.html |
Chapter 8 Regression II: linear regression
==========================================
8\.1 Overview
-------------
Up to this point, we have solved all of our predictive problems—both classification
and regression—using K\-nearest neighbors (K\-NN)\-based approaches. In the context of regression,
there is another commonly used method known as *linear regression*. This chapter provides an introduction
to the basic concept of linear regression, shows how to use `tidymodels` to perform linear regression in R,
and characterizes its strengths and weaknesses compared to K\-NN regression. The focus is, as usual,
on the case where there is a single predictor and single response variable of interest; but the chapter
concludes with an example using *multivariable linear regression* when there is more than one
predictor.
8\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Use R to fit simple and multivariable linear regression models on training data.
* Evaluate the linear regression model on test data.
* Compare and contrast predictions obtained from K\-nearest neighbors regression to those obtained using linear regression from the same data set.
* Describe how linear regression is affected by outliers and multicollinearity.
8\.3 Simple linear regression
-----------------------------
At the end of the previous chapter, we noted some limitations of K\-NN regression.
While the method is simple and easy to understand, K\-NN regression does not
predict well beyond the range of the predictors in the training data, and
the method gets significantly slower as the training data set grows.
Fortunately, there is an alternative to K\-NN regression—*linear regression*—that addresses
both of these limitations. Linear regression is also very commonly
used in practice because it provides an interpretable mathematical equation that describes
the relationship between the predictor and response variables. In this first part of the chapter, we will focus on *simple* linear regression,
which involves only one predictor variable and one response variable; later on, we will consider
*multivariable* linear regression, which involves multiple predictor variables.
Like K\-NN regression, simple linear regression involves
predicting a numerical response variable (like race time, house price, or height);
but *how* it makes those predictions for a new observation is quite different from K\-NN regression.
Instead of looking at the K nearest neighbors and averaging
over their values for a prediction, in simple linear regression, we create a
straight line of best fit through the training data and then
“look up” the prediction using the line.
> **Note:** Although we did not cover it in earlier chapters, there
> is another popular method for classification called *logistic
> regression* (it is used for classification even though the name, somewhat confusingly,
> has the word “regression” in it). In logistic regression—similar to linear regression—you
> “fit” the model to the training data and then “look up” the prediction for each new observation.
> Logistic regression and K\-NN classification have an advantage/disadvantage comparison
> similar to that of linear regression and K\-NN
> regression. It is useful to have a good understanding of linear regression before learning about
> logistic regression. After reading this chapter, see the “Additional Resources” section at the end of the
> classification chapters to learn more about logistic regression.
Let’s return to the Sacramento housing data from Chapter [7](regression1.html#regression1) to learn
how to apply linear regression and compare it to K\-NN regression. For now, we
will consider
a smaller version of the housing data to help make our visualizations clear.
Recall our predictive question: can we use the size of a house in the Sacramento, CA area to predict
its sale price? In particular, recall that we have come across a new 2,000 square\-foot house we are interested
in purchasing with an advertised list price of
$350,000\. Should we offer the list price, or is that over/undervalued?
To answer this question using simple linear regression, we use the data we have
to draw the straight line of best fit through our existing data points.
The small subset of data as well as the line of best fit are shown
in Figure [8\.1](regression2.html#fig:08-lin-reg1).
Figure 8\.1: Scatter plot of sale price versus size with line of best fit for subset of the Sacramento housing data.
The equation for the straight line is:
\\\[\\text{house sale price} \= \\beta\_0 \+ \\beta\_1 \\cdot (\\text{house size}),\\]
where
* \\(\\beta\_0\\) is the *vertical intercept* of the line (the price when house size is 0\)
* \\(\\beta\_1\\) is the *slope* of the line (how quickly the price increases as you increase house size)
Therefore using the data to find the line of best fit is equivalent to finding coefficients
\\(\\beta\_0\\) and \\(\\beta\_1\\) that *parametrize* (correspond to) the line of best fit.
Now of course, in this particular problem, the idea of a 0 square\-foot house is a bit silly;
but you can think of \\(\\beta\_0\\) here as the “base price,” and
\\(\\beta\_1\\) as the increase in price for each square foot of space.
Let’s push this thought even further: what would happen in the equation for the line if you
tried to evaluate the price of a house with size 6 *million* square feet?
Or what about *negative* 2,000 square feet? As it turns out, nothing in the formula breaks; linear
regression will happily make predictions for nonsensical predictor values if you ask it to. But even though
you *can* make these wild predictions, you shouldn’t. You should only make predictions roughly within
the range of your original data, and perhaps a bit beyond it only if it makes sense. For example,
the data in Figure [8\.1](regression2.html#fig:08-lin-reg1) only reaches around 800 square feet on the low end, but
it would probably be reasonable to use the linear regression model to make a prediction at 600 square feet, say.
Back to the example! Once we have the coefficients \\(\\beta\_0\\) and \\(\\beta\_1\\), we can use the equation
above to evaluate the predicted sale price given the value we have for the
predictor variable—here 2,000 square feet. Figure
[8\.2](regression2.html#fig:08-lin-reg2) demonstrates this process.
Figure 8\.2: Scatter plot of sale price versus size with line of best fit and a red dot at the predicted sale price for a 2,000 square\-foot home.
By using simple linear regression on this small data set to predict the sale price
for a 2,000 square\-foot house, we get a predicted value of
$295,564\. But wait a minute… how
exactly does simple linear regression choose the line of best fit? Many
different lines could be drawn through the data points.
Some plausible examples are shown in Figure [8\.3](regression2.html#fig:08-several-lines).
Figure 8\.3: Scatter plot of sale price versus size with many possible lines that could be drawn through the data points.
Simple linear regression chooses the straight line of best fit by choosing
the line that minimizes the **average squared vertical distance** between itself and
each of the observed data points in the training data (equivalent to minimizing the RMSE).
Figure [8\.4](regression2.html#fig:08-verticalDistToMin) illustrates these vertical distances as red lines. Finally, to assess the predictive
accuracy of a simple linear regression model,
we use RMSPE—the same measure of predictive performance we used with K\-NN regression.
Figure 8\.4: Scatter plot of sale price versus size with red lines denoting the vertical distances between the predicted values and the observed data points.
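Written out explicitly, the coefficients of the line of best fit are the values that minimize this average squared vertical distance over the training data; a standard way to express this (added here for reference) is
\\\[\\min\_{b\_0, b\_1} \\frac{1}{n}\\sum\\limits\_{i\=1}^{n}\\Big(y\_i \- (b\_0 \+ b\_1 x\_i)\\Big)^2,\\]
where \\(x\_i\\) and \\(y\_i\\) are the house size and observed sale price of the \\(i^\\text{th}\\) training observation, and the fitted \\(\\beta\_0\\) and \\(\\beta\_1\\) are the values of \\(b\_0\\) and \\(b\_1\\) that achieve the minimum.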
8\.4 Linear regression in R
---------------------------
We can perform simple linear regression in R using `tidymodels` in a
very similar manner to how we performed K\-NN regression.
To do this, instead of creating a `nearest_neighbor` model specification with
the `kknn` engine, we use a `linear_reg` model specification
with the `lm` engine. Another difference is that we do not need to choose \\(K\\) in the
context of linear regression, and so we do not need to perform cross\-validation.
Below we illustrate how we can use the usual `tidymodels` workflow to predict house sale
price given house size using a simple linear regression approach using the full
Sacramento real estate data set.
As usual, we start by loading packages, setting the seed, loading data, and putting some test data away in a lock box that we
can come back to after we choose our final model. Let’s take care of that now.
```
library(tidyverse)
library(tidymodels)
set.seed(7)
sacramento <- read_csv("data/sacramento.csv")
sacramento_split <- initial_split(sacramento, prop = 0.75, strata = price)
sacramento_train <- training(sacramento_split)
sacramento_test <- testing(sacramento_split)
```
Now that we have our training data, we will create the model specification
and recipe, and fit our simple linear regression model:
```
lm_spec <- linear_reg() |>
set_engine("lm") |>
set_mode("regression")
lm_recipe <- recipe(price ~ sqft, data = sacramento_train)
lm_fit <- workflow() |>
add_recipe(lm_recipe) |>
add_model(lm_spec) |>
fit(data = sacramento_train)
lm_fit
```
```
## ══ Workflow [trained] ══════════
## Preprocessor: Recipe
## Model: linear_reg()
##
## ── Preprocessor ──────────
## 0 Recipe Steps
##
## ── Model ──────────
##
## Call:
## stats::lm(formula = ..y ~ ., data = data)
##
## Coefficients:
## (Intercept) sqft
## 18450.3 134.8
```
> **Note:** An additional difference that you will notice here is that we do
> not standardize (i.e., scale and center) our
> predictors. In K\-nearest neighbors models, recall that the model fit changes
> depending on whether we standardize first or not. In linear regression,
> standardization does not affect the fit (it *does* affect the coefficients in
> the equation, though!). So you can standardize if you want—it won’t
> hurt anything—but if you leave the predictors in their original form,
> the best fit coefficients are usually easier to interpret afterward.
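As a quick check of the claim in the note above (this sketch is an addition, not part of the original analysis), we could fit the same linear regression with standardized predictors and confirm that its predictions agree with those of the unstandardized fit:
```
# same model, but with the predictor scaled and centered first
lm_recipe_std <- recipe(price ~ sqft, data = sacramento_train) |>
  step_scale(all_predictors()) |>
  step_center(all_predictors())

lm_fit_std <- workflow() |>
  add_recipe(lm_recipe_std) |>
  add_model(lm_spec) |>
  fit(data = sacramento_train)

# predictions from the two fits should match (up to tiny numerical differences)
bind_cols(
  predict(lm_fit, sacramento_test) |> rename(unstd_pred = .pred),
  predict(lm_fit_std, sacramento_test) |> rename(std_pred = .pred)
)
```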
Our coefficients are
(intercept) \\(\\beta\_0\=\\) 18450
and (slope) \\(\\beta\_1\=\\) 135\.
This means that the equation of the line of best fit is
\\\[\\text{house sale price} \= 18450 \+ 135\\cdot (\\text{house size}).\\]
In other words, the model predicts that houses
start at $18,450 for 0 square feet, and that
every extra square foot increases the cost of
the house by $135\. Finally,
we predict on the test data set to assess how well our model does:
```
lm_test_results <- lm_fit |>
predict(sacramento_test) |>
bind_cols(sacramento_test) |>
metrics(truth = price, estimate = .pred)
lm_test_results
```
```
## # A tibble: 3 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 rmse standard 88528.
## 2 rsq standard 0.608
## 3 mae standard 61892.
```
Our final model’s test error as assessed by RMSPE
is $88,528\.
Remember that this is in units of the response variable, and here that
is US Dollars (USD). Does this mean our model is “good” at predicting house
sale price based off of the predictor of home size? Again, answering this is
tricky and requires knowledge of how you intend to use the prediction.
To visualize the simple linear regression model, we can plot the predicted house
sale price across all possible house sizes we might encounter.
Since our model is linear,
we only need to compute the predicted price of the minimum and maximum house size,
and then connect them with a straight line.
We superimpose this prediction line on a scatter
plot of the original housing price data,
so that we can qualitatively assess if the model seems to fit the data well.
Figure [8\.5](regression2.html#fig:08-lm-predict-all) displays the result.
```
sqft_prediction_grid <- tibble(
sqft = c(
sacramento |> select(sqft) |> min(),
sacramento |> select(sqft) |> max()
)
)
sacr_preds <- lm_fit |>
predict(sqft_prediction_grid) |>
bind_cols(sqft_prediction_grid)
lm_plot_final <- ggplot(sacramento, aes(x = sqft, y = price)) +
geom_point(alpha = 0.4) +
geom_line(data = sacr_preds,
mapping = aes(x = sqft, y = .pred),
color = "steelblue",
linewidth = 1) +
xlab("House size (square feet)") +
ylab("Price (USD)") +
scale_y_continuous(labels = dollar_format()) +
theme(text = element_text(size = 12))
lm_plot_final
```
Figure 8\.5: Scatter plot of sale price versus size with line of best fit for the full Sacramento housing data.
We can extract the coefficients from our model by accessing the
fit object that is output by the `fit` function; we first have to extract
it from the workflow using the `extract_fit_parsnip` function, and then apply
the `tidy` function to convert the result into a data frame:
```
coeffs <- lm_fit |>
extract_fit_parsnip() |>
tidy()
coeffs
```
```
## # A tibble: 2 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 18450. 7916. 2.33 2.01e- 2
## 2 sqft 135. 4.31 31.2 1.37e-134
```
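To connect these coefficients back to the running example, the following short sketch (an addition for illustration) asks the fitted workflow for the predicted price of the 2,000 square\-foot house we considered earlier in the chapter:
```
# with the rounded coefficients above, this is about 18450 + 135 * 2000,
# i.e., roughly $288,000
lm_fit |>
  predict(tibble(sqft = 2000))
```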
8\.5 Comparing simple linear and K\-NN regression
-------------------------------------------------
Now that we have a general understanding of both simple linear and K\-NN
regression, we can start to compare and contrast these methods as well as the
predictions made by them. To start, let’s look at the visualization of the
simple linear regression model predictions for the Sacramento real estate data
(predicting price from house size) and the “best” K\-NN regression model
obtained from the same problem, shown in Figure [8\.6](regression2.html#fig:08-compareRegression).
Figure 8\.6: Comparison of simple linear regression and K\-NN regression.
What differences do we observe in Figure [8\.6](regression2.html#fig:08-compareRegression)? One obvious
difference is the shape of the blue lines. In simple linear regression we are
restricted to a straight line, whereas in K\-NN regression our line is much more
flexible and can be quite wiggly. But there is a major interpretability advantage in limiting the
model to a straight line. A
straight line can be defined by two numbers, the
vertical intercept and the slope. The intercept tells us what the prediction is when
all of the predictors are equal to 0; and the slope tells us how much we predict the response
variable to increase for each one\-unit increase in the predictor
variable. K\-NN regression, as simple as it is to implement and understand, has no such
interpretability from its wiggly line.
There can, however, also be a disadvantage to using a simple linear regression
model in some cases, particularly when the relationship between the response and
the predictor is not linear, but instead some other shape (e.g., curved or oscillating). In
these cases the prediction model from a simple linear regression
will underfit, meaning that the model's predicted values do not
match the actual observed values very well. Such a model would probably have a
quite high RMSE when assessing model goodness of fit on the training data and
a quite high RMSPE when assessing model prediction quality on a test data
set. On such a data set, K\-NN regression may fare better. Additionally, there
are other types of regression you can learn about in future books that may do
even better at predicting with such data.
How do these two models compare on the Sacramento house prices data set? In
Figure [8\.6](regression2.html#fig:08-compareRegression), we also printed the RMSPE as calculated from
predicting on the test data set that was not used to train/fit the models. The RMSPE for the simple linear
regression model is slightly lower than the RMSPE for the K\-NN regression model.
Considering that the simple linear regression model is also more interpretable,
if we were comparing these in practice we would likely choose to use the simple
linear regression model.
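If you want to reproduce this comparison yourself, a sketch along the following lines would work. Here `knn_fit` is a placeholder name for the tuned K\-NN regression workflow from the previous chapter; substitute whatever object you actually created there.
```
# Test-set RMSPE for the simple linear regression model.
lm_rmspe <- lm_fit |>
  predict(sacramento_test) |>
  bind_cols(sacramento_test) |>
  rmse(truth = price, estimate = .pred)

# Test-set RMSPE for the (hypothetical) tuned K-NN regression workflow.
knn_rmspe <- knn_fit |>
  predict(sacramento_test) |>
  bind_cols(sacramento_test) |>
  rmse(truth = price, estimate = .pred)

# Stack the two results for a side-by-side comparison.
bind_rows(
  lm_rmspe |> mutate(model = "simple linear regression"),
  knn_rmspe |> mutate(model = "K-NN regression")
)
```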
Finally, note that the K\-NN regression model becomes “flat”
at the left and right boundaries of the data, while the linear model
predicts a constant slope. Predicting outside the range of the observed
data is known as *extrapolation*; K\-NN and linear models behave quite differently
when extrapolating. Depending on the application, the flat
or constant slope trend may make more sense. For example, if our housing
data were slightly different, the linear model may have actually predicted
a *negative* price for a small house (if the intercept \\(\\beta\_0\\) was negative),
which obviously does not match reality. On the other hand, the trend of increasing
house size corresponding to increasing house price probably continues for large houses,
so the “flat” extrapolation of K\-NN likely does not match reality.
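To see extrapolation concretely, you can ask the fitted linear model for predictions well outside the observed range of house sizes; the 100 and 10,000 square\-foot values below are arbitrary choices for illustration. The linear model simply follows its slope, whereas K\-NN predictions at those sizes would repeat the flat boundary values described above.
```
# Predict at house sizes far outside the range of the training data.
extrapolation_grid <- tibble(sqft = c(100, 10000))
lm_fit |>
  predict(extrapolation_grid) |>
  bind_cols(extrapolation_grid)
```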
8\.6 Multivariable linear regression
------------------------------------
As in K\-NN classification and K\-NN regression, we can move beyond the simple
case of only one predictor to the case with multiple predictors,
known as *multivariable linear regression*.
To do this, we follow a very similar approach to what we did for
K\-NN regression: we just add more predictors to the model formula in the
recipe. But recall that we do not need to use cross\-validation to choose any parameters,
nor do we need to standardize (i.e., center and scale) the data for linear regression.
Note once again that we have the same concerns regarding multiple predictors
as in the settings of multivariable K\-NN regression and classification: having more predictors is **not** always
better. But because the same predictor selection
algorithm from the classification chapter extends to the setting of linear regression,
it will not be covered again in this chapter.
We will demonstrate multivariable linear regression using the Sacramento real estate
data with both house size
(measured in square feet) as well as number of bedrooms as our predictors, and
continue to use house sale price as our response variable. We will start by
changing the formula in the recipe to
include both the `sqft` and `beds` variables as predictors:
```
mlm_recipe <- recipe(price ~ sqft + beds, data = sacramento_train)
```
Now we can build our workflow and fit the model:
```
mlm_fit <- workflow() |>
add_recipe(mlm_recipe) |>
add_model(lm_spec) |>
fit(data = sacramento_train)
mlm_fit
```
```
## ══ Workflow [trained] ══════════
## Preprocessor: Recipe
## Model: linear_reg()
##
## ── Preprocessor ──────────
## 0 Recipe Steps
##
## ── Model ──────────
##
## Call:
## stats::lm(formula = ..y ~ ., data = data)
##
## Coefficients:
## (Intercept) sqft beds
## 72547.8 160.6 -29644.3
```
And finally, we make predictions on the test data set to assess the quality of our model:
```
lm_mult_test_results <- mlm_fit |>
predict(sacramento_test) |>
bind_cols(sacramento_test) |>
metrics(truth = price, estimate = .pred)
lm_mult_test_results
```
```
## # A tibble: 3 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 rmse standard 88739.
## 2 rsq standard 0.603
## 3 mae standard 61732.
```
Our model’s test error as assessed by RMSPE
is $88,739\.
In the case of two predictors, we can plot the predictions made by our linear regression model, which now form a *plane* of best fit, as
shown in Figure [8\.7](regression2.html#fig:08-3DlinReg).
Figure 8\.7: Linear regression plane of best fit overlaid on top of the data (using price, house size, and number of bedrooms as predictors). Note that in general we recommend against using 3D visualizations; here we use a 3D visualization only to illustrate what the regression plane looks like for learning purposes.
We see that the predictions from linear regression with two predictors form a
flat plane. This is the hallmark of linear regression, and differs from the
wiggly, flexible surface we get from other methods such as K\-NN regression.
As discussed, this can be advantageous in one aspect, which is that for each
predictor, we can get slopes/intercept from linear regression, and thus describe the
plane mathematically. We can extract those slope values from our model object
as shown below:
```
mcoeffs <- mlm_fit |>
extract_fit_parsnip() |>
tidy()
mcoeffs
```
```
## # A tibble: 3 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 72548. 11670. 6.22 8.76e- 10
## 2 sqft 161. 5.93 27.1 8.34e-111
## 3 beds -29644. 4799. -6.18 1.11e- 9
```
And then use those slopes to write a mathematical equation to describe the prediction plane:
\\\[\\text{house sale price} \= \\beta\_0 \+ \\beta\_1\\cdot(\\text{house size}) \+ \\beta\_2\\cdot(\\text{number of bedrooms}),\\]
where:
* \\(\\beta\_0\\) is the *vertical intercept* of the hyperplane (the price when both house size and number of bedrooms are 0\)
* \\(\\beta\_1\\) is the *slope* for the first predictor (how quickly the price changes as you increase house size, holding number of bedrooms constant)
* \\(\\beta\_2\\) is the *slope* for the second predictor (how quickly the price changes as you increase the number of bedrooms, holding house size constant)
Finally, we can fill in the values for \\(\\beta\_0\\), \\(\\beta\_1\\) and \\(\\beta\_2\\) from the model output above
to create the equation of the plane of best fit to the data:
\\\[\\text{house sale price} \= 72548 \+ 161\\cdot (\\text{house size}) \-29644 \\cdot (\\text{number of bedrooms})\\]
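To see the plane equation in action, the sketch below compares a by\-hand computation using the rounded coefficients above with the workflow's `predict()`; the 2,000 square feet and 3 bedrooms are made\-up values for illustration, and the two results will differ slightly because of rounding.
```
# A hypothetical new house: 2,000 square feet, 3 bedrooms.
new_house <- tibble(sqft = 2000, beds = 3)

# By hand, using the rounded coefficients from the equation above.
72548 + 161 * 2000 - 29644 * 3

# The exact prediction from the fitted workflow.
mlm_fit |> predict(new_house)
```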
This model is more interpretable than the multivariable K\-NN
regression model; we can write a mathematical equation that explains how
each predictor is affecting the predictions. But as always, we should
question how well multivariable linear regression is doing compared to
the other tools we have, such as simple linear regression
and multivariable K\-NN regression. If this comparison is part of
the model tuning process—for example, if we are trying
out many different sets of predictors for multivariable linear
and K\-NN regression—we must perform this comparison using
cross\-validation on only our training data. But if we have already
decided on a small number (e.g., 2 or 3\) of tuned candidate models and
we want to make a final comparison, we can do so by comparing the prediction
error of the methods on the test data.
```
lm_mult_test_results
```
```
## # A tibble: 3 × 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 rmse standard 88739.
## 2 rsq standard 0.603
## 3 mae standard 61732.
```
We obtain an RMSPE for the multivariable linear regression model
of $88,739\.45\. This prediction error
is less than the prediction error for the multivariable K\-NN regression model,
indicating that we should likely choose linear regression for predictions of
house sale price on this data set. Revisiting the simple linear regression model
with only a single predictor from earlier in this chapter, we see that the RMSPE for that model was
$88,527\.75,
which is almost the same as that of our more complex model.
As mentioned earlier, this is not always the case: including more
predictors can either improve or worsen the prediction performance on unseen
test data.
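If you would like the two sets of test metrics side by side, the stored results can simply be stacked:
```
# Combine the simple and multivariable test metrics and keep only RMSPE.
bind_rows(
  lm_test_results |> mutate(model = "price ~ sqft"),
  lm_mult_test_results |> mutate(model = "price ~ sqft + beds")
) |>
  filter(.metric == "rmse")
```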
8\.7 Multicollinearity and outliers
-----------------------------------
What can go wrong when performing (possibly multivariable) linear regression?
This section will introduce two common issues—*outliers* and *collinear predictors*—and
illustrate their impact on predictions.
### 8\.7\.1 Outliers
Outliers are data points that do not follow the usual pattern of the rest of the data.
In the setting of linear regression, these are points that
have a vertical distance to the line of best fit that is either much higher or much lower
than you might expect based on the rest of the data. The problem with outliers is that
they can have *too much influence* on the line of best fit. In general, it is very difficult
to judge accurately which data are outliers without advanced techniques that are beyond
the scope of this book.
But to illustrate what can happen when you have outliers, Figure [8\.8](regression2.html#fig:08-lm-outlier)
shows a small subset of the Sacramento housing data again, except we have added a *single* data point (highlighted
in red). This house is 5,000 square feet in size, and sold for only $50,000\. Unbeknownst to the
data analyst, this house was sold by a parent to their child for an absurdly low price. Of course,
this is not representative of the real housing market values that the other data points follow;
the data point is an *outlier*. In blue we plot the original line of best fit, and in red
we plot the new line of best fit including the outlier. You can see how different the red line
is from the blue line, which is entirely caused by that one extra outlier data point.
Figure 8\.8: Scatter plot of a subset of the data, with outlier highlighted in red.
Fortunately, if you have enough data, the inclusion of one or two
outliers—as long as their values are not *too* wild—will
typically not have a large effect on the line of best fit. Figure
[8\.9](regression2.html#fig:08-lm-outlier-2) shows how that same outlier data point from earlier
influences the line of best fit when we are working with the entire original
Sacramento training data. You can see that with this larger data set, the line
changes much less when adding the outlier.
Nevertheless, it is still important when working with linear regression to critically
think about how much any individual data point is influencing the model.
Figure 8\.9: Scatter plot of the full data, with outlier highlighted in red.
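One way to explore this influence yourself is to append a single artificial outlier to the training data, refit, and compare the coefficients with the original fit. The sketch below uses a made\-up 5,000 square\-foot, $50,000 sale like the one shown in the figures.
```
# Add one artificial outlier row (other columns are left as NA; only
# price and sqft are used by the recipe).
sacramento_outlier <- sacramento_train |>
  bind_rows(tibble(sqft = 5000, price = 50000))

# Refit the simple linear regression on the data with the outlier.
outlier_fit <- workflow() |>
  add_recipe(recipe(price ~ sqft, data = sacramento_outlier)) |>
  add_model(lm_spec) |>
  fit(data = sacramento_outlier)

# Compare the coefficients of the two fits.
bind_rows(
  lm_fit |> extract_fit_parsnip() |> tidy() |> mutate(fit = "original"),
  outlier_fit |> extract_fit_parsnip() |> tidy() |> mutate(fit = "with outlier")
)
```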
### 8\.7\.2 Multicollinearity
The second, and much more subtle, issue can occur when performing multivariable
linear regression. In particular, if you include multiple predictors that are
strongly linearly related to one another, the coefficients that describe the
plane of best fit can be very unreliable—small changes to the data can
result in large changes in the coefficients. Consider an extreme example using
the Sacramento housing data where each house was measured twice, by two different people.
Since the two people are each slightly inaccurate, the two measurements might
not agree exactly, but they are very strongly linearly related to each other,
as shown in Figure [8\.10](regression2.html#fig:08-lm-multicol).
Figure 8\.10: Scatter plot of house size (in square feet) measured by person 1 versus house size (in square feet) measured by person 2\.
If we again fit the multivariable linear regression model on this data, then the plane of best fit
has regression coefficients that are very sensitive to the exact values in the data. For example,
if we change the data ever so slightly—e.g., by running cross\-validation, which splits
up the data randomly into different chunks—the coefficients vary by large amounts:
Best Fit 1: \\(\\text{house sale price} \= 22535 \+ (220\)\\cdot (\\text{house size 1 (ft$^2$)}) \+ (\-86\) \\cdot (\\text{house size 2 (ft$^2$)}).\\)
Best Fit 2: \\(\\text{house sale price} \= 15966 \+ (86\)\\cdot (\\text{house size 1 (ft$^2$)}) \+ (49\) \\cdot (\\text{house size 2 (ft$^2$)}).\\)
Best Fit 3: \\(\\text{house sale price} \= 17178 \+ (107\)\\cdot (\\text{house size 1 (ft$^2$)}) \+ (27\) \\cdot (\\text{house size 2 (ft$^2$)}).\\)
Therefore, when performing multivariable linear regression, it is important to avoid including very
linearly related predictors. However, techniques for doing so are beyond the scope of this
book; see the list of additional resources at the end of this chapter to find out where you can learn more.
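A simple diagnostic you can run before fitting is to check how strongly candidate predictors are linearly related. The sketch below simulates the two\-measurement scenario described above by adding noise to `sqft` twice (the column names and noise level are made up), and then computes their correlation; values very close to 1 or \-1 are a warning sign.
```
# Simulate two noisy measurements of the same house sizes.
set.seed(123)
measured_twice <- sacramento_train |>
  mutate(
    sqft_person1 = sqft + rnorm(n(), sd = 50),
    sqft_person2 = sqft + rnorm(n(), sd = 50)
  )

# A correlation very close to 1 signals potential multicollinearity.
measured_twice |>
  summarize(correlation = cor(sqft_person1, sqft_person2))
```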
8\.8 Designing new predictors
-----------------------------
We were quite fortunate in our initial exploration to find a predictor variable (house size)
that seems to have a meaningful and nearly linear relationship with our response variable (sale price).
But what should we do if we cannot immediately find such a nice variable?
Well, sometimes it is just a fact that the variables in the data do not have enough of
a relationship with the response variable to provide useful predictions. For example,
if the only available predictor was “the current house owner’s favorite ice cream flavor”,
we likely would have little hope of using that variable to predict the house’s sale price
(barring any future remarkable scientific discoveries about the relationship between
the housing market and homeowner ice cream preferences). In cases like these,
the only option is to obtain measurements of more useful variables.
There are, however, a wide variety of cases where the predictor variables do have a
meaningful relationship with the response variable, but that relationship does not fit
the assumptions of the regression method you have chosen. For example, a data frame `df`
with two variables—`x` and `y`—with a nonlinear relationship between the two variables
will not be fully captured by simple linear regression, as shown in Figure [8\.11](regression2.html#fig:08-predictor-design).
```
df
```
```
## # A tibble: 100 × 2
## x y
## <dbl> <dbl>
## 1 0.102 0.0720
## 2 0.800 0.532
## 3 0.478 0.148
## 4 0.972 1.01
## 5 0.846 0.677
## 6 0.405 0.157
## 7 0.879 0.768
## 8 0.130 0.0402
## 9 0.852 0.576
## 10 0.180 0.0847
## # ℹ 90 more rows
```
Figure 8\.11: Example of a data set with a nonlinear relationship between the predictor and the response.
Instead of trying to predict the response `y` using a linear regression on `x`,
we might have some scientific background about our problem to suggest that `y`
should be a cubic function of `x`. So before performing regression,
we might *create a new predictor variable* `z` using the `mutate` function:
```
df <- df |>
mutate(z = x^3)
```
Then we can perform linear regression for `y` using the predictor variable `z`,
as shown in Figure [8\.12](regression2.html#fig:08-predictor-design-2).
Here you can see that the transformed predictor `z` helps the
linear regression model make more accurate predictions.
Note that none of the `y` response values have changed between Figures [8\.11](regression2.html#fig:08-predictor-design)
and [8\.12](regression2.html#fig:08-predictor-design-2); the only change is that the `x` values
have been replaced by `z` values.
Figure 8\.12: Relationship between the transformed predictor and the response.
The process of
transforming predictors (and potentially combining multiple predictors in the process)
is known as *feature engineering*. In real data analysis
problems, you will need to rely on
a deep understanding of the problem—as well as the wrangling tools
from previous chapters—to engineer useful new features that improve
predictive performance.
> **Note:** Feature engineering
> is *part of tuning your model*, and as such you must not use your test data
> to evaluate the quality of the features you produce. You are free to use
> cross\-validation, though!
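One way to respect this advice in practice is to build the transformation into the recipe itself, so that it is recomputed inside every cross\-validation fold and applied automatically to new data. The sketch below is one possible approach, reusing the `lm_spec` specification from earlier in the chapter and assuming the `df` data frame above; `step_mutate()` and `step_rm()` come from the `recipes` package loaded with `tidymodels`.
```
# Create the cubic predictor inside the recipe, then drop the original x.
cubic_recipe <- recipe(y ~ x, data = df) |>
  step_mutate(z = x^3) |>
  step_rm(x)

cubic_fit <- workflow() |>
  add_recipe(cubic_recipe) |>
  add_model(lm_spec) |>
  fit(data = df)

# Inspect the fitted coefficient for the engineered predictor z.
cubic_fit |> extract_fit_parsnip() |> tidy()
```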
8\.9 The other sides of regression
----------------------------------
So far in this textbook we have used regression only in the context of
prediction. However, regression can also be seen as a method to understand and
quantify the effects of individual predictor variables on a response variable of interest.
In the housing example from this chapter, beyond just using past data
to predict future sale prices,
we might also be interested in describing the
individual relationships of house size and the number of bedrooms with house price,
quantifying how strong each of these relationships are, and assessing how accurately we
can estimate their magnitudes. And even beyond that, we may be interested in
understanding whether the predictors *cause* changes in the price.
These sides of regression are well beyond the scope of this book; but
the material you have learned here should give you a foundation of knowledge
that will serve you well when moving to more advanced books on the topic.
8\.10 Exercises
---------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Regression II: linear regression” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
8\.11 Additional resources
--------------------------
* The [`tidymodels` website](https://tidymodels.org/packages) is an excellent
reference for more details on, and advanced usage of, the functions and
packages in the past two chapters. Aside from that, it also has a [nice
beginner’s tutorial](https://www.tidymodels.org/start/) and [an extensive list
of more advanced examples](https://www.tidymodels.org/learn/) that you can use
to continue learning beyond the scope of this book.
* *Modern Dive* ([Ismay and Kim 2020](#ref-moderndive)) is another textbook that uses the
`tidyverse` / `tidymodels` framework. Chapter 6 complements the material in
the current chapter well; it covers some slightly more advanced concepts than
we do without getting mathematical. Give this chapter a read before moving on
to the next reference. It is also worth noting that this book takes a more
“explanatory” / “inferential” approach to regression in general (in Chapters 5,
6, and 10\), which provides a nice complement to the predictive tack we take in
the present book.
* *An Introduction to Statistical Learning* ([James et al. 2013](#ref-james2013introduction)) provides
a great next stop in the process of
learning about regression. Chapter 3 covers linear regression at a slightly
more mathematical level than we do here, but it is not too large a leap and so
should provide a good stepping stone. Chapter 6 discusses how to pick a subset
of “informative” predictors when you have a data set with many predictors, and
you expect only a few of them to be relevant. Chapter 7 covers regression
models that are more flexible than linear regression models but still enjoy the
computational efficiency of linear regression. In contrast, the K\-NN methods we
covered earlier are indeed more flexible but become very slow when given lots
of data.
| Data Science |
ubc-dsci.github.io | https://ubc-dsci.github.io/introduction-to-datascience/clustering.html |
Chapter 9 Clustering
====================
9\.1 Overview
-------------
As part of exploratory data analysis, it is often helpful to see if there are
meaningful subgroups (or *clusters*) in the data.
This grouping can be used for many purposes,
such as generating new questions or improving predictive analyses.
This chapter provides an introduction to clustering
using the K\-means algorithm,
including techniques to choose the number of clusters.
9\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Describe a situation in which clustering is an appropriate technique to use,
and what insight it might extract from the data.
* Explain the K\-means clustering algorithm.
* Interpret the output of a K\-means analysis.
* Differentiate between clustering, classification, and regression.
* Identify when it is necessary to scale variables before clustering, and do this using R.
* Perform K\-means clustering in R using `tidymodels` workflows.
* Use the elbow method to choose the number of clusters for K\-means.
* Visualize the output of K\-means clustering in R using colored scatter plots.
* Describe the advantages, limitations and assumptions of the K\-means clustering algorithm.
9\.3 Clustering
---------------
Clustering is a data analysis technique
involving separating a data set into subgroups of related data.
For example, we might use clustering to separate a
data set of documents into groups that correspond to topics, a data set of
human genetic information into groups that correspond to ancestral
subpopulations, or a data set of online customers into groups that correspond
to purchasing behaviors. Once the data are separated, we can, for example,
use the subgroups to generate new questions about the data and follow up with a
predictive modeling exercise. In this course, clustering will be used only for
exploratory analysis, i.e., uncovering patterns in the data.
Note that clustering is a fundamentally different kind of task
than classification or regression.
In particular, both classification and regression are *supervised tasks*
where there is a *response variable* (a category label or value),
and we have examples of past data with labels/values
that help us predict those of future data.
By contrast, clustering is an *unsupervised task*,
as we are trying to understand
and examine the structure of data without any response variable labels
or values to help us.
This approach has both advantages and disadvantages.
Clustering requires no additional annotation or input on the data.
For example, while it would be nearly impossible to annotate
all the articles on Wikipedia with human\-made topic labels,
we can cluster the articles without this information
to find groupings corresponding to topics automatically.
However, given that there is no response variable, it is not as easy to evaluate
the “quality” of a clustering. With classification, we can use a test data set
to assess prediction performance. In clustering, there is not a single good
choice for evaluation. In this book, we will use visualization to ascertain the
quality of a clustering, and leave rigorous evaluation for more advanced
courses.
As in the case of classification,
there are many possible methods that we could use to cluster our observations
to look for subgroups.
In this book, we will focus on the widely used K\-means algorithm ([Lloyd 1982](#ref-kmeans)).
In your future studies, you might encounter hierarchical clustering,
principal component analysis, multidimensional scaling, and more;
see the additional resources section at the end of this chapter
for where to begin learning more about these other methods.
> **Note:** There are also so\-called *semisupervised* tasks,
> where only some of the data come with response variable labels/values,
> but the vast majority don’t.
> The goal is to try to uncover underlying structure in the data
> that allows one to guess the missing labels.
> This sort of task is beneficial, for example,
> when one has an unlabeled data set that is too large to manually label,
> but one is willing to provide a few informative example labels as a “seed”
> to guess the labels for all the data.
9\.4 An illustrative example
----------------------------
In this chapter we will focus on a data set from
[the `palmerpenguins` R package](https://allisonhorst.github.io/palmerpenguins/) ([Horst, Hill, and Gorman 2020](#ref-palmerpenguins)). This
data set was collected by Dr. Kristen Gorman and
the Palmer Station, Antarctica Long Term Ecological Research Site, and includes
measurements for adult penguins (Figure [9\.1](clustering.html#fig:09-penguins)) found near there ([Gorman, Williams, and Fraser 2014](#ref-penguinpaper)).
Our goal will be to use two
variables—penguin bill and flipper length, both in millimeters—to determine whether
there are distinct types of penguins in our data.
Understanding this might help us with species discovery and classification in a data\-driven
way. Note that we have reduced the size of the data set to 18 observations and 2 variables;
this will help us make clear visualizations that illustrate how clustering works for learning purposes.
Figure 9\.1: A Gentoo penguin.
Before we get started, we will load the `tidyverse` metapackage
as well as set a random seed.
This will ensure we have access to the functions we need
and that our analysis will be reproducible.
As we will learn in more detail later in the chapter,
setting the seed here is important
because the K\-means clustering algorithm uses randomness
when choosing a starting position for each cluster.
```
library(tidyverse)
set.seed(1)
```
Now we can load and preview the `penguins` data.
```
penguins <- read_csv("data/penguins.csv")
penguins
```
```
## # A tibble: 18 × 2
## bill_length_mm flipper_length_mm
## <dbl> <dbl>
## 1 39.2 196
## 2 36.5 182
## 3 34.5 187
## 4 36.7 187
## 5 38.1 181
## 6 39.2 190
## 7 36 195
## 8 37.8 193
## 9 46.5 213
## 10 46.1 215
## 11 47.8 215
## 12 45 220
## 13 49.1 212
## 14 43.3 208
## 15 46 195
## 16 46.7 195
## 17 52.2 197
## 18 46.8 189
```
We will begin by using a version of the data that we have standardized, `penguins_standardized`,
to illustrate how K\-means clustering works (recall standardization from Chapter [5](classification1.html#classification1)).
Later in this chapter, we will return to the original `penguins` data to see how to include standardization automatically
in the clustering pipeline.
```
penguins_standardized
```
```
## # A tibble: 18 × 2
## bill_length_standardized flipper_length_standardized
## <dbl> <dbl>
## 1 -0.641 -0.190
## 2 -1.14 -1.33
## 3 -1.52 -0.922
## 4 -1.11 -0.922
## 5 -0.847 -1.41
## 6 -0.641 -0.678
## 7 -1.24 -0.271
## 8 -0.902 -0.434
## 9 0.720 1.19
## 10 0.646 1.36
## 11 0.963 1.36
## 12 0.440 1.76
## 13 1.21 1.11
## 14 0.123 0.786
## 15 0.627 -0.271
## 16 0.757 -0.271
## 17 1.78 -0.108
## 18 0.776 -0.759
```
Next, we can create a scatter plot using this data set
to see if we can detect subtypes or groups in our data set.
```
ggplot(penguins_standardized,
aes(x = flipper_length_standardized,
y = bill_length_standardized)) +
geom_point() +
xlab("Flipper Length (standardized)") +
ylab("Bill Length (standardized)") +
theme(text = element_text(size = 12))
```
Figure 9\.2: Scatter plot of standardized bill length versus standardized flipper length.
Based on the visualization
in Figure [9\.2](clustering.html#fig:10-toy-example-plot),
we might suspect there are a few subtypes of penguins within our data set.
We can see roughly 3 groups of observations in Figure [9\.2](clustering.html#fig:10-toy-example-plot),
including:
1. a small flipper and bill length group,
2. a small flipper length, but large bill length group, and
3. a large flipper and bill length group.
Data visualization is a great tool to give us a rough sense of such patterns
when we have a small number of variables.
But if we are to group data—and select the number of groups—as part of
a reproducible analysis, we need something a bit more automated.
Additionally, finding groups via visualization becomes more difficult
as we increase the number of variables we consider when clustering.
The way to rigorously separate the data into groups
is to use a clustering algorithm.
In this chapter, we will focus on the *K\-means* algorithm,
a widely used and often very effective clustering method,
combined with the *elbow method*
for selecting the number of clusters.
This procedure will separate the data into groups;
Figure [9\.3](clustering.html#fig:10-toy-example-clustering) shows these groups
denoted by colored scatter points.
Figure 9\.3: Scatter plot of standardized bill length versus standardized flipper length with colored groups.
What are the labels for these groups? Unfortunately, we don’t have any. K\-means,
like almost all clustering algorithms, just outputs meaningless “cluster labels”
that are typically whole numbers: 1, 2, 3, etc. But in a simple case like this,
where we can easily visualize the clusters on a scatter plot, we can give
human\-made labels to the groups using their positions on
the plot:
* small flipper length and small bill length (orange cluster),
* small flipper length and large bill length (blue cluster), and
* large flipper length and large bill length (yellow cluster).
Once we have made these determinations, we can use them to inform our species
classifications or ask further questions about our data. For example, we might
be interested in understanding the relationship between flipper length and bill
length, and that relationship may differ depending on the type of penguin we
have.
9\.5 K\-means
-------------
### 9\.5\.1 Measuring cluster quality
The K\-means algorithm is a procedure that groups data into K clusters.
It starts with an initial clustering of the data, and then iteratively
improves it by making adjustments to the assignment of data
to clusters until it cannot improve any further. But how do we measure
the “quality” of a clustering, and what does it mean to improve it?
In K\-means clustering, we measure the quality of a cluster
by its *within\-cluster sum\-of\-squared\-distances* (WSSD).
Computing this involves two steps.
First, we find the cluster centers by computing the mean of each variable
over data points in the cluster. For example, suppose we have a
cluster containing four observations, and we are using two variables, \\(x\\) and \\(y\\), to cluster the data.
Then we would compute the coordinates, \\(\\mu\_x\\) and \\(\\mu\_y\\), of the cluster center via
\\\[\\mu\_x \= \\frac{1}{4}(x\_1\+x\_2\+x\_3\+x\_4\) \\quad \\mu\_y \= \\frac{1}{4}(y\_1\+y\_2\+y\_3\+y\_4\).\\]
In the first cluster from the example, there are 4 data points. These are shown with their cluster center
(standardized flipper length \-0\.35, standardized bill length 0\.99\) highlighted
in Figure [9\.4](clustering.html#fig:10-toy-example-clus1-center).
Figure 9\.4: Cluster 1 from the `penguins_standardized` data set example. Observations are small blue points, with the cluster center highlighted as a large blue point with a black outline.
The second step in computing the WSSD is to add up the squared distance
between each point in the cluster and the cluster center.
We use the straight\-line / Euclidean distance formula
that we learned about in Chapter [5](classification1.html#classification1).
In the 4\-observation cluster example above,
we would compute the WSSD \\(S^2\\) via
\\\[\\begin{align\*}
S^2 \= \\left((x\_1 \- \\mu\_x)^2 \+ (y\_1 \- \\mu\_y)^2\\right) \+ \\left((x\_2 \- \\mu\_x)^2 \+ (y\_2 \- \\mu\_y)^2\\right) \+ \\\\ \\left((x\_3 \- \\mu\_x)^2 \+ (y\_3 \- \\mu\_y)^2\\right) \+ \\left((x\_4 \- \\mu\_x)^2 \+ (y\_4 \- \\mu\_y)^2\\right).
\\end{align\*}\\]
These distances are denoted by lines in Figure [9\.5](clustering.html#fig:10-toy-example-clus1-dists) for the first cluster of the penguin data example.
Figure 9\.5: Cluster 1 from the `penguins_standardized` data set example. Observations are small blue points, with the cluster center highlighted as a large blue point with a black outline. The distances from the observations to the cluster center are represented as black lines.
The larger the value of \\(S^2\\), the more spread out the cluster is, since large \\(S^2\\) means
that points are far from the cluster center. Note, however, that “large” is relative to *both* the
scale of the variables for clustering *and* the number of points in the cluster. A cluster where points
are very close to the center might still have a large \\(S^2\\) if there are many data points in the cluster.
After we have calculated the WSSD for all the clusters,
we sum them together to get the *total WSSD*. For our example,
this means adding up all the squared distances for the 18 observations.
These distances are denoted by black lines in
Figure [9\.6](clustering.html#fig:10-toy-example-all-clus-dists).
Figure 9\.6: All clusters from the `penguins_standardized` data set example. Observations are small orange, blue, and yellow points with cluster centers denoted by larger points with a black outline. The distances from the observations to each of the respective cluster centers are represented as black lines.
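To make this computation concrete, here is a small sketch (our own illustration; the data frame `clustered` and its `cluster` column are assumptions, representing `penguins_standardized` with an added column of cluster assignments) of how the per-cluster WSSDs and the total WSSD could be computed with `dplyr`:
```
# Sketch only: `clustered` is assumed to be penguins_standardized with an
# added `cluster` column giving each observation's cluster assignment.
wssd_by_cluster <- clustered |>
  group_by(cluster) |>
  summarize(
    wssd = sum((bill_length_standardized - mean(bill_length_standardized))^2 +
               (flipper_length_standardized - mean(flipper_length_standardized))^2)
  )

total_wssd <- sum(wssd_by_cluster$wssd)  # sum of the per-cluster WSSDs
```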
Since K\-means uses the straight\-line distance to measure the quality of a clustering,
it is limited to clustering based on quantitative variables.
However, note that there are variants of the K\-means algorithm,
as well as other clustering algorithms entirely,
that use other distance metrics
to allow for non\-quantitative data to be clustered.
These are beyond the scope of this book.
### 9\.5\.2 The clustering algorithm
We begin the K\-means algorithm by picking K,
and randomly assigning a roughly equal number of observations
to each of the K clusters.
An example random initialization is shown in Figure [9\.7](clustering.html#fig:10-toy-kmeans-init).
Figure 9\.7: Random initialization of labels.
Then K\-means consists of two major steps that attempt to minimize the
sum of WSSDs over all the clusters, i.e., the *total WSSD*:
1. **Center update:** Compute the center of each cluster.
2. **Label update:** Reassign each data point to the cluster with the nearest center.
These two steps are repeated until the cluster assignments no longer change.
We show what the first four iterations of K\-means would look like in
Figure [9\.8](clustering.html#fig:10-toy-kmeans-iter). Each pair of plots in a row
corresponds to one iteration:
the left plot in the pair depicts the center update,
and the right plot depicts the label update (i.e., the reassignment of data to clusters).
Figure 9\.8: First four iterations of K\-means clustering on the `penguins_standardized` example data set. Each pair of plots corresponds to an iteration. Within the pair, the first plot depicts the center update, and the second plot depicts the reassignment of data to clusters. Cluster centers are indicated by larger points that are outlined in black.
Note that at this point, we can terminate the algorithm since none of the assignments changed
in the fourth iteration; both the centers and labels will remain the same from this point onward.
> **Note:** Is K\-means *guaranteed* to stop at some point, or could it iterate forever? As it turns out,
> thankfully, the answer is that K\-means is guaranteed to stop after *some* number of iterations. For the interested reader, the
> logic for this has three steps: (1\) both the label update and the center update decrease total WSSD in each iteration,
> (2\) the total WSSD is always greater than or equal to 0, and (3\) there are only a finite number of possible
> ways to assign the data to clusters. So at some point, the total WSSD must stop decreasing, which means none of the assignments
> are changing, and the algorithm terminates.
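For readers who would like to see the center and label updates written out in code, below is a minimal sketch of the loop (our own illustration, not the implementation used later in this chapter; the function name `simple_kmeans` and arguments `X`, `labels`, and `K` are hypothetical). It assumes a numeric matrix `X` of observations and an initial vector of cluster `labels`, and it ignores details such as empty clusters:
```
# Sketch only: X is a numeric matrix (rows are observations), `labels` is an
# initial random assignment of each row to one of the clusters 1, ..., K.
simple_kmeans <- function(X, labels, K, max_iter = 100) {
  for (i in seq_len(max_iter)) {
    # Center update: mean of each variable within each cluster
    centers <- t(sapply(1:K, function(k) colMeans(X[labels == k, , drop = FALSE])))
    # Label update: reassign each point to the cluster with the nearest center
    dists <- sapply(1:K, function(k) colSums((t(X) - centers[k, ])^2))
    new_labels <- apply(dists, 1, which.min)
    if (all(new_labels == labels)) break  # assignments stopped changing
    labels <- new_labels
  }
  list(labels = labels, centers = centers)
}
```
Running this on the standardized penguin data with a random initial labeling should typically converge to a clustering like the one in Figure [9\.8](clustering.html#fig:10-toy-kmeans-iter), though as the next section shows, an unlucky initialization can lead elsewhere.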
### 9\.5\.3 Random restarts
Unlike the classification and regression models we studied in previous chapters, K\-means can get “stuck” in a bad solution.
For example, Figure [9\.9](clustering.html#fig:10-toy-kmeans-bad-init) illustrates an unlucky random initialization by K\-means.
Figure 9\.9: Random initialization of labels.
Figure [9\.10](clustering.html#fig:10-toy-kmeans-bad-iter) shows what the iterations of K\-means would look like with the unlucky random initialization shown in Figure [9\.9](clustering.html#fig:10-toy-kmeans-bad-init).
Figure 9\.10: First five iterations of K\-means clustering on the `penguins_standardized` example data set with a poor random initialization. Each pair of plots corresponds to an iteration. Within the pair, the first plot depicts the center update, and the second plot depicts the reassignment of data to clusters. Cluster centers are indicated by larger points that are outlined in black.
This looks like a relatively bad clustering of the data, but K\-means cannot improve it.
To solve this problem when clustering data using K\-means, we should randomly re\-initialize the labels a few times, run K\-means for each initialization,
and pick the clustering that has the lowest final total WSSD.
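Most implementations can do this re-initialization for us. For example, here is a small sketch using base R's `kmeans` function (the `"stats"` engine used later in this chapter is built on it; the object name `restart_fit` is our own), where the `nstart` argument requests several random starts and keeps the run with the lowest total WSSD:
```
# Sketch: K-means with K = 3 and 10 random restarts on the standardized
# penguin measurements; the best of the 10 runs is returned.
restart_fit <- kmeans(penguins_standardized, centers = 3, nstart = 10)
restart_fit$tot.withinss  # total WSSD of the best run
```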
### 9\.5\.4 Choosing K
In order to cluster data using K\-means,
we also have to pick the number of clusters, K.
But unlike in classification, we have no response variable
and cannot perform cross\-validation with some measure of model prediction error.
Further, if K is chosen too small, then multiple clusters get grouped together;
if K is too large, then clusters get subdivided.
In both cases, we will potentially miss interesting structure in the data.
Figure [9\.11](clustering.html#fig:10-toy-kmeans-vary-k) illustrates the impact of K
on K\-means clustering of our penguin flipper and bill length data
by showing the different clusterings for K’s ranging from 1 to 9\.
Figure 9\.11: Clustering of the penguin data for K clusters ranging from 1 to 9\. Cluster centers are indicated by larger points that are outlined in black.
If we set K less than 3, then the clustering merges separate groups of data; this causes a large
total WSSD, since the cluster center is not close to any of the data in the cluster. On
the other hand, if we set K greater than 3, the clustering subdivides subgroups of data; this does indeed still
decrease the total WSSD, but by only a *diminishing amount*. If we plot the total WSSD versus the number of
clusters, we see that the decrease in total WSSD levels off (or forms an “elbow shape”) when we reach roughly
the right number of clusters (Figure [9\.12](clustering.html#fig:10-toy-kmeans-elbow)).
Figure 9\.12: Total WSSD for K clusters ranging from 1 to 9\.
9\.6 K\-means in R
------------------
We can perform K\-means clustering in R using a `tidymodels` workflow similar
to those in the earlier classification and regression chapters.
We will begin by loading the `tidyclust` library, which contains the necessary
functionality.
```
library(tidyclust)
```
Returning to the original (unstandardized) `penguins` data,
recall that K\-means clustering uses straight\-line
distance to decide which points are similar to
each other. Therefore, the *scale* of each of the variables in the data
will influence which cluster data points end up being assigned.
Variables with a large scale will have a much larger
effect on deciding cluster assignment than variables with a small scale.
To address this problem, we need to create a recipe that
standardizes our data
before clustering using the `step_scale` and `step_center` preprocessing steps.
Standardization will ensure that each variable has a mean
of 0 and standard deviation of 1 prior to clustering.
We will designate that all variables are to be used in clustering via
the model formula `~ .`.
> **Note:** Recipes were originally designed specifically for *predictive* data
> analysis problems—like classification and regression—not clustering
> problems. So the functions in R that we use to construct recipes are a little bit
> awkward in the setting of clustering. In particular, we will have to treat
> “predictors” here as if it meant “variables to be used in clustering”. So the
> model formula `~ .` specifies that all variables are “predictors”, i.e., all variables
> should be used for clustering. Similarly, when we use the `all_predictors()` function
> in the preprocessing steps, we really mean “apply this step to all variables used for
> clustering.”
```
kmeans_recipe <- recipe(~ ., data=penguins) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
kmeans_recipe
```
```
##
## ── Recipe ──────────
##
## ── Inputs
## Number of variables by role
## predictor: 2
##
## ── Operations
## • Scaling for: all_predictors()
## • Centering for: all_predictors()
```
To indicate that we are performing K\-means clustering, we will use the `k_means`
model specification. We will use the `num_clusters` argument to specify the number
of clusters (here we choose K \= 3\), and specify that we are using the `"stats"` engine.
```
kmeans_spec <- k_means(num_clusters = 3) |>
set_engine("stats")
kmeans_spec
```
```
## K Means Cluster Specification (partition)
##
## Main Arguments:
## num_clusters = 3
##
## Computational engine: stats
```
To actually run the K\-means clustering, we combine the recipe and model
specification in a workflow, and use the `fit` function. Note that the
K\-means algorithm uses a random initialization of assignments; but since
we set the random seed earlier, the clustering will be reproducible.
```
kmeans_fit <- workflow() |>
add_recipe(kmeans_recipe) |>
add_model(kmeans_spec) |>
fit(data = penguins)
kmeans_fit
```
```
## ══ Workflow [trained] ══════════
## Preprocessor: Recipe
## Model: k_means()
##
## ── Preprocessor ──────────
## 2 Recipe Steps
##
## • step_scale()
## • step_center()
##
## ── Model ──────────
## K-means clustering with 3 clusters of sizes 4, 6, 8
##
## Cluster means:
## bill_length_mm flipper_length_mm
## 1 0.9858721 -0.3524358
## 2 0.6828058 1.2606357
## 3 -1.0050404 -0.7692589
##
## Clustering vector:
## [1] 3 3 3 3 3 3 3 3 2 2 2 2 2 2 1 1 1 1
##
## Within cluster sum of squares by cluster:
## [1] 1.098928 1.247042 2.121932
## (between_SS / total_SS = 86.9 %)
##
## Available components:
##
## [1] "cluster" "centers" "totss" "withinss" "tot.withinss"
## [6] "betweenss" "size" "iter" "ifault"
```
As you can see above, the fit object has a lot of information
that can be used to visualize the clusters, pick K, and evaluate the total WSSD.
Let’s start by visualizing the clusters as a colored scatter plot! In
order to do that, we first need to augment our
original data frame with the cluster assignments. We can
achieve this using the `augment` function from `tidyclust`.
```
clustered_data <- kmeans_fit |>
augment(penguins)
clustered_data
```
```
## # A tibble: 18 × 3
## bill_length_mm flipper_length_mm .pred_cluster
## <dbl> <dbl> <fct>
## 1 39.2 196 Cluster_1
## 2 36.5 182 Cluster_1
## 3 34.5 187 Cluster_1
## 4 36.7 187 Cluster_1
## 5 38.1 181 Cluster_1
## 6 39.2 190 Cluster_1
## 7 36 195 Cluster_1
## 8 37.8 193 Cluster_1
## 9 46.5 213 Cluster_2
## 10 46.1 215 Cluster_2
## 11 47.8 215 Cluster_2
## 12 45 220 Cluster_2
## 13 49.1 212 Cluster_2
## 14 43.3 208 Cluster_2
## 15 46 195 Cluster_3
## 16 46.7 195 Cluster_3
## 17 52.2 197 Cluster_3
## 18 46.8 189 Cluster_3
```
Now that we have the cluster assignments included in the `clustered_data` tidy data frame, we can
visualize them as shown in Figure [9\.13](clustering.html#fig:10-plot-clusters-2).
Note that we are plotting the *un\-standardized* data here; if for some reason we wanted to
visualize the *standardized* data from the recipe, we would need to use the `bake` function
to obtain it first.
```
cluster_plot <- ggplot(clustered_data,
  aes(x = flipper_length_mm,
      y = bill_length_mm,
      color = .pred_cluster)) +
  geom_point(size = 2) +
labs(x = "Flipper Length",
y = "Bill Length",
color = "Cluster") +
scale_color_manual(values = c("steelblue",
"darkorange",
"goldenrod1")) +
theme(text = element_text(size = 12))
cluster_plot
```
Figure 9\.13: The data colored by the cluster assignments returned by K\-means.
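As an aside, if you did want to inspect or plot the standardized values mentioned above, one way to obtain them (a sketch; `prep` and `bake` are functions from the `recipes` package, while the object name `penguins_scaled` is our own) is to prepare and then bake the recipe:
```
# Sketch only: prep() estimates the centering/scaling parameters from the data,
# and bake() applies them, returning the standardized variables.
penguins_scaled <- kmeans_recipe |>
  prep() |>
  bake(new_data = penguins)
```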
As mentioned above, we also need to select K by finding
where the “elbow” occurs in the plot of total WSSD versus the number of clusters.
We can obtain the total WSSD (`tot.withinss`) from our
clustering with 3 clusters using the `glance` function.
```
glance(kmeans_fit)
```
```
## # A tibble: 1 × 4
## totss tot.withinss betweenss iter
## <dbl> <dbl> <dbl> <int>
## 1 34 4.47 29.5 2
```
To calculate the total WSSD for a variety of Ks, we will
create a data frame with a column named `num_clusters` with rows containing
each value of K we want to run K\-means with (here, 1 to 9\).
```
penguin_clust_ks <- tibble(num_clusters = 1:9)
penguin_clust_ks
```
```
## # A tibble: 9 × 1
## num_clusters
## <int>
## 1 1
## 2 2
## 3 3
## 4 4
## 5 5
## 6 6
## 7 7
## 8 8
## 9 9
```
Then we construct our model specification again, this time
specifying that we want to tune the `num_clusters` parameter.
```
kmeans_spec <- k_means(num_clusters = tune()) |>
set_engine("stats")
kmeans_spec
```
```
## K Means Cluster Specification (partition)
##
## Main Arguments:
## num_clusters = tune()
##
## Computational engine: stats
```
We combine the recipe and specification in a workflow, and then
use the `tune_cluster` function to run K\-means on each of the different
settings of `num_clusters`. The `grid` argument controls which values of
K we want to try—in this case, the values from 1 to 9 that are
stored in the `penguin_clust_ks` data frame. We set the `resamples`
argument to `apparent(penguins)` to tell K\-means to run on the whole
data set for each value of `num_clusters`. Finally, we collect the results
using the `collect_metrics` function.
```
kmeans_results <- workflow() |>
add_recipe(kmeans_recipe) |>
add_model(kmeans_spec) |>
tune_cluster(resamples = apparent(penguins), grid = penguin_clust_ks) |>
collect_metrics()
kmeans_results
```
```
## # A tibble: 18 × 7
## num_clusters .metric .estimator mean n std_err .config
## <int> <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 1 sse_total standard 34 1 NA Preprocessor1_…
## 2 1 sse_within_total standard 34 1 NA Preprocessor1_…
## 3 2 sse_total standard 34 1 NA Preprocessor1_…
## 4 2 sse_within_total standard 10.9 1 NA Preprocessor1_…
## 5 3 sse_total standard 34 1 NA Preprocessor1_…
## 6 3 sse_within_total standard 4.47 1 NA Preprocessor1_…
## 7 4 sse_total standard 34 1 NA Preprocessor1_…
## 8 4 sse_within_total standard 3.54 1 NA Preprocessor1_…
## 9 5 sse_total standard 34 1 NA Preprocessor1_…
## 10 5 sse_within_total standard 2.23 1 NA Preprocessor1_…
## 11 6 sse_total standard 34 1 NA Preprocessor1_…
## 12 6 sse_within_total standard 1.75 1 NA Preprocessor1_…
## 13 7 sse_total standard 34 1 NA Preprocessor1_…
## 14 7 sse_within_total standard 2.06 1 NA Preprocessor1_…
## 15 8 sse_total standard 34 1 NA Preprocessor1_…
## 16 8 sse_within_total standard 2.46 1 NA Preprocessor1_…
## 17 9 sse_total standard 34 1 NA Preprocessor1_…
## 18 9 sse_within_total standard 0.906 1 NA Preprocessor1_…
```
The total WSSD results correspond to the `mean` column when the `.metric` variable is equal to `sse_within_total`.
We can obtain a tidy data frame with this information using `filter` and `mutate`.
```
kmeans_results <- kmeans_results |>
filter(.metric == "sse_within_total") |>
mutate(total_WSSD = mean) |>
select(num_clusters, total_WSSD)
kmeans_results
```
```
## # A tibble: 9 × 2
## num_clusters total_WSSD
## <int> <dbl>
## 1 1 34
## 2 2 10.9
## 3 3 4.47
## 4 4 3.54
## 5 5 2.23
## 6 6 1.75
## 7 7 2.06
## 8 8 2.46
## 9 9 0.906
```
Now that we have `total_WSSD` and `num_clusters` as columns in a data frame, we can make a line plot
(Figure [9\.14](clustering.html#fig:10-plot-choose-k)) and search for the “elbow” to find which value of K to use.
```
elbow_plot <- ggplot(kmeans_results, aes(x = num_clusters, y = total_WSSD)) +
geom_point() +
geom_line() +
xlab("K") +
ylab("Total within-cluster sum of squares") +
scale_x_continuous(breaks = 1:9) +
theme(text = element_text(size = 12))
elbow_plot
```
Figure 9\.14: A plot showing the total WSSD versus the number of clusters.
It looks like 3 clusters is the right choice for this data.
But why is there a “bump” in the total WSSD plot here?
Shouldn’t total WSSD always decrease as we add more clusters?
Technically yes, but remember: K\-means can get “stuck” in a bad solution.
Unfortunately, for K \= 8 we had an unlucky initialization
and found a bad clustering!
We can help prevent finding a bad clustering
by trying a few different random initializations
via the `nstart` argument in the model specification.
Here we will try using 10 restarts.
```
kmeans_spec <- k_means(num_clusters = tune()) |>
set_engine("stats", nstart = 10)
kmeans_spec
```
```
## K Means Cluster Specification (partition)
##
## Main Arguments:
## num_clusters = tune()
##
## Engine-Specific Arguments:
## nstart = 10
##
## Computational engine: stats
```
Now if we rerun the same workflow with the new model specification,
K\-means clustering will be performed `nstart = 10` times for each value of K.
The `collect_metrics` function will then pick the best clustering of the 10 runs for each value of K,
and report the results for that best clustering.
Figure [9\.15](clustering.html#fig:10-choose-k-nstart) shows the resulting
total WSSD plot from using 10 restarts; the bump is gone and the total WSSD decreases as expected.
The more times we perform K\-means clustering,
the more likely we are to find a good clustering (if one exists).
What value should you choose for `nstart`? The answer is that it depends
on many factors: the size and characteristics of your data set,
as well as how powerful your computer is.
The larger the `nstart` value, the better from an analysis perspective,
but the trade\-off is that running many clusterings takes more time,
so the choice needs to balance clustering quality against computation time.
```
kmeans_results <- workflow() |>
add_recipe(kmeans_recipe) |>
add_model(kmeans_spec) |>
tune_cluster(resamples = apparent(penguins), grid = penguin_clust_ks) |>
collect_metrics() |>
filter(.metric == "sse_within_total") |>
mutate(total_WSSD = mean) |>
select(num_clusters, total_WSSD)
elbow_plot <- ggplot(kmeans_results, aes(x = num_clusters, y = total_WSSD)) +
geom_point() +
geom_line() +
xlab("K") +
ylab("Total within-cluster sum of squares") +
scale_x_continuous(breaks = 1:9) +
theme(text = element_text(size = 12))
elbow_plot
```
Figure 9\.15: A plot showing the total WSSD versus the number of clusters when K\-means is run with 10 restarts.
9\.7 Exercises
--------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Clustering” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
9\.8 Additional resources
-------------------------
* Chapter 10 of *An Introduction to Statistical
Learning* ([James et al. 2013](#ref-james2013introduction)) provides a
great next stop in the process of learning about clustering and unsupervised
learning in general. In the realm of clustering specifically, it provides a
great companion introduction to K\-means, but also covers *hierarchical*
clustering for when you expect there to be subgroups, and then subgroups within
subgroups, etc., in your data. In the realm of more general unsupervised
learning, it covers *principal components analysis (PCA)*, which is a very
popular technique for reducing the number of predictors in a data set.
9\.1 Overview
-------------
As part of exploratory data analysis, it is often helpful to see if there are
meaningful subgroups (or *clusters*) in the data.
This grouping can be used for many purposes,
such as generating new questions or improving predictive analyses.
This chapter provides an introduction to clustering
using the K\-means algorithm,
including techniques to choose the number of clusters.
9\.2 Chapter learning objectives
--------------------------------
By the end of the chapter, readers will be able to do the following:
* Describe a situation in which clustering is an appropriate technique to use,
and what insight it might extract from the data.
* Explain the K\-means clustering algorithm.
* Interpret the output of a K\-means analysis.
* Differentiate between clustering, classification, and regression.
* Identify when it is necessary to scale variables before clustering, and do this using R.
* Perform K\-means clustering in R using `tidymodels` workflows.
* Use the elbow method to choose the number of clusters for K\-means.
* Visualize the output of K\-means clustering in R using colored scatter plots.
* Describe the advantages, limitations and assumptions of the K\-means clustering algorithm.
9\.3 Clustering
---------------
Clustering is a data analysis technique
involving separating a data set into subgroups of related data.
For example, we might use clustering to separate a
data set of documents into groups that correspond to topics, a data set of
human genetic information into groups that correspond to ancestral
subpopulations, or a data set of online customers into groups that correspond
to purchasing behaviors. Once the data are separated, we can, for example,
use the subgroups to generate new questions about the data and follow up with a
predictive modeling exercise. In this course, clustering will be used only for
exploratory analysis, i.e., uncovering patterns in the data.
Note that clustering is a fundamentally different kind of task
than classification or regression.
In particular, both classification and regression are *supervised tasks*
where there is a *response variable* (a category label or value),
and we have examples of past data with labels/values
that help us predict those of future data.
By contrast, clustering is an *unsupervised task*,
as we are trying to understand
and examine the structure of data without any response variable labels
or values to help us.
This approach has both advantages and disadvantages.
Clustering requires no additional annotation or input on the data.
For example, while it would be nearly impossible to annotate
all the articles on Wikipedia with human\-made topic labels,
we can cluster the articles without this information
to find groupings corresponding to topics automatically.
However, given that there is no response variable, it is not as easy to evaluate
the “quality” of a clustering. With classification, we can use a test data set
to assess prediction performance. In clustering, there is not a single good
choice for evaluation. In this book, we will use visualization to ascertain the
quality of a clustering, and leave rigorous evaluation for more advanced
courses.
As in the case of classification,
there are many possible methods that we could use to cluster our observations
to look for subgroups.
In this book, we will focus on the widely used K\-means algorithm ([Lloyd 1982](#ref-kmeans)).
In your future studies, you might encounter hierarchical clustering,
principal component analysis, multidimensional scaling, and more;
see the additional resources section at the end of this chapter
for where to begin learning more about these other methods.
> **Note:** There are also so\-called *semisupervised* tasks,
> where only some of the data come with response variable labels/values,
> but the vast majority don’t.
> The goal is to try to uncover underlying structure in the data
> that allows one to guess the missing labels.
> This sort of task is beneficial, for example,
> when one has an unlabeled data set that is too large to manually label,
> but one is willing to provide a few informative example labels as a “seed”
> to guess the labels for all the data.
9\.4 An illustrative example
----------------------------
In this chapter we will focus on a data set from
[the `palmerpenguins` R package](https://allisonhorst.github.io/palmerpenguins/) ([Horst, Hill, and Gorman 2020](#ref-palmerpenguins)). This
data set was collected by Dr. Kristen Gorman and
the Palmer Station, Antarctica Long Term Ecological Research Site, and includes
measurements for adult penguins (Figure [9\.1](clustering.html#fig:09-penguins)) found near there ([Gorman, Williams, and Fraser 2014](#ref-penguinpaper)).
Our goal will be to use two
variables—penguin bill and flipper length, both in millimeters—to determine whether
there are distinct types of penguins in our data.
Understanding this might help us with species discovery and classification in a data\-driven
way. Note that we have reduced the size of the data set to 18 observations and 2 variables;
this will help us make clear visualizations that illustrate how clustering works for learning purposes.
Figure 9\.1: A Gentoo penguin.
Before we get started, we will load the `tidyverse` metapackage
as well as set a random seed.
This will ensure we have access to the functions we need
and that our analysis will be reproducible.
As we will learn in more detail later in the chapter,
setting the seed here is important
because the K\-means clustering algorithm uses randomness
when choosing a starting position for each cluster.
```
library(tidyverse)
set.seed(1)
```
Now we can load and preview the `penguins` data.
```
penguins <- read_csv("data/penguins.csv")
penguins
```
```
## # A tibble: 18 × 2
## bill_length_mm flipper_length_mm
## <dbl> <dbl>
## 1 39.2 196
## 2 36.5 182
## 3 34.5 187
## 4 36.7 187
## 5 38.1 181
## 6 39.2 190
## 7 36 195
## 8 37.8 193
## 9 46.5 213
## 10 46.1 215
## 11 47.8 215
## 12 45 220
## 13 49.1 212
## 14 43.3 208
## 15 46 195
## 16 46.7 195
## 17 52.2 197
## 18 46.8 189
```
We will begin by using a version of the data that we have standardized, `penguins_standardized`,
to illustrate how K\-means clustering works (recall standardization from Chapter [5](classification1.html#classification1)).
Later in this chapter, we will return to the original `penguins` data to see how to include standardization automatically
in the clustering pipeline.
```
penguins_standardized
```
```
## # A tibble: 18 × 2
## bill_length_standardized flipper_length_standardized
## <dbl> <dbl>
## 1 -0.641 -0.190
## 2 -1.14 -1.33
## 3 -1.52 -0.922
## 4 -1.11 -0.922
## 5 -0.847 -1.41
## 6 -0.641 -0.678
## 7 -1.24 -0.271
## 8 -0.902 -0.434
## 9 0.720 1.19
## 10 0.646 1.36
## 11 0.963 1.36
## 12 0.440 1.76
## 13 1.21 1.11
## 14 0.123 0.786
## 15 0.627 -0.271
## 16 0.757 -0.271
## 17 1.78 -0.108
## 18 0.776 -0.759
```
Next, we can create a scatter plot using this data set
to see if we can detect subtypes or groups in our data set.
```
ggplot(penguins_standardized,
aes(x = flipper_length_standardized,
y = bill_length_standardized)) +
geom_point() +
xlab("Flipper Length (standardized)") +
ylab("Bill Length (standardized)") +
theme(text = element_text(size = 12))
```
Figure 9\.2: Scatter plot of standardized bill length versus standardized flipper length.
Based on the visualization
in Figure [9\.2](clustering.html#fig:10-toy-example-plot),
we might suspect there are a few subtypes of penguins within our data set.
We can see roughly 3 groups of observations in Figure [9\.2](clustering.html#fig:10-toy-example-plot),
including:
1. a small flipper and bill length group,
2. a small flipper length, but large bill length group, and
3. a large flipper and bill length group.
Data visualization is a great tool to give us a rough sense of such patterns
when we have a small number of variables.
But if we are to group data—and select the number of groups—as part of
a reproducible analysis, we need something a bit more automated.
Additionally, finding groups via visualization becomes more difficult
as we increase the number of variables we consider when clustering.
The way to rigorously separate the data into groups
is to use a clustering algorithm.
In this chapter, we will focus on the *K\-means* algorithm,
a widely used and often very effective clustering method,
combined with the *elbow method*
for selecting the number of clusters.
This procedure will separate the data into groups;
Figure [9\.3](clustering.html#fig:10-toy-example-clustering) shows these groups
denoted by colored scatter points.
Figure 9\.3: Scatter plot of standardized bill length versus standardized flipper length with colored groups.
What are the labels for these groups? Unfortunately, we don’t have any. K\-means,
like almost all clustering algorithms, just outputs meaningless “cluster labels”
that are typically whole numbers: 1, 2, 3, etc. But in a simple case like this,
where we can easily visualize the clusters on a scatter plot, we can give
human\-made labels to the groups using their positions on
the plot:
* small flipper length and small bill length (orange cluster),
* small flipper length and large bill length (blue cluster).
* and large flipper length and large bill length (yellow cluster).
Once we have made these determinations, we can use them to inform our species
classifications or ask further questions about our data. For example, we might
be interested in understanding the relationship between flipper length and bill
length, and that relationship may differ depending on the type of penguin we
have.
9\.5 K\-means
-------------
### 9\.5\.1 Measuring cluster quality
The K\-means algorithm is a procedure that groups data into K clusters.
It starts with an initial clustering of the data, and then iteratively
improves it by making adjustments to the assignment of data
to clusters until it cannot improve any further. But how do we measure
the “quality” of a clustering, and what does it mean to improve it?
In K\-means clustering, we measure the quality of a cluster
by its *within\-cluster sum\-of\-squared\-distances* (WSSD).
Computing this involves two steps.
First, we find the cluster centers by computing the mean of each variable
over data points in the cluster. For example, suppose we have a
cluster containing four observations, and we are using two variables, \\(x\\) and \\(y\\), to cluster the data.
Then we would compute the coordinates, \\(\\mu\_x\\) and \\(\\mu\_y\\), of the cluster center via
\\\[\\mu\_x \= \\frac{1}{4}(x\_1\+x\_2\+x\_3\+x\_4\) \\quad \\mu\_y \= \\frac{1}{4}(y\_1\+y\_2\+y\_3\+y\_4\).\\]
In the first cluster from the example, there are 4 data points. These are shown with their cluster center
(standardized flipper length \-0\.35, standardized bill length 0\.99\) highlighted
in Figure [9\.4](clustering.html#fig:10-toy-example-clus1-center).
Figure 9\.4: Cluster 1 from the `penguins_standardized` data set example. Observations are small blue points, with the cluster center highlighted as a large blue point with a black outline.
The second step in computing the WSSD is to add up the squared distance
between each point in the cluster and the cluster center.
We use the straight\-line / Euclidean distance formula
that we learned about in Chapter [5](classification1.html#classification1).
In the 4\-observation cluster example above,
we would compute the WSSD \\(S^2\\) via
\\\[\\begin{align\*}
S^2 \= \\left((x\_1 \- \\mu\_x)^2 \+ (y\_1 \- \\mu\_y)^2\\right) \+ \\left((x\_2 \- \\mu\_x)^2 \+ (y\_2 \- \\mu\_y)^2\\right) \+ \\\\ \\left((x\_3 \- \\mu\_x)^2 \+ (y\_3 \- \\mu\_y)^2\\right) \+ \\left((x\_4 \- \\mu\_x)^2 \+ (y\_4 \- \\mu\_y)^2\\right).
\\end{align\*}\\]
These distances are denoted by lines in Figure [9\.5](clustering.html#fig:10-toy-example-clus1-dists) for the first cluster of the penguin data example.
Figure 9\.5: Cluster 1 from the `penguins_standardized` data set example. Observations are small blue points, with the cluster center highlighted as a large blue point with a black outline. The distances from the observations to the cluster center are represented as black lines.
The larger the value of \\(S^2\\), the more spread out the cluster is, since large \\(S^2\\) means
that points are far from the cluster center. Note, however, that “large” is relative to *both* the
scale of the variables for clustering *and* the number of points in the cluster. A cluster where points
are very close to the center might still have a large \\(S^2\\) if there are many data points in the cluster.
After we have calculated the WSSD for all the clusters,
we sum them together to get the *total WSSD*. For our example,
this means adding up all the squared distances for the 18 observations.
These distances are denoted by black lines in
Figure [9\.6](clustering.html#fig:10-toy-example-all-clus-dists).
Figure 9\.6: All clusters from the `penguins_standardized` data set example. Observations are small orange, blue, and yellow points with cluster centers denoted by larger points with a black outline. The distances from the observations to each of the respective cluster centers are represented as black lines.
Since K\-means uses the straight\-line distance to measure the quality of a clustering,
it is limited to clustering based on quantitative variables.
However, note that there are variants of the K\-means algorithm,
as well as other clustering algorithms entirely,
that use other distance metrics
to allow for non\-quantitative data to be clustered.
These are beyond the scope of this book.
### 9\.5\.2 The clustering algorithm
We begin the K\-means algorithm by picking K,
and randomly assigning a roughly equal number of observations
to each of the K clusters.
An example random initialization is shown in Figure [9\.7](clustering.html#fig:10-toy-kmeans-init).
Figure 9\.7: Random initialization of labels.
Then K\-means consists of two major steps that attempt to minimize the
sum of WSSDs over all the clusters, i.e., the *total WSSD*:
1. **Center update:** Compute the center of each cluster.
2. **Label update:** Reassign each data point to the cluster with the nearest center.
These two steps are repeated until the cluster assignments no longer change.
We show what the first four iterations of K\-means would look like in
Figure [9\.8](clustering.html#fig:10-toy-kmeans-iter). There each pair of plots
in each row corresponds to an iteration,
where the left figure in the pair depicts the center update,
and the right figure in the pair depicts the label update (i.e., the reassignment of data to clusters).
Figure 9\.8: First four iterations of K\-means clustering on the `penguins_standardized` example data set. Each pair of plots corresponds to an iteration. Within the pair, the first plot depicts the center update, and the second plot depicts the reassignment of data to clusters. Cluster centers are indicated by larger points that are outlined in black.
Note that at this point, we can terminate the algorithm since none of the assignments changed
in the fourth iteration; both the centers and labels will remain the same from this point onward.
> **Note:** Is K\-means *guaranteed* to stop at some point, or could it iterate forever? As it turns out,
> thankfully, the answer is that K\-means is guaranteed to stop after *some* number of iterations. For the interested reader, the
> logic for this has three steps: (1\) both the label update and the center update decrease total WSSD in each iteration,
> (2\) the total WSSD is always greater than or equal to 0, and (3\) there are only a finite number of possible
> ways to assign the data to clusters. So at some point, the total WSSD must stop decreasing, which means none of the assignments
> are changing, and the algorithm terminates.
### 9\.5\.3 Random restarts
Unlike the classification and regression models we studied in previous chapters, K\-means can get “stuck” in a bad solution.
For example, Figure [9\.9](clustering.html#fig:10-toy-kmeans-bad-init) illustrates an unlucky random initialization by K\-means.
Figure 9\.9: Random initialization of labels.
Figure [9\.10](clustering.html#fig:10-toy-kmeans-bad-iter) shows what the iterations of K\-means would look like with the unlucky random initialization shown in Figure [9\.9](clustering.html#fig:10-toy-kmeans-bad-init).
Figure 9\.10: First five iterations of K\-means clustering on the `penguins_standardized` example data set with a poor random initialization. Each pair of plots corresponds to an iteration. Within the pair, the first plot depicts the center update, and the second plot depicts the reassignment of data to clusters. Cluster centers are indicated by larger points that are outlined in black.
This looks like a relatively bad clustering of the data, but K\-means cannot improve it.
To solve this problem when clustering data using K\-means, we should randomly re\-initialize the labels a few times, run K\-means for each initialization,
and pick the clustering that has the lowest final total WSSD.
### 9\.5\.4 Choosing K
In order to cluster data using K\-means,
we also have to pick the number of clusters, K.
But unlike in classification, we have no response variable
and cannot perform cross\-validation with some measure of model prediction error.
Further, if K is chosen too small, then multiple clusters get grouped together;
if K is too large, then clusters get subdivided.
In both cases, we will potentially miss interesting structure in the data.
Figure [9\.11](clustering.html#fig:10-toy-kmeans-vary-k) illustrates the impact of K
on K\-means clustering of our penguin flipper and bill length data
by showing the different clusterings for K’s ranging from 1 to 9\.
Figure 9\.11: Clustering of the penguin data for K clusters ranging from 1 to 9\. Cluster centers are indicated by larger points that are outlined in black.
If we set K less than 3, then the clustering merges separate groups of data; this causes a large
total WSSD, since the cluster center is not close to any of the data in the cluster. On
the other hand, if we set K greater than 3, the clustering subdivides subgroups of data; this does indeed still
decrease the total WSSD, but by only a *diminishing amount*. If we plot the total WSSD versus the number of
clusters, we see that the decrease in total WSSD levels off (or forms an “elbow shape”) when we reach roughly
the right number of clusters (Figure [9\.12](clustering.html#fig:10-toy-kmeans-elbow)).
Figure 9\.12: Total WSSD for K clusters ranging from 1 to 9\.
### 9\.5\.1 Measuring cluster quality
The K\-means algorithm is a procedure that groups data into K clusters.
It starts with an initial clustering of the data, and then iteratively
improves it by making adjustments to the assignment of data
to clusters until it cannot improve any further. But how do we measure
the “quality” of a clustering, and what does it mean to improve it?
In K\-means clustering, we measure the quality of a cluster
by its *within\-cluster sum\-of\-squared\-distances* (WSSD).
Computing this involves two steps.
First, we find the cluster centers by computing the mean of each variable
over data points in the cluster. For example, suppose we have a
cluster containing four observations, and we are using two variables, \\(x\\) and \\(y\\), to cluster the data.
Then we would compute the coordinates, \\(\\mu\_x\\) and \\(\\mu\_y\\), of the cluster center via
\\\[\\mu\_x \= \\frac{1}{4}(x\_1\+x\_2\+x\_3\+x\_4\) \\quad \\mu\_y \= \\frac{1}{4}(y\_1\+y\_2\+y\_3\+y\_4\).\\]
In the first cluster from the example, there are 4 data points. These are shown with their cluster center
(standardized flipper length \-0\.35, standardized bill length 0\.99\) highlighted
in Figure [9\.4](clustering.html#fig:10-toy-example-clus1-center).
Figure 9\.4: Cluster 1 from the `penguins_standardized` data set example. Observations are small blue points, with the cluster center highlighted as a large blue point with a black outline.
The second step in computing the WSSD is to add up the squared distance
between each point in the cluster and the cluster center.
We use the straight\-line / Euclidean distance formula
that we learned about in Chapter [5](classification1.html#classification1).
In the 4\-observation cluster example above,
we would compute the WSSD \\(S^2\\) via
\\\[\\begin{align\*}
S^2 \= \\left((x\_1 \- \\mu\_x)^2 \+ (y\_1 \- \\mu\_y)^2\\right) \+ \\left((x\_2 \- \\mu\_x)^2 \+ (y\_2 \- \\mu\_y)^2\\right) \+ \\\\ \\left((x\_3 \- \\mu\_x)^2 \+ (y\_3 \- \\mu\_y)^2\\right) \+ \\left((x\_4 \- \\mu\_x)^2 \+ (y\_4 \- \\mu\_y)^2\\right).
\\end{align\*}\\]
These distances are denoted by lines in Figure [9\.5](clustering.html#fig:10-toy-example-clus1-dists) for the first cluster of the penguin data example.
Figure 9\.5: Cluster 1 from the `penguins_standardized` data set example. Observations are small blue points, with the cluster center highlighted as a large blue point with a black outline. The distances from the observations to the cluster center are represented as black lines.
The larger the value of \\(S^2\\), the more spread out the cluster is, since large \\(S^2\\) means
that points are far from the cluster center. Note, however, that “large” is relative to *both* the
scale of the variables for clustering *and* the number of points in the cluster. A cluster where points
are very close to the center might still have a large \\(S^2\\) if there are many data points in the cluster.
After we have calculated the WSSD for all the clusters,
we sum them together to get the *total WSSD*. For our example,
this means adding up all the squared distances for the 18 observations.
These distances are denoted by black lines in
Figure [9\.6](clustering.html#fig:10-toy-example-all-clus-dists).
Figure 9\.6: All clusters from the `penguins_standardized` data set example. Observations are small orange, blue, and yellow points with cluster centers denoted by larger points with a black outline. The distances from the observations to each of the respective cluster centers are represented as black lines.
Since K\-means uses the straight\-line distance to measure the quality of a clustering,
it is limited to clustering based on quantitative variables.
However, note that there are variants of the K\-means algorithm,
as well as other clustering algorithms entirely,
that use other distance metrics
to allow for non\-quantitative data to be clustered.
These are beyond the scope of this book.
### 9\.5\.2 The clustering algorithm
We begin the K\-means algorithm by picking K,
and randomly assigning a roughly equal number of observations
to each of the K clusters.
An example random initialization is shown in Figure [9\.7](clustering.html#fig:10-toy-kmeans-init).
Figure 9\.7: Random initialization of labels.
Then K\-means consists of two major steps that attempt to minimize the
sum of WSSDs over all the clusters, i.e., the *total WSSD*:
1. **Center update:** Compute the center of each cluster.
2. **Label update:** Reassign each data point to the cluster with the nearest center.
These two steps are repeated until the cluster assignments no longer change.
We show what the first four iterations of K\-means would look like in
Figure [9\.8](clustering.html#fig:10-toy-kmeans-iter). There each pair of plots
in each row corresponds to an iteration,
where the left figure in the pair depicts the center update,
and the right figure in the pair depicts the label update (i.e., the reassignment of data to clusters).
Figure 9\.8: First four iterations of K\-means clustering on the `penguins_standardized` example data set. Each pair of plots corresponds to an iteration. Within the pair, the first plot depicts the center update, and the second plot depicts the reassignment of data to clusters. Cluster centers are indicated by larger points that are outlined in black.
Note that at this point, we can terminate the algorithm since none of the assignments changed
in the fourth iteration; both the centers and labels will remain the same from this point onward.
> **Note:** Is K\-means *guaranteed* to stop at some point, or could it iterate forever? As it turns out,
> thankfully, the answer is that K\-means is guaranteed to stop after *some* number of iterations. For the interested reader, the
> logic for this has three steps: (1\) both the label update and the center update decrease total WSSD in each iteration,
> (2\) the total WSSD is always greater than or equal to 0, and (3\) there are only a finite number of possible
> ways to assign the data to clusters. So at some point, the total WSSD must stop decreasing, which means none of the assignments
> are changing, and the algorithm terminates.
### 9\.5\.3 Random restarts
Unlike the classification and regression models we studied in previous chapters, K\-means can get “stuck” in a bad solution.
For example, Figure [9\.9](clustering.html#fig:10-toy-kmeans-bad-init) illustrates an unlucky random initialization by K\-means.
Figure 9\.9: Random initialization of labels.
Figure [9\.10](clustering.html#fig:10-toy-kmeans-bad-iter) shows what the iterations of K\-means would look like with the unlucky random initialization shown in Figure [9\.9](clustering.html#fig:10-toy-kmeans-bad-init).
Figure 9\.10: First five iterations of K\-means clustering on the `penguins_standardized` example data set with a poor random initialization. Each pair of plots corresponds to an iteration. Within the pair, the first plot depicts the center update, and the second plot depicts the reassignment of data to clusters. Cluster centers are indicated by larger points that are outlined in black.
This looks like a relatively bad clustering of the data, but K\-means cannot improve it.
To solve this problem when clustering data using K\-means, we should randomly re\-initialize the labels a few times, run K\-means for each initialization,
and pick the clustering that has the lowest final total WSSD.
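Here is a sketch of that idea using base R’s `kmeans` function directly; `X` is again a hypothetical placeholder for a standardized numeric matrix of the clustering variables.
```
# Run K-means from 10 different random initializations and keep the best run,
# i.e., the one with the lowest total within-cluster sum of squares.
runs <- lapply(1:10, function(i) kmeans(X, centers = 3, nstart = 1))
best <- runs[[which.min(sapply(runs, function(fit) fit$tot.withinss))]]
```
In fact, `kmeans` can perform these restarts internally via its `nstart` argument, which is how we will request random restarts with `tidyclust` later in the chapter.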
### 9\.5\.4 Choosing K
In order to cluster data using K\-means,
we also have to pick the number of clusters, K.
But unlike in classification, we have no response variable
and cannot perform cross\-validation with some measure of model prediction error.
Further, if K is chosen too small, then multiple clusters get grouped together;
if K is too large, then clusters get subdivided.
In both cases, we will potentially miss interesting structure in the data.
Figure [9\.11](clustering.html#fig:10-toy-kmeans-vary-k) illustrates the impact of K
on K\-means clustering of our penguin flipper and bill length data
by showing the different clusterings for K’s ranging from 1 to 9\.
Figure 9\.11: Clustering of the penguin data for K clusters ranging from 1 to 9\. Cluster centers are indicated by larger points that are outlined in black.
If we set K less than 3, then the clustering merges separate groups of data; this causes a large
total WSSD, since the cluster center is not close to any of the data in the cluster. On
the other hand, if we set K greater than 3, the clustering subdivides subgroups of data; this does indeed still
decrease the total WSSD, but by only a *diminishing amount*. If we plot the total WSSD versus the number of
clusters, we see that the decrease in total WSSD levels off (or forms an “elbow shape”) when we reach roughly
the right number of clusters (Figure [9\.12](clustering.html#fig:10-toy-kmeans-elbow)).
Figure 9\.12: Total WSSD for K clusters ranging from 1 to 9\.
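As a quick base R sketch of the same elbow computation (the full `tidymodels` workflow appears in the next section), one could compute the total WSSD for each candidate K directly; `X` is again a hypothetical standardized numeric matrix of the clustering variables.
```
ks <- 1:9
# total WSSD for each candidate number of clusters
total_wssd <- sapply(ks, function(k) kmeans(X, centers = k, nstart = 10)$tot.withinss)
plot(ks, total_wssd, type = "b", xlab = "K", ylab = "Total within-cluster sum of squares")
```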
9\.6 K\-means in R
------------------
We can perform K\-means clustering in R using a `tidymodels` workflow similar
to those in the earlier classification and regression chapters.
We will begin by loading the `tidyclust` library, which contains the necessary
functionality.
```
library(tidyclust)
```
Returning to the original (unstandardized) `penguins` data,
recall that K\-means clustering uses straight\-line
distance to decide which points are similar to
each other. Therefore, the *scale* of each of the variables in the data
will influence which cluster data points end up being assigned.
Variables with a large scale will have a much larger
effect on deciding cluster assignment than variables with a small scale.
To address this problem, we need to create a recipe that
standardizes our data
before clustering using the `step_scale` and `step_center` preprocessing steps.
Standardization will ensure that each variable has a mean
of 0 and standard deviation of 1 prior to clustering.
We will designate that all variables are to be used in clustering via
the model formula `~ .`.
> **Note:** Recipes were originally designed specifically for *predictive* data
> analysis problems—like classification and regression—not clustering
> problems. So the functions in R that we use to construct recipes are a little bit
> awkward in the setting of clustering. In particular, we will have to treat
> “predictors” here as if it meant “variables to be used in clustering”. So the
> model formula `~ .` specifies that all variables are “predictors”, i.e., all variables
> should be used for clustering. Similarly, when we use the `all_predictors()` function
> in the preprocessing steps, we really mean “apply this step to all variables used for
> clustering.”
```
kmeans_recipe <- recipe(~ ., data = penguins) |>
step_scale(all_predictors()) |>
step_center(all_predictors())
kmeans_recipe
```
```
##
## ── Recipe ──────────
##
## ── Inputs
## Number of variables by role
## predictor: 2
##
## ── Operations
## • Scaling for: all_predictors()
## • Centering for: all_predictors()
```
To indicate that we are performing K\-means clustering, we will use the `k_means`
model specification. We will use the `num_clusters` argument to specify the number
of clusters (here we choose K \= 3\), and specify that we are using the `"stats"` engine.
```
kmeans_spec <- k_means(num_clusters = 3) |>
set_engine("stats")
kmeans_spec
```
```
## K Means Cluster Specification (partition)
##
## Main Arguments:
## num_clusters = 3
##
## Computational engine: stats
```
To actually run the K\-means clustering, we combine the recipe and model
specification in a workflow, and use the `fit` function. Note that the
K\-means algorithm uses a random initialization of assignments; but since
we set the random seed earlier, the clustering will be reproducible.
```
kmeans_fit <- workflow() |>
add_recipe(kmeans_recipe) |>
add_model(kmeans_spec) |>
fit(data = penguins)
kmeans_fit
```
```
## ══ Workflow [trained] ══════════
## Preprocessor: Recipe
## Model: k_means()
##
## ── Preprocessor ──────────
## 2 Recipe Steps
##
## • step_scale()
## • step_center()
##
## ── Model ──────────
## K-means clustering with 3 clusters of sizes 4, 6, 8
##
## Cluster means:
## bill_length_mm flipper_length_mm
## 1 0.9858721 -0.3524358
## 2 0.6828058 1.2606357
## 3 -1.0050404 -0.7692589
##
## Clustering vector:
## [1] 3 3 3 3 3 3 3 3 2 2 2 2 2 2 1 1 1 1
##
## Within cluster sum of squares by cluster:
## [1] 1.098928 1.247042 2.121932
## (between_SS / total_SS = 86.9 %)
##
## Available components:
##
## [1] "cluster" "centers" "totss" "withinss" "tot.withinss"
## [6] "betweenss" "size" "iter" "ifault"
```
As you can see above, the fit object has a lot of information
that can be used to visualize the clusters, pick K, and evaluate the total WSSD.
Let’s start by visualizing the clusters as a colored scatter plot! In
order to do that, we first need to augment our
original data frame with the cluster assignments. We can
achieve this using the `augment` function from `tidyclust`.
```
clustered_data <- kmeans_fit |>
augment(penguins)
clustered_data
```
```
## # A tibble: 18 × 3
## bill_length_mm flipper_length_mm .pred_cluster
## <dbl> <dbl> <fct>
## 1 39.2 196 Cluster_1
## 2 36.5 182 Cluster_1
## 3 34.5 187 Cluster_1
## 4 36.7 187 Cluster_1
## 5 38.1 181 Cluster_1
## 6 39.2 190 Cluster_1
## 7 36 195 Cluster_1
## 8 37.8 193 Cluster_1
## 9 46.5 213 Cluster_2
## 10 46.1 215 Cluster_2
## 11 47.8 215 Cluster_2
## 12 45 220 Cluster_2
## 13 49.1 212 Cluster_2
## 14 43.3 208 Cluster_2
## 15 46 195 Cluster_3
## 16 46.7 195 Cluster_3
## 17 52.2 197 Cluster_3
## 18 46.8 189 Cluster_3
```
Now that we have the cluster assignments included in the `clustered_data` tidy data frame, we can
visualize them as shown in Figure [9\.13](clustering.html#fig:10-plot-clusters-2).
Note that we are plotting the *un\-standardized* data here; if we for some reason wanted to
visualize the *standardized* data from the recipe, we would need to use the `bake` function
to obtain that first.
```
cluster_plot <- ggplot(clustered_data,
aes(x = flipper_length_mm,
y = bill_length_mm,
color = .pred_cluster),
size = 2) +
geom_point() +
labs(x = "Flipper Length",
y = "Bill Length",
color = "Cluster") +
scale_color_manual(values = c("steelblue",
"darkorange",
"goldenrod1")) +
theme(text = element_text(size = 12))
cluster_plot
```
Figure 9\.13: The data colored by the cluster assignments returned by K\-means.
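As an aside, if you did want to inspect the standardized data mentioned above, one way to obtain it is a short sketch using the `prep` and `bake` functions from the `recipes` package.
```
# Train the recipe on the penguins data and return the scaled, centered columns.
kmeans_recipe |>
  prep() |>
  bake(new_data = NULL)
```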
As mentioned above, we also need to select K by finding
where the “elbow” occurs in the plot of total WSSD versus the number of clusters.
We can obtain the total WSSD (`tot.withinss`) from our
clustering with 3 clusters using the `glance` function.
```
glance(kmeans_fit)
```
```
## # A tibble: 1 × 4
## totss tot.withinss betweenss iter
## <dbl> <dbl> <dbl> <int>
## 1 34 4.47 29.5 2
```
To calculate the total WSSD for a variety of Ks, we will
create a data frame with a column named `num_clusters` with rows containing
each value of K we want to run K\-means with (here, 1 to 9\).
```
penguin_clust_ks <- tibble(num_clusters = 1:9)
penguin_clust_ks
```
```
## # A tibble: 9 × 1
## num_clusters
## <int>
## 1 1
## 2 2
## 3 3
## 4 4
## 5 5
## 6 6
## 7 7
## 8 8
## 9 9
```
Then we construct our model specification again, this time
specifying that we want to tune the `num_clusters` parameter.
```
kmeans_spec <- k_means(num_clusters = tune()) |>
set_engine("stats")
kmeans_spec
```
```
## K Means Cluster Specification (partition)
##
## Main Arguments:
## num_clusters = tune()
##
## Computational engine: stats
```
We combine the recipe and specification in a workflow, and then
use the `tune_cluster` function to run K\-means on each of the different
settings of `num_clusters`. The `grid` argument controls which values of
K we want to try—in this case, the values from 1 to 9 that are
stored in the `penguin_clust_ks` data frame. We set the `resamples`
argument to `apparent(penguins)` to tell K\-means to run on the whole
data set for each value of `num_clusters`. Finally, we collect the results
using the `collect_metrics` function.
```
kmeans_results <- workflow() |>
add_recipe(kmeans_recipe) |>
add_model(kmeans_spec) |>
tune_cluster(resamples = apparent(penguins), grid = penguin_clust_ks) |>
collect_metrics()
kmeans_results
```
```
## # A tibble: 18 × 7
## num_clusters .metric .estimator mean n std_err .config
## <int> <chr> <chr> <dbl> <int> <dbl> <chr>
## 1 1 sse_total standard 34 1 NA Preprocessor1_…
## 2 1 sse_within_total standard 34 1 NA Preprocessor1_…
## 3 2 sse_total standard 34 1 NA Preprocessor1_…
## 4 2 sse_within_total standard 10.9 1 NA Preprocessor1_…
## 5 3 sse_total standard 34 1 NA Preprocessor1_…
## 6 3 sse_within_total standard 4.47 1 NA Preprocessor1_…
## 7 4 sse_total standard 34 1 NA Preprocessor1_…
## 8 4 sse_within_total standard 3.54 1 NA Preprocessor1_…
## 9 5 sse_total standard 34 1 NA Preprocessor1_…
## 10 5 sse_within_total standard 2.23 1 NA Preprocessor1_…
## 11 6 sse_total standard 34 1 NA Preprocessor1_…
## 12 6 sse_within_total standard 1.75 1 NA Preprocessor1_…
## 13 7 sse_total standard 34 1 NA Preprocessor1_…
## 14 7 sse_within_total standard 2.06 1 NA Preprocessor1_…
## 15 8 sse_total standard 34 1 NA Preprocessor1_…
## 16 8 sse_within_total standard 2.46 1 NA Preprocessor1_…
## 17 9 sse_total standard 34 1 NA Preprocessor1_…
## 18 9 sse_within_total standard 0.906 1 NA Preprocessor1_…
```
The total WSSD results correspond to the `mean` column when the `.metric` variable is equal to `sse_within_total`.
We can obtain a tidy data frame with this information using `filter` and `mutate`.
```
kmeans_results <- kmeans_results |>
filter(.metric == "sse_within_total") |>
mutate(total_WSSD = mean) |>
select(num_clusters, total_WSSD)
kmeans_results
```
```
## # A tibble: 9 × 2
## num_clusters total_WSSD
## <int> <dbl>
## 1 1 34
## 2 2 10.9
## 3 3 4.47
## 4 4 3.54
## 5 5 2.23
## 6 6 1.75
## 7 7 2.06
## 8 8 2.46
## 9 9 0.906
```
Now that we have `total_WSSD` and `num_clusters` as columns in a data frame, we can make a line plot
(Figure [9\.14](clustering.html#fig:10-plot-choose-k)) and search for the “elbow” to find which value of K to use.
```
elbow_plot <- ggplot(kmeans_results, aes(x = num_clusters, y = total_WSSD)) +
geom_point() +
geom_line() +
xlab("K") +
ylab("Total within-cluster sum of squares") +
scale_x_continuous(breaks = 1:9) +
theme(text = element_text(size = 12))
elbow_plot
```
Figure 9\.14: A plot showing the total WSSD versus the number of clusters.
It looks like 3 clusters is the right choice for this data.
But why is there a “bump” in the total WSSD plot here?
Shouldn’t total WSSD always decrease as we add more clusters?
Technically yes, but remember: K\-means can get “stuck” in a bad solution.
Unfortunately, for K \= 7 and K \= 8 we had unlucky initializations
and found bad clusterings!
We can help prevent finding a bad clustering
by trying a few different random initializations
via the `nstart` argument in the model specification.
Here we will try using 10 restarts.
```
kmeans_spec <- k_means(num_clusters = tune()) |>
set_engine("stats", nstart = 10)
kmeans_spec
```
```
## K Means Cluster Specification (partition)
##
## Main Arguments:
## num_clusters = tune()
##
## Engine-Specific Arguments:
## nstart = 10
##
## Computational engine: stats
```
Now if we rerun the same workflow with the new model specification,
K\-means clustering will be performed `nstart = 10` times for each value of K.
For each value of K, the best of the 10 runs (the one with the lowest total WSSD) will be kept,
and `collect_metrics` will report the results for that best clustering.
Figure [9\.15](clustering.html#fig:10-choose-k-nstart) shows the resulting
total WSSD plot from using 10 restarts; the bump is gone and the total WSSD decreases as expected.
The more times we perform K\-means clustering,
the more likely we are to find a good clustering (if one exists).
What value should you choose for `nstart`? The answer is that it depends
on many factors: the size and characteristics of your data set,
as well as how powerful your computer is.
From an analysis perspective, the larger the `nstart` value the better;
the trade\-off is that running many clusterings can take a long time.
So the value of `nstart` needs to balance clustering quality
against the time you are willing to spend computing.
```
kmeans_results <- workflow() |>
add_recipe(kmeans_recipe) |>
add_model(kmeans_spec) |>
tune_cluster(resamples = apparent(penguins), grid = penguin_clust_ks) |>
collect_metrics() |>
filter(.metric == "sse_within_total") |>
mutate(total_WSSD = mean) |>
select(num_clusters, total_WSSD)
elbow_plot <- ggplot(kmeans_results, aes(x = num_clusters, y = total_WSSD)) +
geom_point() +
geom_line() +
xlab("K") +
ylab("Total within-cluster sum of squares") +
scale_x_continuous(breaks = 1:9) +
theme(text = element_text(size = 12))
elbow_plot
```
Figure 9\.15: A plot showing the total WSSD versus the number of clusters when K\-means is run with 10 restarts.
9\.7 Exercises
--------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the “Clustering” row.
You can launch an interactive version of the worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of the worksheet by clicking “view worksheet.”
If you instead decide to download the worksheet and run it on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
9\.8 Additional resources
-------------------------
* Chapter 10 of *An Introduction to Statistical
Learning* ([James et al. 2013](#ref-james2013introduction)) provides a
great next stop in the process of learning about clustering and unsupervised
learning in general. In the realm of clustering specifically, it provides a
great companion introduction to K\-means, but also covers *hierarchical*
clustering for when you expect there to be subgroups, and then subgroups within
subgroups, etc., in your data. In the realm of more general unsupervised
learning, it covers *principal components analysis (PCA)*, which is a very
popular technique for reducing the number of predictors in a data set.
Chapter 10 Statistical inference
================================
10\.1 Overview
--------------
A typical data analysis task in practice is to draw conclusions about some
unknown aspect of a population of interest based on observed data sampled from
that population; we typically do not get data on the *entire* population. Data
analysis questions regarding how summaries, patterns, trends, or relationships
in a data set extend to the wider population are called *inferential
questions*. This chapter will start with the fundamental ideas of sampling from
populations and then introduce two common techniques in statistical inference:
*point estimation* and *interval estimation*.
10\.2 Chapter learning objectives
---------------------------------
By the end of the chapter, readers will be able to do the following:
* Describe real\-world examples of questions that can be answered with statistical inference.
* Define common population parameters (e.g., mean, proportion, standard deviation) that are often estimated using sampled data, and estimate these from a sample.
* Define the following statistical sampling terms: population, sample, population parameter, point estimate, and sampling distribution.
* Explain the difference between a population parameter and a sample point estimate.
* Use R to draw random samples from a finite population.
* Use R to create a sampling distribution from a finite population.
* Describe how sample size influences the sampling distribution.
* Define bootstrapping.
* Use R to create a bootstrap distribution to approximate a sampling distribution.
* Contrast the bootstrap and sampling distributions.
10\.3 Why do we need sampling?
------------------------------
We often need to understand how quantities we observe in a subset
of data relate to the same quantities in the broader population. For example, suppose a
retailer is considering selling iPhone accessories, and they want to estimate
how big the market might be. Additionally, they want to strategize how they can
market their products on North American college and university campuses. This
retailer might formulate the following question:
*What proportion of all undergraduate students in North America own an iPhone?*
In the above question, we are interested in making a conclusion about *all*
undergraduate students in North America; this is referred to as the **population**. In
general, the population is the complete collection of individuals or cases we
are interested in studying. Further, in the above question, we are interested
in computing a quantity—the proportion of iPhone owners—based on
the entire population. This proportion is referred to as a **population parameter**. In
general, a population parameter is a numerical characteristic of the entire
population. To compute this number in the example above, we would need to ask
every single undergraduate in North America whether they own an iPhone. In
practice, directly computing population parameters is often time\-consuming and
costly, and sometimes impossible.
A more practical approach would be to make measurements for a **sample**, i.e., a
subset of individuals collected from the population. We can then compute a
**sample estimate**—a numerical characteristic of the sample—that
estimates the population parameter. For example, suppose we randomly selected
ten undergraduate students across North America (the sample) and computed the
proportion of those students who own an iPhone (the sample estimate). In that
case, we might suspect that proportion is a reasonable estimate of the
proportion of students who own an iPhone in the entire population. Figure
[10\.1](inference.html#fig:11-population-vs-sample) illustrates this process.
In general, the process of using a sample to make a conclusion about the
broader population from which it is taken is referred to as **statistical inference**.
Figure 10\.1: The process of using a sample from a broader population to obtain a point estimate of a population parameter. In this case, a sample of 10 individuals yielded 6 who own an iPhone, resulting in an estimated population proportion of 60% iPhone owners. The actual population proportion in this example illustration is 53\.8%.
Note that proportions are not the *only* kind of population parameter we might
be interested in. For example, suppose an undergraduate student studying at the University
of British Columbia in Canada is looking for an apartment
to rent. They need to create a budget, so they want to know about
studio apartment rental prices in Vancouver. This student might
formulate the question:
*What is the average price per month of studio apartment rentals in Vancouver?*
In this case, the population consists of all studio apartment rentals in Vancouver, and the
population parameter is the *average price per month*. Here we used the average
as a measure of the center to describe the “typical value” of studio apartment
rental prices. But even within this one example, we could also be interested in
many other population parameters. For instance, we know that not every studio
apartment rental in Vancouver will have the same price per month. The student
might be interested in how much monthly prices vary and want to find a measure
of the rentals’ spread (or variability), such as the standard deviation. Or perhaps the
student might be interested in the fraction of studio apartment rentals that
cost more than $1000 per month. The question we want to answer will help us
determine the parameter we want to estimate. If we were somehow able to observe
the whole population of studio apartment rental offerings in Vancouver, we
could compute each of these numbers exactly; therefore, these are all
population parameters. There are many kinds of observations and population
parameters that you will run into in practice, but in this chapter, we will
focus on two settings:
1. Using categorical observations to estimate the proportion of a category
2. Using quantitative observations to estimate the average (or mean)
10\.4 Sampling distributions
----------------------------
### 10\.4\.1 Sampling distributions for proportions
We will look at an example using data from
[Inside Airbnb](http://insideairbnb.com/) ([Cox n.d.](#ref-insideairbnb)). Airbnb is an online
marketplace for arranging vacation rentals and places to stay. The data set
contains listings for Vancouver, Canada, in September 2020\. Our data
includes an ID number, neighborhood, type of room, the number of people the
rental accommodates, number of bathrooms, bedrooms, beds, and the price per
night.
```
library(tidyverse)
set.seed(123)
airbnb <- read_csv("data/listings.csv")
airbnb
```
```
## # A tibble: 4,594 × 8
## id neighbourhood room_type accommodates bathrooms bedrooms beds price
## <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl> <dbl>
## 1 1 Downtown Entire h… 5 2 baths 2 2 150
## 2 2 Downtown Eastside Entire h… 4 2 baths 2 2 132
## 3 3 West End Entire h… 2 1 bath 1 1 85
## 4 4 Kensington-Cedar… Entire h… 2 1 bath 1 0 146
## 5 5 Kensington-Cedar… Entire h… 4 1 bath 1 2 110
## 6 6 Hastings-Sunrise Entire h… 4 1 bath 2 3 195
## 7 7 Renfrew-Collingw… Entire h… 8 3 baths 4 5 130
## 8 8 Mount Pleasant Entire h… 2 1 bath 1 1 94
## 9 9 Grandview-Woodla… Private … 2 1 privat… 1 1 79
## 10 10 West End Private … 2 1 privat… 1 1 75
## # ℹ 4,584 more rows
```
Suppose the city of Vancouver wants information about Airbnb rentals to help
plan city bylaws, and they want to know how many Airbnb places are listed as
entire homes and apartments (rather than as private or shared rooms). Therefore
they may want to estimate the true proportion of all Airbnb listings where the
“type of place” is listed as “entire home or apartment.” Of course, we usually
do not have access to the true population, but here let’s imagine (for learning
purposes) that our data set represents the population of all Airbnb rental
listings in Vancouver, Canada. We can find the proportion of listings where
`room_type == "Entire home/apt"`.
```
airbnb |>
summarize(
n = sum(room_type == "Entire home/apt"),
proportion = sum(room_type == "Entire home/apt") / nrow(airbnb)
)
```
```
## # A tibble: 1 × 2
## n proportion
## <int> <dbl>
## 1 3434 0.747
```
We can see that the proportion of `Entire home/apt` listings in
the data set is 0\.747\. This
value, 0\.747, is the population parameter. Remember, this
parameter value is usually unknown in real data analysis problems, as it is
typically not possible to make measurements for an entire population.
Instead, perhaps we can approximate it with a small subset of data!
To investigate this idea, let’s try randomly selecting 40 listings (*i.e.,* taking a random sample of
size 40 from our population), and computing the proportion for that sample.
We will use the `rep_sample_n` function from the `infer`
package to take the sample. The arguments of `rep_sample_n` are (1\) the data frame to
sample from, and (2\) the size of the sample to take.
```
library(infer)
sample_1 <- rep_sample_n(tbl = airbnb, size = 40)
airbnb_sample_1 <- summarize(sample_1,
n = sum(room_type == "Entire home/apt"),
prop = sum(room_type == "Entire home/apt") / 40
)
airbnb_sample_1
```
```
## # A tibble: 1 × 3
## replicate n prop
## <int> <int> <dbl>
## 1 1 28 0.7
```
Here we see that the proportion of entire home/apartment listings in this
random sample is 0\.7\. Wow—that’s close to our
true population value! But remember, we computed the proportion using a random sample of size 40\.
This has two consequences. First, this value is only an *estimate*, i.e., our best guess
of our population parameter using this sample.
Given that we are estimating a single value here, we often
refer to it as a **point estimate**. Second, since the sample was random,
if we were to take *another* random sample of size 40 and compute the proportion for that sample,
we would not get the same answer:
```
sample_2 <- rep_sample_n(airbnb, size = 40)
airbnb_sample_2 <- summarize(sample_2,
n = sum(room_type == "Entire home/apt"),
prop = sum(room_type == "Entire home/apt") / 40
)
airbnb_sample_2
```
```
## # A tibble: 1 × 3
## replicate n prop
## <int> <int> <dbl>
## 1 1 35 0.875
```
Confirmed! We get a different value for our estimate this time.
That means that our point estimate might be unreliable. Indeed, estimates vary from sample to
sample due to **sampling variability**. But just how much
should we expect the estimates of our random samples to vary?
Or in other words, how much can we really trust our point estimate based on a single sample?
To understand this, we will simulate many samples (much more than just two)
of size 40 from our population of listings and calculate the proportion of
entire home/apartment listings in each sample. This simulation will create
many sample proportions, which we can visualize using a histogram. The
distribution of the estimate for all possible samples of a given size (which we
commonly refer to as \\(n\\)) from a population is called
a **sampling distribution**. The sampling distribution will help us see how much we would
expect our sample proportions from this population to vary for samples of size 40\.
We again use the `rep_sample_n` to take samples of size 40 from our
population of Airbnb listings. But this time we set the `reps` argument to 20,000 to specify
that we want to take 20,000 samples of size 40\.
```
samples <- rep_sample_n(airbnb, size = 40, reps = 20000)
samples
```
```
## # A tibble: 800,000 × 9
## # Groups: replicate [20,000]
## replicate id neighbourhood room_type accommodates bathrooms bedrooms beds
## <int> <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl>
## 1 1 4403 Downtown Entire h… 2 1 bath 1 1
## 2 1 902 Kensington-C… Private … 2 1 shared… 1 1
## 3 1 3808 Hastings-Sun… Entire h… 6 1.5 baths 1 3
## 4 1 561 Kensington-C… Entire h… 6 1 bath 2 2
## 5 1 3385 Mount Pleasa… Entire h… 4 1 bath 1 1
## 6 1 4232 Shaughnessy Entire h… 6 1.5 baths 2 2
## 7 1 1169 Downtown Entire h… 3 1 bath 1 1
## 8 1 959 Kitsilano Private … 1 1.5 shar… 1 1
## 9 1 2171 Downtown Entire h… 2 1 bath 1 1
## 10 1 1258 Dunbar South… Entire h… 4 1 bath 2 2
## # ℹ 799,990 more rows
## # ℹ 1 more variable: price <dbl>
```
Notice that the column `replicate` indicates the replicate, or sample, to which
each listing belongs. Above, since by default R only prints the first few rows,
it looks like all of the listings have `replicate` set to 1\. But you can
check the last few entries using the `tail()` function to verify that
we indeed created 20,000 samples (or replicates).
```
tail(samples)
```
```
## # A tibble: 6 × 9
## # Groups: replicate [1]
## replicate id neighbourhood room_type accommodates bathrooms bedrooms beds
## <int> <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl>
## 1 20000 3414 Marpole Entire h… 4 1 bath 2 2
## 2 20000 1974 Hastings-Sunr… Private … 2 1 shared… 1 1
## 3 20000 1846 Riley Park Entire h… 4 1 bath 2 3
## 4 20000 862 Downtown Entire h… 5 2 baths 2 2
## 5 20000 3295 Victoria-Fras… Private … 2 1 shared… 1 1
## 6 20000 997 Dunbar Southl… Private … 1 1.5 shar… 1 1
## # ℹ 1 more variable: price <dbl>
```
Now that we have obtained the samples, we need to compute the
proportion of entire home/apartment listings in each sample.
We first group the data by the `replicate` variable—to group the
set of listings in each sample together—and then use `summarize`
to compute the proportion in each sample.
We print both the first and last few entries of the resulting data frame
below to show that we end up with 20,000 point estimates, one for each of the 20,000 samples.
```
sample_estimates <- samples |>
group_by(replicate) |>
summarize(sample_proportion = sum(room_type == "Entire home/apt") / 40)
sample_estimates
```
```
## # A tibble: 20,000 × 2
## replicate sample_proportion
## <int> <dbl>
## 1 1 0.85
## 2 2 0.85
## 3 3 0.65
## 4 4 0.7
## 5 5 0.75
## 6 6 0.725
## 7 7 0.775
## 8 8 0.775
## 9 9 0.7
## 10 10 0.675
## # ℹ 19,990 more rows
```
```
tail(sample_estimates)
```
```
## # A tibble: 6 × 2
## replicate sample_proportion
## <int> <dbl>
## 1 19995 0.75
## 2 19996 0.675
## 3 19997 0.625
## 4 19998 0.75
## 5 19999 0.875
## 6 20000 0.65
```
We can now visualize the sampling distribution of sample proportions
for samples of size 40 using a histogram in Figure [10\.2](inference.html#fig:11-example-proportions7). Keep in mind: in the real world,
we don’t have access to the full population. So we
can’t take many samples and can’t actually construct or visualize the sampling distribution.
We have created this particular example
such that we *do* have access to the full population, which lets us visualize the
sampling distribution directly for learning purposes.
```
sampling_distribution <- ggplot(sample_estimates, aes(x = sample_proportion)) +
geom_histogram(color = "lightgrey", bins = 12) +
labs(x = "Sample proportions", y = "Count") +
theme(text = element_text(size = 12))
sampling_distribution
```
Figure 10\.2: Sampling distribution of the sample proportion for sample size 40\.
The sampling distribution in Figure [10\.2](inference.html#fig:11-example-proportions7) appears
to be bell\-shaped, is roughly symmetric, and has one peak. It is centered
around 0\.7 and the sample proportions
range from about 0\.4 to about
1\. In fact, we can
calculate the mean of the sample proportions.
```
sample_estimates |>
summarize(mean_proportion = mean(sample_proportion))
```
```
## # A tibble: 1 × 1
## mean_proportion
## <dbl>
## 1 0.747
```
We notice that the sample proportions are centered around the population
proportion value, 0\.747! In general, the mean of
the sampling distribution should be equal to the population proportion.
This is great news because it means that the sample proportion is neither an overestimate nor an
underestimate of the population proportion.
In other words, if you were to take many samples as we did above, there is no tendency
towards over or underestimating the population proportion.
In a real data analysis setting where you just have access to your single
sample, this implies that you would suspect that your sample point estimate is
roughly equally likely to be above or below the true population proportion.
### 10\.4\.2 Sampling distributions for means
In the previous section, our variable of interest—`room_type`—was
*categorical*, and the population parameter was a proportion. As mentioned in
the chapter introduction, there are many choices of the population parameter
for each type of variable. What if we wanted to infer something about a
population of *quantitative* variables instead? For instance, a traveler
visiting Vancouver, Canada may wish to estimate the
population *mean* (or average) price per night of Airbnb listings. Knowing
the average could help them tell whether a particular listing is overpriced.
We can visualize the population distribution of the price per night with a histogram.
```
population_distribution <- ggplot(airbnb, aes(x = price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Price per night (dollars)", y = "Count") +
theme(text = element_text(size = 12))
population_distribution
```
Figure 10\.3: Population distribution of price per night (dollars) for all Airbnb listings in Vancouver, Canada.
In Figure [10\.3](inference.html#fig:11-example-means2), we see that the population distribution
has one peak. It is also skewed (i.e., is not symmetric): most of the listings are
less than $250 per night, but a small number of listings cost much more,
creating a long tail on the histogram’s right side.
Along with visualizing the population, we can calculate the population mean,
the average price per night for all the Airbnb listings.
```
population_parameters <- airbnb |>
summarize(mean_price = mean(price))
population_parameters
```
```
## # A tibble: 1 × 1
## mean_price
## <dbl>
## 1 154.51
```
The price per night of all Airbnb rentals in Vancouver, BC
is $154\.51, on average. This value is our
population parameter since we are calculating it using the population data.
Now suppose we did not have access to the population data (which is usually the
case!), yet we wanted to estimate the mean price per night. We could answer
this question by taking a random sample of as many Airbnb listings as our time
and resources allow. Let’s say we could do this for 40 listings. What would
such a sample look like? Let’s take advantage of the fact that we do have
access to the population data and simulate taking one random sample of 40
listings in R, again using `rep_sample_n`.
```
one_sample <- airbnb |>
rep_sample_n(40)
```
We can create a histogram to visualize the distribution of observations in the
sample (Figure [10\.4](inference.html#fig:11-example-means-sample-hist)), and calculate the mean
of our sample.
```
sample_distribution <- ggplot(one_sample, aes(price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Price per night (dollars)", y = "Count") +
theme(text = element_text(size = 12))
sample_distribution
```
Figure 10\.4: Distribution of price per night (dollars) for sample of 40 Airbnb listings.
```
estimates <- one_sample |>
summarize(mean_price = mean(price))
estimates
```
```
## # A tibble: 1 × 2
## replicate mean_price
## <int> <dbl>
## 1 1 155.80
```
The average value of the sample of size 40
is $155\.80\. This
number is a point estimate for the mean of the full population.
Recall that the population mean was
$154\.51\. So our estimate was fairly close to
the population parameter: the mean was about
0\.8%
off. Note that we usually cannot compute the estimate’s accuracy in practice
since we do not have access to the population parameter; if we did, we wouldn’t
need to estimate it!
Also, recall from the previous section that the point estimate can vary; if we
took another random sample from the population, our estimate’s value might
change. So then, did we just get lucky with our point estimate above? How much
does our estimate vary across different samples of size 40 in this example?
Again, since we have access to the population, we can take many samples and
plot the sampling distribution of sample means for samples of size 40 to
get a sense for this variation. In this case, we’ll use 20,000 samples of size
40\.
```
samples <- rep_sample_n(airbnb, size = 40, reps = 20000)
samples
```
```
## # A tibble: 800,000 × 9
## # Groups: replicate [20,000]
## replicate id neighbourhood room_type accommodates bathrooms bedrooms beds
## <int> <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl>
## 1 1 1177 Downtown Entire h… 4 2 baths 2 2
## 2 1 4063 Downtown Entire h… 2 1 bath 1 1
## 3 1 2641 Kitsilano Private … 1 1 shared… 1 1
## 4 1 1941 West End Entire h… 2 1 bath 1 1
## 5 1 2431 Mount Pleasa… Entire h… 2 1 bath 1 1
## 6 1 1871 Arbutus Ridge Entire h… 4 1 bath 2 2
## 7 1 2557 Marpole Private … 3 1 privat… 1 2
## 8 1 3534 Downtown Entire h… 2 1 bath 1 1
## 9 1 4379 Downtown Entire h… 4 1 bath 1 0
## 10 1 2161 Downtown Entire h… 4 2 baths 2 2
## # ℹ 799,990 more rows
## # ℹ 1 more variable: price <dbl>
```
Now we can calculate the sample mean for each replicate and plot the sampling
distribution of sample means for samples of size 40\.
```
sample_estimates <- samples |>
group_by(replicate) |>
summarize(mean_price = mean(price))
sample_estimates
```
```
## # A tibble: 20,000 × 2
## replicate mean_price
## <int> <dbl>
## 1 1 160.06
## 2 2 173.18
## 3 3 131.20
## 4 4 176.96
## 5 5 125.65
## 6 6 148.84
## 7 7 134.82
## 8 8 137.26
## 9 9 166.11
## 10 10 157.81
## # ℹ 19,990 more rows
```
```
sampling_distribution_40 <- ggplot(sample_estimates, aes(x = mean_price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Sample mean price per night (dollars)", y = "Count") +
theme(text = element_text(size = 12))
sampling_distribution_40
```
Figure 10\.5: Sampling distribution of the sample means for sample size of 40\.
In Figure [10\.5](inference.html#fig:11-example-means4), the sampling distribution of the mean
has one peak and is bell\-shaped. Most of the estimates are between
about $140 and
$170; but there is
a good fraction of cases outside this range (i.e., where the point estimate was
not close to the population parameter). So it does indeed look like we were
quite lucky when we estimated the population mean with only
0\.8% error.
Let’s visualize the population distribution, distribution of the sample, and
the sampling distribution on one plot to compare them in Figure
[10\.6](inference.html#fig:11-example-means5). Comparing these three distributions, the centers
of the distributions are all around the same price (around $150\). The original
population distribution has a long right tail, and the sample distribution has
a similar shape to that of the population distribution. However, the sampling
distribution is not shaped like the population or sample distribution. Instead,
it has a bell shape, and it has a lower spread than the population or sample
distributions. The sample means vary less than the individual observations
because there will be some high values and some small values in any random
sample, which will keep the average from being too extreme.
Figure 10\.6: Comparison of population distribution, sample distribution, and sampling distribution.
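A comparison like this can be assembled by stacking the three data sets and faceting. The following is only a sketch (not the exact code used to produce Figure [10\.6](inference.html#fig:11-example-means5)).
```
comparison <- bind_rows(
  tibble(price = airbnb$price, distribution = "Population"),
  tibble(price = one_sample$price, distribution = "Sample (n = 40)"),
  tibble(price = sample_estimates$mean_price, distribution = "Sampling distribution of the mean")
)
ggplot(comparison, aes(x = price)) +
  geom_histogram(color = "lightgrey") +
  facet_wrap(~distribution, ncol = 1, scales = "free") +
  labs(x = "Price per night (dollars)", y = "Count")
```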
Given that there is quite a bit of variation in the sampling distribution of
the sample mean—i.e., the point estimate that we obtain is not very
reliable—is there any way to improve the estimate? One way to improve a
point estimate is to take a *larger* sample. To illustrate what effect this
has, we will take many samples of size 20, 50, 100, and 500, and plot the
sampling distribution of the sample mean. We indicate the mean of the sampling
distribution with a vertical dashed line.
Figure 10\.7: Comparison of sampling distributions, with mean highlighted as a vertical dashed line.
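A sketch of how sampling distributions like these could be generated for several sample sizes is shown below (this is not the exact code used for Figure [10\.7](inference.html#fig:11-example-means7), and you may want to reduce `reps` if it runs slowly on your machine).
```
sample_sizes <- c(20, 50, 100, 500)
# one sampling distribution of the sample mean per sample size
many_sampling_dists <- map_dfr(sample_sizes, function(size) {
  rep_sample_n(airbnb, size = size, reps = 20000) |>
    group_by(replicate) |>
    summarize(mean_price = mean(price)) |>
    mutate(sample_size = size)
})
# mean of each sampling distribution, for the dashed vertical lines
dist_means <- many_sampling_dists |>
  group_by(sample_size) |>
  summarize(mean_of_means = mean(mean_price))
ggplot(many_sampling_dists, aes(x = mean_price)) +
  geom_histogram(color = "lightgrey") +
  geom_vline(data = dist_means, aes(xintercept = mean_of_means), linetype = "dashed") +
  facet_wrap(~sample_size, scales = "free") +
  labs(x = "Sample mean price per night (dollars)", y = "Count")
```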
Based on the visualization in Figure [10\.7](inference.html#fig:11-example-means7), three points
about the sample mean become clear. First, the mean of the sample mean (across
samples) is equal to the population mean. In other words, the sampling
distribution is centered at the population mean. Second, increasing the size of
the sample decreases the spread (i.e., the variability) of the sampling
distribution. Therefore, a larger sample size results in a more reliable point
estimate of the population parameter. And third, the distribution of the sample
mean is roughly bell\-shaped.
> **Note:** You might notice that in the `n = 20` case in Figure [10\.7](inference.html#fig:11-example-means7),
> the distribution is not *quite* bell\-shaped. There is a bit of skew towards the right!
> You might also notice that in the `n = 50` case and larger, that skew seems to disappear.
> In general, the sampling distribution—for both means and proportions—only
> becomes bell\-shaped *once the sample size is large enough*.
> How large is “large enough?” Unfortunately, it depends entirely on the problem at hand. But
> as a rule of thumb, often a sample size of at least 20 will suffice.
### 10\.4\.3 Summary
1. A point estimate is a single value computed using a sample from a population (e.g., a mean or proportion).
2. The sampling distribution of an estimate is the distribution of the estimate for all possible samples of a fixed size from the same population.
3. The shape of the sampling distribution is usually bell\-shaped with one peak and centered at the population mean or proportion.
4. The spread of the sampling distribution is related to the sample size. As the sample size increases, the spread of the sampling distribution decreases.
10\.5 Bootstrapping
-------------------
### 10\.5\.1 Overview
*Why all this emphasis on sampling distributions?*
We saw in the previous section that we could compute a **point estimate** of a
population parameter using a sample of observations from the population. And
since we constructed examples where we had access to the population, we could
evaluate how accurate the estimate was, and even get a sense of how much the
estimate would vary for different samples from the population. But in real
data analysis settings, we usually have *just one sample* from our population
and do not have access to the population itself. Therefore we cannot construct
the sampling distribution as we did in the previous section. And as we saw, our
sample estimate’s value can vary significantly from the population parameter.
So reporting the point estimate from a single sample alone may not be enough.
We also need to report some notion of *uncertainty* in the value of the point
estimate.
Unfortunately, we cannot construct the exact sampling distribution without
full access to the population. However, if we could somehow *approximate* what
the sampling distribution would look like for a sample, we could
use that approximation to then report how uncertain our sample
point estimate is (as we did above with the *exact* sampling
distribution). There are several methods to accomplish this; in this book, we
will use the *bootstrap*. We will discuss **interval estimation** and
construct
**confidence intervals** using just a single sample from a population. A
confidence interval is a range of plausible values for our population parameter.
Here is the key idea. First, if you take a big enough sample, it *looks like*
the population. Notice the histograms’ shapes for samples of different sizes
taken from the population in Figure [10\.8](inference.html#fig:11-example-bootstrapping0). We
see that the sample’s distribution looks like that of the population for a
large enough sample.
Figure 10\.8: Comparison of samples of different sizes from the population.
In the previous section, we took many samples of the same size *from our
population* to get a sense of the variability of a sample estimate. But if our
sample is big enough that it looks like our population, we can pretend that our
sample *is* the population, and take more samples (with replacement) of the
same size from it instead! This very clever technique is
called **the bootstrap**. Note that by taking many samples from our single, observed
sample, we do not obtain the true sampling distribution, but rather an
approximation that we call **the bootstrap distribution**.
> **Note:** We must sample *with* replacement when using the bootstrap.
> Otherwise, if we had a sample of size \\(n\\), and obtained a sample from it of
> size \\(n\\) *without* replacement, it would just return our original sample!
This section will explore how to create a bootstrap distribution from a single
sample using R. The process is visualized in Figure [10\.9](inference.html#fig:11-intro-bootstrap-image).
For a sample of size \\(n\\), you would do the following:
1. Randomly select an observation from the original sample, which was drawn from the population.
2. Record the observation’s value.
3. Replace that observation.
4. Repeat steps 1–3 (sampling *with* replacement) until you have \\(n\\) observations, which form a bootstrap sample.
5. Calculate the bootstrap point estimate (e.g., mean, median, proportion, slope, etc.) of the \\(n\\) observations in your bootstrap sample.
6. Repeat steps 1–5 many times to create a distribution of point estimates (the bootstrap distribution).
7. Calculate the plausible range of values around our observed point estimate.
Figure 10\.9: Overview of the bootstrap process.
### 10\.5\.2 Bootstrapping in R
Let’s continue working with our Airbnb example to illustrate how we might create
and use a bootstrap distribution using just a single sample from the population.
Once again, suppose we are
interested in estimating the population mean price per night of all Airbnb
listings in Vancouver, Canada, using a single sample size of 40\.
Recall our point estimate was $155\.80\. The
histogram of prices in the sample is displayed in Figure [10\.10](inference.html#fig:11-bootstrapping1).
```
one_sample
```
```
## # A tibble: 40 × 8
## id neighbourhood room_type accommodates bathrooms bedrooms beds price
## <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl> <dbl>
## 1 3928 Marpole Private … 2 1 shared… 1 1 58
## 2 3013 Kensington-Cedar… Entire h… 4 1 bath 2 2 112
## 3 3156 Downtown Entire h… 6 2 baths 2 2 151
## 4 3873 Dunbar Southlands Private … 5 1 bath 2 3 700
## 5 3632 Downtown Eastside Entire h… 6 2 baths 3 3 157
## 6 296 Kitsilano Private … 1 1 shared… 1 1 100
## 7 3514 West End Entire h… 2 1 bath 1 1 110
## 8 594 Sunset Entire h… 5 1 bath 3 3 105
## 9 3305 Dunbar Southlands Entire h… 4 1 bath 1 2 196
## 10 938 Downtown Entire h… 7 2 baths 2 3 269
## # ℹ 30 more rows
```
```
one_sample_dist <- ggplot(one_sample, aes(price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Price per night (dollars)", y = "Count") +
theme(text = element_text(size = 12))
one_sample_dist
```
Figure 10\.10: Histogram of price per night (dollars) for one sample of size 40\.
The histogram for the sample is skewed, with a few observations out to the right. The
mean of the sample is $155\.80\.
Remember, in practice, we usually only have this one sample from the population. So
this sample and estimate are the only data we can work with.
We now perform steps 1–5 listed above to generate a single bootstrap
sample in R and calculate a point estimate from that bootstrap sample. We will
use the `rep_sample_n` function as we did when we were
creating our sampling distribution. But critically, note that we now
pass `one_sample`—our single sample of size 40—as the first argument.
And since we need to sample with replacement,
we change the argument for `replace` from its default value of `FALSE` to `TRUE`.
```
boot1 <- one_sample |>
rep_sample_n(size = 40, replace = TRUE, reps = 1)
boot1_dist <- ggplot(boot1, aes(price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Price per night (dollars)", y = "Count") +
theme(text = element_text(size = 12))
boot1_dist
```
Figure 10\.11: Bootstrap distribution.
```
summarize(boot1, mean_price = mean(price))
```
```
## # A tibble: 1 × 2
## replicate mean_price
## <int> <dbl>
## 1 1 164.20
```
Notice in Figure [10\.11](inference.html#fig:11-bootstrapping3) that the histogram of our bootstrap sample
has a similar shape to the original sample histogram. Though the shapes of
the distributions are similar, they are not identical. You’ll also notice that
the original sample mean and the bootstrap sample mean differ. How might that
happen? Remember that we are sampling with replacement from the original
sample, so we don’t end up with the same sample values again. We are *pretending*
that our single sample is close to the population, and we are trying to
mimic drawing another sample from the population by drawing one from our original
sample.
Let’s now take 20,000 bootstrap samples from the original sample (`one_sample`)
using `rep_sample_n`, and calculate the means for
each of those replicates. Recall that this assumes that `one_sample` *looks like*
our original population; but since we do not have access to the population itself,
this is often the best we can do.
```
boot20000 <- one_sample |>
rep_sample_n(size = 40, replace = TRUE, reps = 20000)
boot20000
```
```
## # A tibble: 800,000 × 9
## # Groups: replicate [20,000]
## replicate id neighbourhood room_type accommodates bathrooms bedrooms beds
## <int> <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl>
## 1 1 1276 Hastings-Sun… Entire h… 2 1 bath 1 1
## 2 1 3235 Hastings-Sun… Entire h… 2 1 bath 1 1
## 3 1 1301 Oakridge Entire h… 12 2 baths 2 12
## 4 1 118 Grandview-Wo… Entire h… 4 1 bath 2 2
## 5 1 2550 Downtown Eas… Private … 2 1.5 shar… 1 1
## 6 1 1006 Grandview-Wo… Entire h… 5 1 bath 3 4
## 7 1 3632 Downtown Eas… Entire h… 6 2 baths 3 3
## 8 1 1923 West End Entire h… 4 2 baths 2 2
## 9 1 3873 Dunbar South… Private … 5 1 bath 2 3
## 10 1 2349 Kerrisdale Private … 2 1 shared… 1 1
## # ℹ 799,990 more rows
## # ℹ 1 more variable: price <dbl>
```
```
tail(boot20000)
```
```
## # A tibble: 6 × 9
## # Groups: replicate [1]
## replicate id neighbourhood room_type accommodates bathrooms bedrooms beds
## <int> <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl>
## 1 20000 1949 Kitsilano Entire h… 3 1 bath 1 1
## 2 20000 1025 Kensington-Ce… Entire h… 3 1 bath 1 1
## 3 20000 3013 Kensington-Ce… Entire h… 4 1 bath 2 2
## 4 20000 2868 Downtown Entire h… 2 1 bath 1 1
## 5 20000 3156 Downtown Entire h… 6 2 baths 2 2
## 6 20000 1923 West End Entire h… 4 2 baths 2 2
## # ℹ 1 more variable: price <dbl>
```
Let’s take a look at the histograms of the first six replicates of our bootstrap samples.
```
six_bootstrap_samples <- boot20000 |>
filter(replicate <= 6)
ggplot(six_bootstrap_samples, aes(price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Price per night (dollars)", y = "Count") +
facet_wrap(~replicate) +
theme(text = element_text(size = 12))
```
Figure 10\.12: Histograms of the first six replicates of the bootstrap samples.
We see in Figure [10\.12](inference.html#fig:11-bootstrapping-six-bootstrap-samples) how the
bootstrap samples differ. We can also calculate the sample mean for each of
these six replicates.
```
six_bootstrap_samples |>
group_by(replicate) |>
summarize(mean_price = mean(price))
```
```
## # A tibble: 6 × 2
## replicate mean_price
## <int> <dbl>
## 1 1 177.2
## 2 2 131.45
## 3 3 179.10
## 4 4 171.35
## 5 5 191.32
## 6 6 170.05
```
We can see that the bootstrap sample distributions and the sample means are
different. They are different because we are sampling *with replacement*. We
will now calculate point estimates for our 20,000 bootstrap samples and
generate a bootstrap distribution of our point estimates. The bootstrap
distribution (Figure [10\.13](inference.html#fig:11-bootstrapping5)) suggests how we might expect
our point estimate to behave if we took another sample.
```
boot20000_means <- boot20000 |>
group_by(replicate) |>
summarize(mean_price = mean(price))
boot20000_means
```
```
## # A tibble: 20,000 × 2
## replicate mean_price
## <int> <dbl>
## 1 1 177.2
## 2 2 131.45
## 3 3 179.10
## 4 4 171.35
## 5 5 191.32
## 6 6 170.05
## 7 7 178.83
## 8 8 154.78
## 9 9 163.85
## 10 10 209.28
## # ℹ 19,990 more rows
```
```
tail(boot20000_means)
```
```
## # A tibble: 6 × 2
## replicate mean_price
## <int> <dbl>
## 1 19995 130.40
## 2 19996 189.18
## 3 19997 168.98
## 4 19998 168.23
## 5 19999 155.73
## 6 20000 136.95
```
```
boot_est_dist <- ggplot(boot20000_means, aes(x = mean_price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Sample mean price per night (dollars)", y = "Count") +
theme(text = element_text(size = 12))
boot_est_dist
```
Figure 10\.13: Distribution of the bootstrap sample means.
Let’s compare the bootstrap distribution—which we construct by taking many samples from our original sample of size 40—with
the true sampling distribution—which corresponds to taking many samples from the population.
Figure 10\.14: Comparison of the distribution of the bootstrap sample means and sampling distribution.
There are two essential points that we can take away from Figure
[10\.14](inference.html#fig:11-bootstrapping6). First, the shape and spread of the true sampling
distribution and the bootstrap distribution are similar; the bootstrap
distribution lets us get a sense of the point estimate’s variability. The
second important point is that the means of these two distributions are
different. The sampling distribution is centered at
$154\.51, the population mean value. However, the bootstrap
distribution is centered at the original sample’s mean price per night,
$155\.87\. Because we are resampling from the
original sample repeatedly, we see that the bootstrap distribution is centered
at the original sample’s mean value (unlike the sampling distribution of the
sample mean, which is centered at the population parameter value).
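As a quick numeric check of these two claims, we can compare the centers and spreads of the two distributions directly; this sketch assumes the `sample_estimates` data frame of sample means from earlier in the chapter is still in your session.
```
# centers should differ (population mean vs. original sample mean),
# while the spreads should be similar
tibble(
  distribution = c("Sampling distribution", "Bootstrap distribution"),
  center = c(mean(sample_estimates$mean_price), mean(boot20000_means$mean_price)),
  spread = c(sd(sample_estimates$mean_price), sd(boot20000_means$mean_price))
)
```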
Figure
[10\.15](inference.html#fig:11-bootstrapping7) summarizes the bootstrapping process.
The idea here is that we can use this distribution of bootstrap sample means to
approximate the sampling distribution of the sample means when we only have one
sample. Since the bootstrap distribution pretty well approximates the sampling
distribution spread, we can use the bootstrap spread to help us develop a
plausible range for our population parameter along with our estimate!
Figure 10\.15: Summary of bootstrapping process.
### 10\.5\.3 Using the bootstrap to calculate a plausible range
Now that we have constructed our bootstrap distribution, let’s use it to create
an approximate 95% percentile bootstrap confidence interval.
A **confidence interval** is a range of plausible values for the population parameter. We will
find the range of values covering the middle 95% of the bootstrap
distribution, giving us a 95% confidence interval. You may be wondering, what
does “95% confidence” mean? If we took 100 random samples and calculated 100
95% confidence intervals, then about 95% of the ranges would capture the
population parameter’s value. Note there’s nothing special about 95%. We
could have used other levels, such as 90% or 99%. There is a balance between
our level of confidence and precision. A higher confidence level corresponds to
a wider range of the interval, and a lower confidence level corresponds to a
narrower range. Therefore the level we choose is based on what chance we are
willing to take of being wrong based on the implications of being wrong for our
application. In general, we choose confidence levels to be comfortable with our
level of uncertainty but not so strict that the interval is unhelpful. For
instance, if our decision impacts human life and the implications of being
wrong are deadly, we may want to be very confident and choose a higher
confidence level.
To calculate a 95% percentile bootstrap confidence interval, we will do the following:
1. Arrange the observations in the bootstrap distribution in ascending order.
2. Find the value such that 2\.5% of observations fall below it (the 2\.5% percentile). Use that value as the lower bound of the interval.
3. Find the value such that 97\.5% of observations fall below it (the 97\.5% percentile). Use that value as the upper bound of the interval.
To do this in R, we can use the `quantile()` function. Quantiles are expressed in proportions rather than
percentages, so the 2\.5th and 97\.5th percentiles would be the 0\.025 and 0\.975 quantiles, respectively.
```
bounds <- boot20000_means |>
select(mean_price) |>
pull() |>
quantile(c(0.025, 0.975))
bounds
```
```
## 2.5% 97.5%
## 119 204
```
Our interval, $119\.28 to $203\.63, captures
the middle 95% of the sample mean prices in the bootstrap distribution. We can
visualize the interval on our distribution in Figure
[10\.16](inference.html#fig:11-bootstrapping9).
Figure 10\.16: Distribution of the bootstrap sample means with percentile lower and upper bounds.
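There is nothing special about the 0\.025 and 0\.975 quantiles. To see the trade\-off between confidence level and interval width described earlier, we could compute, say, a 90% and a 99% interval from the same bootstrap distribution; a quick sketch reusing `boot20000_means`:
```
# narrower 90% interval: middle 90% of the bootstrap sample means
boot20000_means |>
  select(mean_price) |>
  pull() |>
  quantile(c(0.05, 0.95))

# wider 99% interval: middle 99% of the bootstrap sample means
boot20000_means |>
  select(mean_price) |>
  pull() |>
  quantile(c(0.005, 0.995))
```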
To finish our estimation of the population parameter, we would report the point
estimate and our confidence interval’s lower and upper bounds. Here the sample
mean price per night of 40 Airbnb listings was
$155\.80, and we are 95% “confident” that the true
population mean price per night for all Airbnb listings in Vancouver is between
$119\.28 and $203\.63\.
Notice that our interval does indeed contain the true
population mean value, $154\.51! However, in
practice, we would not know whether our interval captured the population
parameter or not because we usually only have a single sample, not the entire
population. This is the best we can do when we only have one sample!
This chapter is only the beginning of the journey into statistical inference.
We can extend the concepts learned here to do much more than report point
estimates and confidence intervals, such as testing for real differences
between populations, tests for associations between variables, and so much
more. We have just scratched the surface of statistical inference; however, the
material presented here will serve as the foundation for more advanced
statistical techniques you may learn about in the future!
10\.6 Exercises
---------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the two “Statistical inference” rows.
You can launch an interactive version of each worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of each worksheet by clicking “view worksheet.”
If you instead decide to download the worksheets and run them on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
10\.7 Additional resources
--------------------------
* Chapters 7 to 10 of *Modern Dive* ([Ismay and Kim 2020](#ref-moderndive)) provide a great
next step in learning about inference. In particular, Chapters 7 and 8 cover
sampling and bootstrapping using `tidyverse` and `infer` in a slightly more
in\-depth manner than the present chapter. Chapters 9 and 10 take the next step
beyond the scope of this chapter and begin to provide some of the initial
mathematical underpinnings of inference and more advanced applications of the
concept of inference in testing hypotheses and performing regression. This
material offers a great starting point for getting more into the technical side
of statistics.
* Chapters 4 to 7 of *OpenIntro Statistics* ([Diez, Çetinkaya\-Rundel, and Barr 2019](#ref-openintro))
provide a good next step after *Modern Dive*. Although it is still certainly
an introductory text, things get a bit more mathematical here. Depending on
your background, you may actually want to start going through Chapters 1 to 3
first, where you will learn some fundamental concepts in probability theory.
Although it may seem like a diversion, probability theory is *the language of
statistics*; if you have a solid grasp of probability, more advanced statistics
will come naturally to you!
10\.1 Overview
--------------
A typical data analysis task in practice is to draw conclusions about some
unknown aspect of a population of interest based on observed data sampled from
that population; we typically do not get data on the *entire* population. Data
analysis questions regarding how summaries, patterns, trends, or relationships
in a data set extend to the wider population are called *inferential
questions*. This chapter will start with the fundamental ideas of sampling from
populations and then introduce two common techniques in statistical inference:
*point estimation* and *interval estimation*.
10\.2 Chapter learning objectives
---------------------------------
By the end of the chapter, readers will be able to do the following:
* Describe real\-world examples of questions that can be answered with statistical inference.
* Define common population parameters (e.g., mean, proportion, standard deviation) that are often estimated using sampled data, and estimate these from a sample.
* Define the following statistical sampling terms: population, sample, population parameter, point estimate, and sampling distribution.
* Explain the difference between a population parameter and a sample point estimate.
* Use R to draw random samples from a finite population.
* Use R to create a sampling distribution from a finite population.
* Describe how sample size influences the sampling distribution.
* Define bootstrapping.
* Use R to create a bootstrap distribution to approximate a sampling distribution.
* Contrast the bootstrap and sampling distributions.
10\.3 Why do we need sampling?
------------------------------
We often need to understand how quantities we observe in a subset
of data relate to the same quantities in the broader population. For example, suppose a
retailer is considering selling iPhone accessories, and they want to estimate
how big the market might be. Additionally, they want to strategize how they can
market their products on North American college and university campuses. This
retailer might formulate the following question:
*What proportion of all undergraduate students in North America own an iPhone?*
In the above question, we are interested in making a conclusion about *all*
undergraduate students in North America; this is referred to as the **population**. In
general, the population is the complete collection of individuals or cases we
are interested in studying. Further, in the above question, we are interested
in computing a quantity—the proportion of iPhone owners—based on
the entire population. This proportion is referred to as a **population parameter**. In
general, a population parameter is a numerical characteristic of the entire
population. To compute this number in the example above, we would need to ask
every single undergraduate in North America whether they own an iPhone. In
practice, directly computing population parameters is often time\-consuming and
costly, and sometimes impossible.
A more practical approach would be to make measurements for a **sample**, i.e., a
subset of individuals collected from the population. We can then compute a
**sample estimate**—a numerical characteristic of the sample—that
estimates the population parameter. For example, suppose we randomly selected
ten undergraduate students across North America (the sample) and computed the
proportion of those students who own an iPhone (the sample estimate). In that
case, we might suspect that proportion is a reasonable estimate of the
proportion of students who own an iPhone in the entire population. Figure
[10\.1](inference.html#fig:11-population-vs-sample) illustrates this process.
In general, the process of using a sample to make a conclusion about the
broader population from which it is taken is referred to as **statistical inference**.
Figure 10\.1: The process of using a sample from a broader population to obtain a point estimate of a population parameter. In this case, a sample of 10 individuals yielded 6 who own an iPhone, resulting in an estimated population proportion of 60% iPhone owners. The actual population proportion in this example illustration is 53\.8%.
Note that proportions are not the *only* kind of population parameter we might
be interested in. For example, suppose an undergraduate student studying at the University
of British Columbia in Canada is looking for an apartment
to rent. They need to create a budget, so they want to know about
studio apartment rental prices in Vancouver. This student might
formulate the question:
*What is the average price per month of studio apartment rentals in Vancouver?*
In this case, the population consists of all studio apartment rentals in Vancouver, and the
population parameter is the *average price per month*. Here we used the average
as a measure of the center to describe the “typical value” of studio apartment
rental prices. But even within this one example, we could also be interested in
many other population parameters. For instance, we know that not every studio
apartment rental in Vancouver will have the same price per month. The student
might be interested in how much monthly prices vary and want to find a measure
of the rentals’ spread (or variability), such as the standard deviation. Or perhaps the
student might be interested in the fraction of studio apartment rentals that
cost more than $1000 per month. The question we want to answer will help us
determine the parameter we want to estimate. If we were somehow able to observe
the whole population of studio apartment rental offerings in Vancouver, we
could compute each of these numbers exactly; therefore, these are all
population parameters. There are many kinds of observations and population
parameters that you will run into in practice, but in this chapter, we will
focus on two settings:
1. Using categorical observations to estimate the proportion of a category
2. Using quantitative observations to estimate the average (or mean)
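To make the rental example concrete: if we somehow had the entire population in a data frame (say, a hypothetical `rentals` tibble with a `price` column giving each studio’s monthly rent), each of these population parameters would be a single `summarize` away. This is only an illustrative sketch; no such data set is used in this chapter.
```
# hypothetical `rentals` data frame: one row per studio rental, `price` in dollars per month
rentals |>
  summarize(
    mean_price = mean(price),            # average monthly price
    sd_price = sd(price),                # spread (standard deviation)
    prop_over_1000 = mean(price > 1000)  # fraction of rentals over $1000 per month
  )
```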
10\.4 Sampling distributions
----------------------------
### 10\.4\.1 Sampling distributions for proportions
We will look at an example using data from
[Inside Airbnb](http://insideairbnb.com/) ([Cox n.d.](#ref-insideairbnb)). Airbnb is an online
marketplace for arranging vacation rentals and places to stay. The data set
contains listings for Vancouver, Canada, in September 2020\. Our data
includes an ID number, neighborhood, type of room, the number of people the
rental accommodates, number of bathrooms, bedrooms, beds, and the price per
night.
```
library(tidyverse)
set.seed(123)
airbnb <- read_csv("data/listings.csv")
airbnb
```
```
## # A tibble: 4,594 × 8
## id neighbourhood room_type accommodates bathrooms bedrooms beds price
## <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl> <dbl>
## 1 1 Downtown Entire h… 5 2 baths 2 2 150
## 2 2 Downtown Eastside Entire h… 4 2 baths 2 2 132
## 3 3 West End Entire h… 2 1 bath 1 1 85
## 4 4 Kensington-Cedar… Entire h… 2 1 bath 1 0 146
## 5 5 Kensington-Cedar… Entire h… 4 1 bath 1 2 110
## 6 6 Hastings-Sunrise Entire h… 4 1 bath 2 3 195
## 7 7 Renfrew-Collingw… Entire h… 8 3 baths 4 5 130
## 8 8 Mount Pleasant Entire h… 2 1 bath 1 1 94
## 9 9 Grandview-Woodla… Private … 2 1 privat… 1 1 79
## 10 10 West End Private … 2 1 privat… 1 1 75
## # ℹ 4,584 more rows
```
Suppose the city of Vancouver wants information about Airbnb rentals to help
plan city bylaws, and they want to know how many Airbnb places are listed as
entire homes and apartments (rather than as private or shared rooms). Therefore
they may want to estimate the true proportion of all Airbnb listings where the
“type of place” is listed as “entire home or apartment.” Of course, we usually
do not have access to the true population, but here let’s imagine (for learning
purposes) that our data set represents the population of all Airbnb rental
listings in Vancouver, Canada. We can find the proportion of listings where
`room_type == "Entire home/apt"`.
```
airbnb |>
summarize(
n = sum(room_type == "Entire home/apt"),
proportion = sum(room_type == "Entire home/apt") / nrow(airbnb)
)
```
```
## # A tibble: 1 × 2
## n proportion
## <int> <dbl>
## 1 3434 0.747
```
We can see that the proportion of `Entire home/apt` listings in
the data set is 0\.747\. This
value, 0\.747, is the population parameter. Remember, this
parameter value is usually unknown in real data analysis problems, as it is
typically not possible to make measurements for an entire population.
Instead, perhaps we can approximate it with a small subset of data!
To investigate this idea, let’s try randomly selecting 40 listings (*i.e.,* taking a random sample of
size 40 from our population), and computing the proportion for that sample.
We will use the `rep_sample_n` function from the `infer`
package to take the sample. The arguments of `rep_sample_n` are (1\) the data frame to
sample from, and (2\) the size of the sample to take.
```
library(infer)
sample_1 <- rep_sample_n(tbl = airbnb, size = 40)
airbnb_sample_1 <- summarize(sample_1,
n = sum(room_type == "Entire home/apt"),
prop = sum(room_type == "Entire home/apt") / 40
)
airbnb_sample_1
```
```
## # A tibble: 1 × 3
## replicate n prop
## <int> <int> <dbl>
## 1 1 28 0.7
```
Here we see that the proportion of entire home/apartment listings in this
random sample is 0\.7\. Wow—that’s close to our
true population value! But remember, we computed the proportion using a random sample of size 40\.
This has two consequences. First, this value is only an *estimate*, i.e., our best guess
of our population parameter using this sample.
Given that we are estimating a single value here, we often
refer to it as a **point estimate**. Second, since the sample was random,
if we were to take *another* random sample of size 40 and compute the proportion for that sample,
we would not get the same answer:
```
sample_2 <- rep_sample_n(airbnb, size = 40)
airbnb_sample_2 <- summarize(sample_2,
n = sum(room_type == "Entire home/apt"),
prop = sum(room_type == "Entire home/apt") / 40
)
airbnb_sample_2
```
```
## # A tibble: 1 × 3
## replicate n prop
## <int> <int> <dbl>
## 1 1 35 0.875
```
Confirmed! We get a different value for our estimate this time.
That means that our point estimate might be unreliable. Indeed, estimates vary from sample to
sample due to **sampling variability**. But just how much
should we expect the estimates of our random samples to vary?
Or in other words, how much can we really trust our point estimate based on a single sample?
To understand this, we will simulate many samples (much more than just two)
of size 40 from our population of listings and calculate the proportion of
entire home/apartment listings in each sample. This simulation will create
many sample proportions, which we can visualize using a histogram. The
distribution of the estimate for all possible samples of a given size (which we
commonly refer to as \\(n\\)) from a population is called
a **sampling distribution**. The sampling distribution will help us see how much we would
expect our sample proportions from this population to vary for samples of size 40\.
We again use the `rep_sample_n` function to take samples of size 40 from our
population of Airbnb listings. But this time we set the `reps` argument to 20,000 to specify
that we want to take 20,000 samples of size 40\.
```
samples <- rep_sample_n(airbnb, size = 40, reps = 20000)
samples
```
```
## # A tibble: 800,000 × 9
## # Groups: replicate [20,000]
## replicate id neighbourhood room_type accommodates bathrooms bedrooms beds
## <int> <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl>
## 1 1 4403 Downtown Entire h… 2 1 bath 1 1
## 2 1 902 Kensington-C… Private … 2 1 shared… 1 1
## 3 1 3808 Hastings-Sun… Entire h… 6 1.5 baths 1 3
## 4 1 561 Kensington-C… Entire h… 6 1 bath 2 2
## 5 1 3385 Mount Pleasa… Entire h… 4 1 bath 1 1
## 6 1 4232 Shaughnessy Entire h… 6 1.5 baths 2 2
## 7 1 1169 Downtown Entire h… 3 1 bath 1 1
## 8 1 959 Kitsilano Private … 1 1.5 shar… 1 1
## 9 1 2171 Downtown Entire h… 2 1 bath 1 1
## 10 1 1258 Dunbar South… Entire h… 4 1 bath 2 2
## # ℹ 799,990 more rows
## # ℹ 1 more variable: price <dbl>
```
Notice that the column `replicate` indicates the replicate, or sample, to which
each listing belongs. Above, since by default R only prints the first few rows,
it looks like all of the listings have `replicate` set to 1\. But you can
check the last few entries using the `tail()` function to verify that
we indeed created 20,000 samples (or replicates).
```
tail(samples)
```
```
## # A tibble: 6 × 9
## # Groups: replicate [1]
## replicate id neighbourhood room_type accommodates bathrooms bedrooms beds
## <int> <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl>
## 1 20000 3414 Marpole Entire h… 4 1 bath 2 2
## 2 20000 1974 Hastings-Sunr… Private … 2 1 shared… 1 1
## 3 20000 1846 Riley Park Entire h… 4 1 bath 2 3
## 4 20000 862 Downtown Entire h… 5 2 baths 2 2
## 5 20000 3295 Victoria-Fras… Private … 2 1 shared… 1 1
## 6 20000 997 Dunbar Southl… Private … 1 1.5 shar… 1 1
## # ℹ 1 more variable: price <dbl>
```
Now that we have obtained the samples, we need to compute the
proportion of entire home/apartment listings in each sample.
We first group the data by the `replicate` variable—to group the
set of listings in each sample together—and then use `summarize`
to compute the proportion in each sample.
We print both the first and last few entries of the resulting data frame
below to show that we end up with 20,000 point estimates, one for each of the 20,000 samples.
```
sample_estimates <- samples |>
group_by(replicate) |>
summarize(sample_proportion = sum(room_type == "Entire home/apt") / 40)
sample_estimates
```
```
## # A tibble: 20,000 × 2
## replicate sample_proportion
## <int> <dbl>
## 1 1 0.85
## 2 2 0.85
## 3 3 0.65
## 4 4 0.7
## 5 5 0.75
## 6 6 0.725
## 7 7 0.775
## 8 8 0.775
## 9 9 0.7
## 10 10 0.675
## # ℹ 19,990 more rows
```
```
tail(sample_estimates)
```
```
## # A tibble: 6 × 2
## replicate sample_proportion
## <int> <dbl>
## 1 19995 0.75
## 2 19996 0.675
## 3 19997 0.625
## 4 19998 0.75
## 5 19999 0.875
## 6 20000 0.65
```
We can now visualize the sampling distribution of sample proportions
for samples of size 40 using a histogram in Figure [10\.2](inference.html#fig:11-example-proportions7). Keep in mind: in the real world,
we don’t have access to the full population. So we
can’t take many samples and can’t actually construct or visualize the sampling distribution.
We have created this particular example
such that we *do* have access to the full population, which lets us visualize the
sampling distribution directly for learning purposes.
```
sampling_distribution <- ggplot(sample_estimates, aes(x = sample_proportion)) +
geom_histogram(color = "lightgrey", bins = 12) +
labs(x = "Sample proportions", y = "Count") +
theme(text = element_text(size = 12))
sampling_distribution
```
Figure 10\.2: Sampling distribution of the sample proportion for sample size 40\.
The sampling distribution in Figure [10\.2](inference.html#fig:11-example-proportions7) appears
to be bell\-shaped, is roughly symmetric, and has one peak. It is centered
around 0\.7 and the sample proportions
range from about 0\.4 to about
1\. In fact, we can
calculate the mean of the sample proportions.
```
sample_estimates |>
summarize(mean_proportion = mean(sample_proportion))
```
```
## # A tibble: 1 × 1
## mean_proportion
## <dbl>
## 1 0.747
```
We notice that the sample proportions are centered around the population
proportion value, 0\.747! In general, the mean of
the sampling distribution should be equal to the population proportion.
This is great news because it means that the sample proportion is neither an overestimate nor an
underestimate of the population proportion.
In other words, if you were to take many samples as we did above, there is no tendency
towards over or underestimating the population proportion.
In a real data analysis setting where you just have access to your single
sample, this implies that you would suspect that your sample point estimate is
roughly equally likely to be above or below the true population proportion.
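Although the text relies on the histogram to convey spread, we could also quantify the sample\-to\-sample variability numerically; a small sketch using the `sample_estimates` data frame from above:
```
# standard deviation of the 20,000 sample proportions: a numerical summary of
# how much a single sample proportion (for samples of size 40) tends to vary
sample_estimates |>
  summarize(sd_proportion = sd(sample_proportion))
```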
### 10\.4\.2 Sampling distributions for means
In the previous section, our variable of interest—`room_type`—was
*categorical*, and the population parameter was a proportion. As mentioned in
the chapter introduction, there are many choices of the population parameter
for each type of variable. What if we wanted to infer something about a
population of *quantitative* variables instead? For instance, a traveler
visiting Vancouver, Canada may wish to estimate the
population *mean* (or average) price per night of Airbnb listings. Knowing
the average could help them tell whether a particular listing is overpriced.
We can visualize the population distribution of the price per night with a histogram.
```
population_distribution <- ggplot(airbnb, aes(x = price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Price per night (dollars)", y = "Count") +
theme(text = element_text(size = 12))
population_distribution
```
Figure 10\.3: Population distribution of price per night (dollars) for all Airbnb listings in Vancouver, Canada.
In Figure [10\.3](inference.html#fig:11-example-means2), we see that the population distribution
has one peak. It is also skewed (i.e., is not symmetric): most of the listings are
less than $250 per night, but a small number of listings cost much more,
creating a long tail on the histogram’s right side.
Along with visualizing the population, we can calculate the population mean,
the average price per night for all the Airbnb listings.
```
population_parameters <- airbnb |>
summarize(mean_price = mean(price))
population_parameters
```
```
## # A tibble: 1 × 1
## mean_price
## <dbl>
## 1 154.51
```
The price per night of all Airbnb rentals in Vancouver, BC
is $154\.51, on average. This value is our
population parameter since we are calculating it using the population data.
Now suppose we did not have access to the population data (which is usually the
case!), yet we wanted to estimate the mean price per night. We could answer
this question by taking a random sample of as many Airbnb listings as our time
and resources allow. Let’s say we could do this for 40 listings. What would
such a sample look like? Let’s take advantage of the fact that we do have
access to the population data and simulate taking one random sample of 40
listings in R, again using `rep_sample_n`.
```
one_sample <- airbnb |>
rep_sample_n(40)
```
We can create a histogram to visualize the distribution of observations in the
sample (Figure [10\.4](inference.html#fig:11-example-means-sample-hist)), and calculate the mean
of our sample.
```
sample_distribution <- ggplot(one_sample, aes(price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Price per night (dollars)", y = "Count") +
theme(text = element_text(size = 12))
sample_distribution
```
Figure 10\.4: Distribution of price per night (dollars) for sample of 40 Airbnb listings.
```
estimates <- one_sample |>
summarize(mean_price = mean(price))
estimates
```
```
## # A tibble: 1 × 2
## replicate mean_price
## <int> <dbl>
## 1 1 155.80
```
The average value of the sample of size 40
is $155\.80\. This
number is a point estimate for the mean of the full population.
Recall that the population mean was
$154\.51\. So our estimate was fairly close to
the population parameter: the mean was about
0\.8%
off. Note that we usually cannot compute the estimate’s accuracy in practice
since we do not have access to the population parameter; if we did, we wouldn’t
need to estimate it!
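In this constructed example, where we do have the population, that relative error can be computed directly. A quick sketch using the `estimates` and `population_parameters` objects from above (the 0\.8% quoted in the text is this quantity, rounded):
```
# percent difference between the sample point estimate and the population mean
abs(estimates$mean_price - population_parameters$mean_price) /
  population_parameters$mean_price * 100
```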
Also, recall from the previous section that the point estimate can vary; if we
took another random sample from the population, our estimate’s value might
change. So then, did we just get lucky with our point estimate above? How much
does our estimate vary across different samples of size 40 in this example?
Again, since we have access to the population, we can take many samples and
plot the sampling distribution of sample means for samples of size 40 to
get a sense for this variation. In this case, we’ll use 20,000 samples of size
40\.
```
samples <- rep_sample_n(airbnb, size = 40, reps = 20000)
samples
```
```
## # A tibble: 800,000 × 9
## # Groups: replicate [20,000]
## replicate id neighbourhood room_type accommodates bathrooms bedrooms beds
## <int> <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl>
## 1 1 1177 Downtown Entire h… 4 2 baths 2 2
## 2 1 4063 Downtown Entire h… 2 1 bath 1 1
## 3 1 2641 Kitsilano Private … 1 1 shared… 1 1
## 4 1 1941 West End Entire h… 2 1 bath 1 1
## 5 1 2431 Mount Pleasa… Entire h… 2 1 bath 1 1
## 6 1 1871 Arbutus Ridge Entire h… 4 1 bath 2 2
## 7 1 2557 Marpole Private … 3 1 privat… 1 2
## 8 1 3534 Downtown Entire h… 2 1 bath 1 1
## 9 1 4379 Downtown Entire h… 4 1 bath 1 0
## 10 1 2161 Downtown Entire h… 4 2 baths 2 2
## # ℹ 799,990 more rows
## # ℹ 1 more variable: price <dbl>
```
Now we can calculate the sample mean for each replicate and plot the sampling
distribution of sample means for samples of size 40\.
```
sample_estimates <- samples |>
group_by(replicate) |>
summarize(mean_price = mean(price))
sample_estimates
```
```
## # A tibble: 20,000 × 2
## replicate mean_price
## <int> <dbl>
## 1 1 160.06
## 2 2 173.18
## 3 3 131.20
## 4 4 176.96
## 5 5 125.65
## 6 6 148.84
## 7 7 134.82
## 8 8 137.26
## 9 9 166.11
## 10 10 157.81
## # ℹ 19,990 more rows
```
```
sampling_distribution_40 <- ggplot(sample_estimates, aes(x = mean_price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Sample mean price per night (dollars)", y = "Count") +
theme(text = element_text(size = 12))
sampling_distribution_40
```
Figure 10\.5: Sampling distribution of the sample means for sample size of 40\.
In Figure [10\.5](inference.html#fig:11-example-means4), the sampling distribution of the mean
has one peak and is bell\-shaped. Most of the estimates are between
about $140 and
$170; but there is
a good fraction of cases outside this range (i.e., where the point estimate was
not close to the population parameter). So it does indeed look like we were
quite lucky when we estimated the population mean with only
0\.8% error.
Let’s visualize the population distribution, distribution of the sample, and
the sampling distribution on one plot to compare them in Figure
[10\.6](inference.html#fig:11-example-means5). Comparing these three distributions, the centers
of the distributions are all around the same price (around $150\). The original
population distribution has a long right tail, and the sample distribution has
a similar shape to that of the population distribution. However, the sampling
distribution is not shaped like the population or sample distribution. Instead,
it has a bell shape, and it has a lower spread than the population or sample
distributions. The sample means vary less than the individual observations
because there will be some high values and some small values in any random
sample, which will keep the average from being too extreme.
Figure 10\.6: Comparison of population distribution, sample distribution, and sampling distribution.
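We can also back up the visual comparison with numbers. A quick sketch comparing the spread of individual prices to the spread of the sample means, using the `airbnb` and `sample_estimates` data frames from above:
```
# spread of individual prices in the population
airbnb |>
  summarize(sd_price = sd(price))

# spread of the 20,000 sample means: much smaller than the spread of prices
sample_estimates |>
  summarize(sd_mean_price = sd(mean_price))
```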
Given that there is quite a bit of variation in the sampling distribution of
the sample mean—i.e., the point estimate that we obtain is not very
reliable—is there any way to improve the estimate? One way to improve a
point estimate is to take a *larger* sample. To illustrate what effect this
has, we will take many samples of size 20, 50, 100, and 500, and plot the
sampling distribution of the sample mean. We indicate the mean of the sampling
distribution with a vertical dashed line.
Figure 10\.7: Comparison of sampling distributions, with mean highlighted as a vertical dashed line.
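The code behind Figure 10\.7 is not shown in the text. One rough way to build such a comparison, assuming the `airbnb` data frame from above and using fewer replicates than the text to keep the runtime modest, is sketched below.
```
# sampling distributions of the sample mean for several sample sizes
sample_sizes <- c(20, 50, 100, 500)

sampling_by_size <- map_dfr(sample_sizes, function(n) {
  rep_sample_n(airbnb, size = n, reps = 2000) |>
    group_by(replicate) |>
    summarize(mean_price = mean(price)) |>
    mutate(sample_size = n)
})

# mean of the sample means for each sample size (the dashed vertical lines)
means_by_size <- sampling_by_size |>
  group_by(sample_size) |>
  summarize(mean_of_means = mean(mean_price))

ggplot(sampling_by_size, aes(x = mean_price)) +
  geom_histogram(color = "lightgrey") +
  geom_vline(
    data = means_by_size,
    aes(xintercept = mean_of_means),
    linetype = "dashed"
  ) +
  facet_wrap(~sample_size, ncol = 1) +
  labs(x = "Sample mean price per night (dollars)", y = "Count") +
  theme(text = element_text(size = 12))
```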
Based on the visualization in Figure [10\.7](inference.html#fig:11-example-means7), three points
about the sample mean become clear. First, the mean of the sample mean (across
samples) is equal to the population mean. In other words, the sampling
distribution is centered at the population mean. Second, increasing the size of
the sample decreases the spread (i.e., the variability) of the sampling
distribution. Therefore, a larger sample size results in a more reliable point
estimate of the population parameter. And third, the distribution of the sample
mean is roughly bell\-shaped.
> **Note:** You might notice that in the `n = 20` case in Figure [10\.7](inference.html#fig:11-example-means7),
> the distribution is not *quite* bell\-shaped. There is a bit of skew towards the right!
> You might also notice that in the `n = 50` case and larger, that skew seems to disappear.
> In general, the sampling distribution—for both means and proportions—only
> becomes bell\-shaped *once the sample size is large enough*.
> How large is “large enough?” Unfortunately, it depends entirely on the problem at hand. But
> as a rule of thumb, often a sample size of at least 20 will suffice.
### 10\.4\.3 Summary
1. A point estimate is a single value computed using a sample from a population (e.g., a mean or proportion).
2. The sampling distribution of an estimate is the distribution of the estimate for all possible samples of a fixed size from the same population.
3. The shape of the sampling distribution is usually bell\-shaped with one peak and centered at the population mean or proportion.
4. The spread of the sampling distribution is related to the sample size. As the sample size increases, the spread of the sampling distribution decreases.
10\.5 Bootstrapping
-------------------
### 10\.5\.1 Overview
*Why all this emphasis on sampling distributions?*
We saw in the previous section that we could compute a **point estimate** of a
population parameter using a sample of observations from the population. And
since we constructed examples where we had access to the population, we could
evaluate how accurate the estimate was, and even get a sense of how much the
estimate would vary for different samples from the population. But in real
data analysis settings, we usually have *just one sample* from our population
and do not have access to the population itself. Therefore we cannot construct
the sampling distribution as we did in the previous section. And as we saw, our
sample estimate’s value can vary significantly from the population parameter.
So reporting the point estimate from a single sample alone may not be enough.
We also need to report some notion of *uncertainty* in the value of the point
estimate.
Unfortunately, we cannot construct the exact sampling distribution without
full access to the population. However, if we could somehow *approximate* what
the sampling distribution would look like for a sample, we could
use that approximation to then report how uncertain our sample
point estimate is (as we did above with the *exact* sampling
distribution). There are several methods to accomplish this; in this book, we
will use the *bootstrap*. We will discuss **interval estimation** and
construct
**confidence intervals** using just a single sample from a population. A
confidence interval is a range of plausible values for our population parameter.
Here is the key idea. First, if you take a big enough sample, it *looks like*
the population. Notice the histograms’ shapes for samples of different sizes
taken from the population in Figure [10\.8](inference.html#fig:11-example-bootstrapping0). We
see that the sample’s distribution looks like that of the population for a
large enough sample.
Figure 10\.8: Comparison of samples of different sizes from the population.
In the previous section, we took many samples of the same size *from our
population* to get a sense of the variability of a sample estimate. But if our
sample is big enough that it looks like our population, we can pretend that our
sample *is* the population, and take more samples (with replacement) of the
same size from it instead! This very clever technique is
called **the bootstrap**. Note that by taking many samples from our single, observed
sample, we do not obtain the true sampling distribution, but rather an
approximation that we call **the bootstrap distribution**.
> **Note:** We must sample *with* replacement when using the bootstrap.
> Otherwise, if we had a sample of size \\(n\\), and obtained a sample from it of
> size \\(n\\) *without* replacement, it would just return our original sample!
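To see why, here is a tiny base R illustration with a made\-up vector of five values (not part of the Airbnb example):
```
x <- c(3, 1, 4, 1, 5)

# without replacement: every element appears exactly once, so this is just
# a reshuffling of the original sample and its mean never changes
sample(x, size = length(x), replace = FALSE)

# with replacement: some values repeat and others are left out, so the
# bootstrap sample (and its mean) generally differs from the original
sample(x, size = length(x), replace = TRUE)
```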
This section will explore how to create a bootstrap distribution from a single
sample using R. The process is visualized in Figure [10\.9](inference.html#fig:11-intro-bootstrap-image).
For a sample of size \\(n\\), you would do the following:
1. Randomly select an observation from the original sample, which was drawn from the population.
2. Record the observation’s value.
3. Replace that observation.
4. Repeat steps 1–3 (sampling *with* replacement) until you have \\(n\\) observations, which form a bootstrap sample.
5. Calculate the bootstrap point estimate (e.g., mean, median, proportion, slope, etc.) of the \\(n\\) observations in your bootstrap sample.
6. Repeat steps 1–5 many times to create a distribution of point estimates (the bootstrap distribution).
7. Calculate the plausible range of values around our observed point estimate.
Figure 10\.9: Overview of the bootstrap process.
### 10\.5\.2 Bootstrapping in R
Let’s continue working with our Airbnb example to illustrate how we might create
and use a bootstrap distribution using just a single sample from the population.
Once again, suppose we are
interested in estimating the population mean price per night of all Airbnb
listings in Vancouver, Canada, using a single sample of size 40\.
Recall our point estimate was $155\.80\. The
histogram of prices in the sample is displayed in Figure [10\.10](inference.html#fig:11-bootstrapping1).
```
one_sample
```
```
## # A tibble: 40 × 8
## id neighbourhood room_type accommodates bathrooms bedrooms beds price
## <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl> <dbl>
## 1 3928 Marpole Private … 2 1 shared… 1 1 58
## 2 3013 Kensington-Cedar… Entire h… 4 1 bath 2 2 112
## 3 3156 Downtown Entire h… 6 2 baths 2 2 151
## 4 3873 Dunbar Southlands Private … 5 1 bath 2 3 700
## 5 3632 Downtown Eastside Entire h… 6 2 baths 3 3 157
## 6 296 Kitsilano Private … 1 1 shared… 1 1 100
## 7 3514 West End Entire h… 2 1 bath 1 1 110
## 8 594 Sunset Entire h… 5 1 bath 3 3 105
## 9 3305 Dunbar Southlands Entire h… 4 1 bath 1 2 196
## 10 938 Downtown Entire h… 7 2 baths 2 3 269
## # ℹ 30 more rows
```
```
one_sample_dist <- ggplot(one_sample, aes(price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Price per night (dollars)", y = "Count") +
theme(text = element_text(size = 12))
one_sample_dist
```
Figure 10\.10: Histogram of price per night (dollars) for one sample of size 40\.
The histogram for the sample is skewed, with a few observations out to the right. The
mean of the sample is $155\.80\.
Remember, in practice, we usually only have this one sample from the population. So
this sample and estimate are the only data we can work with.
We now perform steps 1–5 listed above to generate a single bootstrap
sample in R and calculate a point estimate from that bootstrap sample. We will
use the `rep_sample_n` function as we did when we were
creating our sampling distribution. But critically, note that we now
pass `one_sample`—our single sample of size 40—as the first argument.
And since we need to sample with replacement,
we change the argument for `replace` from its default value of `FALSE` to `TRUE`.
```
boot1 <- one_sample |>
rep_sample_n(size = 40, replace = TRUE, reps = 1)
boot1_dist <- ggplot(boot1, aes(price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Price per night (dollars)", y = "Count") +
theme(text = element_text(size = 12))
boot1_dist
```
Figure 10\.11: Bootstrap distribution.
```
summarize(boot1, mean_price = mean(price))
```
```
## # A tibble: 1 × 2
## replicate mean_price
## <int> <dbl>
## 1 1 164.20
```
Notice in Figure [10\.11](inference.html#fig:11-bootstrapping3) that the histogram of our bootstrap sample
has a similar shape to the original sample histogram. Though the shapes of
the distributions are similar, they are not identical. You’ll also notice that
the original sample mean and the bootstrap sample mean differ. How might that
happen? Remember that we are sampling with replacement from the original
sample, so we typically do not end up with exactly the same sample values. We are *pretending*
that our single sample is close to the population, and we are trying to
mimic drawing another sample from the population by drawing one from our original
sample.
Let’s now take 20,000 bootstrap samples from the original sample (`one_sample`)
using `rep_sample_n`, and calculate the means for
each of those replicates. Recall that this assumes that `one_sample` *looks like*
our original population; but since we do not have access to the population itself,
this is often the best we can do.
```
boot20000 <- one_sample |>
rep_sample_n(size = 40, replace = TRUE, reps = 20000)
boot20000
```
```
## # A tibble: 800,000 × 9
## # Groups: replicate [20,000]
## replicate id neighbourhood room_type accommodates bathrooms bedrooms beds
## <int> <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl>
## 1 1 1276 Hastings-Sun… Entire h… 2 1 bath 1 1
## 2 1 3235 Hastings-Sun… Entire h… 2 1 bath 1 1
## 3 1 1301 Oakridge Entire h… 12 2 baths 2 12
## 4 1 118 Grandview-Wo… Entire h… 4 1 bath 2 2
## 5 1 2550 Downtown Eas… Private … 2 1.5 shar… 1 1
## 6 1 1006 Grandview-Wo… Entire h… 5 1 bath 3 4
## 7 1 3632 Downtown Eas… Entire h… 6 2 baths 3 3
## 8 1 1923 West End Entire h… 4 2 baths 2 2
## 9 1 3873 Dunbar South… Private … 5 1 bath 2 3
## 10 1 2349 Kerrisdale Private … 2 1 shared… 1 1
## # ℹ 799,990 more rows
## # ℹ 1 more variable: price <dbl>
```
```
tail(boot20000)
```
```
## # A tibble: 6 × 9
## # Groups: replicate [1]
## replicate id neighbourhood room_type accommodates bathrooms bedrooms beds
## <int> <dbl> <chr> <chr> <dbl> <chr> <dbl> <dbl>
## 1 20000 1949 Kitsilano Entire h… 3 1 bath 1 1
## 2 20000 1025 Kensington-Ce… Entire h… 3 1 bath 1 1
## 3 20000 3013 Kensington-Ce… Entire h… 4 1 bath 2 2
## 4 20000 2868 Downtown Entire h… 2 1 bath 1 1
## 5 20000 3156 Downtown Entire h… 6 2 baths 2 2
## 6 20000 1923 West End Entire h… 4 2 baths 2 2
## # ℹ 1 more variable: price <dbl>
```
Let’s take a look at the histograms of the first six replicates of our bootstrap samples.
```
six_bootstrap_samples <- boot20000 |>
filter(replicate <= 6)
ggplot(six_bootstrap_samples, aes(price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Price per night (dollars)", y = "Count") +
facet_wrap(~replicate) +
theme(text = element_text(size = 12))
```
Figure 10\.12: Histograms of the first six replicates of the bootstrap samples.
We see in Figure [10\.12](inference.html#fig:11-bootstrapping-six-bootstrap-samples) how the
bootstrap samples differ. We can also calculate the sample mean for each of
these six replicates.
```
six_bootstrap_samples |>
group_by(replicate) |>
summarize(mean_price = mean(price))
```
```
## # A tibble: 6 × 2
## replicate mean_price
## <int> <dbl>
## 1 1 177.2
## 2 2 131.45
## 3 3 179.10
## 4 4 171.35
## 5 5 191.32
## 6 6 170.05
```
We can see that the bootstrap sample distributions and the sample means are
different. They are different because we are sampling *with replacement*. We
will now calculate point estimates for our 20,000 bootstrap samples and
generate a bootstrap distribution of our point estimates. The bootstrap
distribution (Figure [10\.13](inference.html#fig:11-bootstrapping5)) suggests how we might expect
our point estimate to behave if we took another sample.
```
boot20000_means <- boot20000 |>
group_by(replicate) |>
summarize(mean_price = mean(price))
boot20000_means
```
```
## # A tibble: 20,000 × 2
## replicate mean_price
## <int> <dbl>
## 1 1 177.2
## 2 2 131.45
## 3 3 179.10
## 4 4 171.35
## 5 5 191.32
## 6 6 170.05
## 7 7 178.83
## 8 8 154.78
## 9 9 163.85
## 10 10 209.28
## # ℹ 19,990 more rows
```
```
tail(boot20000_means)
```
```
## # A tibble: 6 × 2
## replicate mean_price
## <int> <dbl>
## 1 19995 130.40
## 2 19996 189.18
## 3 19997 168.98
## 4 19998 168.23
## 5 19999 155.73
## 6 20000 136.95
```
```
boot_est_dist <- ggplot(boot20000_means, aes(x = mean_price)) +
geom_histogram(color = "lightgrey") +
labs(x = "Sample mean price per night (dollars)", y = "Count") +
theme(text = element_text(size = 12))
boot_est_dist
```
Figure 10\.13: Distribution of the bootstrap sample means.
Let’s compare the bootstrap distribution—which we construct by taking many samples from our original sample of size 40—with
the true sampling distribution—which corresponds to taking many samples from the population.
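The comparison figure below was produced outside the code shown in this chapter. As a rough sketch of how such a side\-by\-side comparison could be drawn, the code below assumes a data frame of sample means taken from the population, here called `sampling_dist_means` with a column `mean_price` (a name of our choosing, computed as in the previous section), and reuses `boot20000_means` from above; it also uses the `patchwork` package, which is our choice for placing the two plots side by side.

```
library(patchwork)  # assumed here only for arranging the two plots side by side

sampling_dist_plot <- ggplot(sampling_dist_means, aes(x = mean_price)) +
  geom_histogram(color = "lightgrey") +
  labs(x = "Sample mean price per night (dollars)", y = "Count",
       title = "Sampling distribution") +
  theme(text = element_text(size = 12))

bootstrap_dist_plot <- ggplot(boot20000_means, aes(x = mean_price)) +
  geom_histogram(color = "lightgrey") +
  labs(x = "Sample mean price per night (dollars)", y = "Count",
       title = "Bootstrap distribution") +
  theme(text = element_text(size = 12))

# combine the two histograms into one figure for comparison
sampling_dist_plot + bootstrap_dist_plot
```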
Figure 10\.14: Comparison of the distribution of the bootstrap sample means and sampling distribution.
There are two essential points that we can take away from Figure
[10\.14](inference.html#fig:11-bootstrapping6). First, the shape and spread of the true sampling
distribution and the bootstrap distribution are similar; the bootstrap
distribution lets us get a sense of the point estimate’s variability. The
second important point is that the means of these two distributions are
different. The sampling distribution is centered at
$154\.51, the population mean value. However, the bootstrap
distribution is centered at approximately $155\.87, which is essentially the
original sample’s mean price per night ($155\.80). Because we are resampling from the
original sample repeatedly, we see that the bootstrap distribution is centered
at the original sample’s mean value (unlike the sampling distribution of the
sample mean, which is centered at the population parameter value).
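As a quick check on this centering claim, the mean of the bootstrap sample means can be computed directly from `boot20000_means`; it should be close to the $155\.87 reported above (the exact value depends on the random samples drawn):

```
summarize(boot20000_means, mean_of_means = mean(mean_price))
```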
Figure
[10\.15](inference.html#fig:11-bootstrapping7) summarizes the bootstrapping process.
The idea here is that we can use this distribution of bootstrap sample means to
approximate the sampling distribution of the sample means when we only have one
sample. Since the bootstrap distribution pretty well approximates the sampling
distribution spread, we can use the bootstrap spread to help us develop a
plausible range for our population parameter along with our estimate!
Figure 10\.15: Summary of bootstrapping process.
### 10\.5\.3 Using the bootstrap to calculate a plausible range
Now that we have constructed our bootstrap distribution, let’s use it to create
an approximate 95% percentile bootstrap confidence interval.
A **confidence interval** is a range of plausible values for the population parameter. We will
find the range of values covering the middle 95% of the bootstrap
distribution, giving us a 95% confidence interval. You may be wondering, what
does “95% confidence” mean? If we took 100 random samples and calculated 100
95% confidence intervals, then about 95% of the ranges would capture the
population parameter’s value. Note there’s nothing special about 95%. We
could have used other levels, such as 90% or 99%. There is a balance between
our level of confidence and precision. A higher confidence level corresponds to
a wider range of the interval, and a lower confidence level corresponds to a
narrower range. Therefore the level we choose is based on what chance of being
wrong we are willing to accept, given the implications of being wrong for our
application. In general, we choose confidence levels to be comfortable with our
level of uncertainty but not so strict that the interval is unhelpful. For
instance, if our decision impacts human life and the implications of being
wrong are deadly, we may want to be very confident and choose a higher
confidence level.
To calculate a 95% percentile bootstrap confidence interval, we will do the following:
1. Arrange the observations in the bootstrap distribution in ascending order.
2. Find the value such that 2\.5% of observations fall below it (the 2\.5% percentile). Use that value as the lower bound of the interval.
3. Find the value such that 97\.5% of observations fall below it (the 97\.5% percentile). Use that value as the upper bound of the interval.
To do this in R, we can use the `quantile()` function. Quantiles are expressed in proportions rather than
percentages, so the 2\.5th and 97\.5th percentiles would be the 0\.025 and 0\.975 quantiles, respectively.
```
bounds <- boot20000_means |>
select(mean_price) |>
pull() |>
quantile(c(0.025, 0.975))
bounds
```
```
## 2.5% 97.5%
## 119 204
```
Our interval, $119\.28 to $203\.63, captures
the middle 95% of the sample mean prices in the bootstrap distribution. We can
visualize the interval on our distribution in Figure
[10\.16](inference.html#fig:11-bootstrapping9).
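One way such a figure could be drawn is to add the interval bounds to the bootstrap distribution plot built earlier, reusing `boot_est_dist` and `bounds` from above (a sketch; the styling of the actual figure may differ):

```
# overlay the 2.5% and 97.5% percentile bounds as vertical dashed lines
boot_est_dist +
  geom_vline(xintercept = bounds, linetype = "dashed")
```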
Figure 10\.16: Distribution of the bootstrap sample means with percentile lower and upper bounds.
To finish our estimation of the population parameter, we would report the point
estimate and our confidence interval’s lower and upper bounds. Here the sample
mean price per night of 40 Airbnb listings was
$155\.80, and we are 95% “confident” that the true
population mean price per night for all Airbnb listings in Vancouver is between
$119\.28 and $203\.63\.
Notice that our interval does indeed contain the true
population mean value, $154\.51! However, in
practice, we would not know whether our interval captured the population
parameter or not because we usually only have a single sample, not the entire
population. This is the best we can do when we only have one sample!
This chapter is only the beginning of the journey into statistical inference.
We can extend the concepts learned here to do much more than report point
estimates and confidence intervals, such as testing for real differences
between populations, tests for associations between variables, and so much
more. We have just scratched the surface of statistical inference; however, the
material presented here will serve as the foundation for more advanced
statistical techniques you may learn about in the future!
10\.6 Exercises
---------------
Practice exercises for the material covered in this chapter
can be found in the accompanying
[worksheets repository](https://worksheets.datasciencebook.ca)
in the two “Statistical inference” rows.
You can launch an interactive version of each worksheet in your browser by clicking the “launch binder” button.
You can also preview a non\-interactive version of each worksheet by clicking “view worksheet.”
If you instead decide to download the worksheets and run them on your own machine,
make sure to follow the instructions for computer setup
found in Chapter [13](setup.html#setup). This will ensure that the automated feedback
and guidance that the worksheets provide will function as intended.
10\.7 Additional resources
--------------------------
* Chapters 7 to 10 of *Modern Dive* ([Ismay and Kim 2020](#ref-moderndive)) provide a great
next step in learning about inference. In particular, Chapters 7 and 8 cover
sampling and bootstrapping using `tidyverse` and `infer` in a slightly more
in\-depth manner than the present chapter. Chapters 9 and 10 take the next step
beyond the scope of this chapter and begin to provide some of the initial
mathematical underpinnings of inference and more advanced applications of the
concept of inference in testing hypotheses and performing regression. This
material offers a great starting point for getting more into the technical side
of statistics.
* Chapters 4 to 7 of *OpenIntro Statistics* ([Diez, Çetinkaya\-Rundel, and Barr 2019](#ref-openintro))
provide a good next step after *Modern Dive*. Although it is still certainly
an introductory text, things get a bit more mathematical here. Depending on
your background, you may actually want to start going through Chapters 1 to 3
first, where you will learn some fundamental concepts in probability theory.
Although it may seem like a diversion, probability theory is *the language of
statistics*; if you have a solid grasp of probability, more advanced statistics
will come naturally to you!
| Data Science |
ubc-dsci.github.io | https://ubc-dsci.github.io/introduction-to-datascience/setup.html |
Chapter 13 Setting up your computer
===================================
13\.1 Overview
--------------
In this chapter, you’ll learn how to set up the software needed to follow along
with this book on your own computer. Given that installation instructions can
vary based on computer setup, we provide instructions for
multiple operating systems (Ubuntu Linux, MacOS, and Windows).
Although the instructions in this chapter will likely work on many systems,
we have specifically verified that they work on a computer that:
* runs Windows 10 Home, MacOS 13 Ventura, or Ubuntu 22\.04,
* uses a 64\-bit CPU,
* has a connection to the internet,
* uses English as the default language.
13\.2 Chapter learning objectives
---------------------------------
By the end of the chapter, readers will be able to do the following:
* Download the worksheets that accompany this book.
* Install the Docker virtualization engine.
* Edit and run the worksheets using JupyterLab running inside a Docker container.
* Install Git, JupyterLab Desktop, and R packages.
* Edit and run the worksheets using JupyterLab Desktop.
13\.3 Obtaining the worksheets for this book
--------------------------------------------
The worksheets containing exercises for this book
are online at <https://worksheets.datasciencebook.ca>.
The worksheets can be launched directly from that page using the Binder links in the rightmost
column of the table. This is the easiest way to access the worksheets, but note that you will not
be able to save your work and return to it again later.
In order to save your progress, you will need to download the worksheets to your own computer and
work on them locally. You can download the worksheets as a compressed zip file
using [the link at the top of the page](https://github.com/UBC-DSCI/data-science-a-first-intro-worksheets/archive/refs/heads/main.zip).
Once you unzip the downloaded file, you will have a folder containing all of the Jupyter notebook worksheets
accompanying this book. See Chapter [11](jupyter.html#jupyter) for
instructions on working with Jupyter notebooks.
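If you prefer to do the download and extraction programmatically, the same steps can be carried out from an R console using base R functions (a sketch; the file and folder names here are our own choices):

```
# download the zipped worksheets and extract them into a local folder
download.file(
  "https://github.com/UBC-DSCI/data-science-a-first-intro-worksheets/archive/refs/heads/main.zip",
  destfile = "worksheets.zip",
  mode = "wb"
)
unzip("worksheets.zip", exdir = "worksheets")
```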
13\.4 Working with Docker
-------------------------
Once you have downloaded the worksheets, you will next need to install and run
the software required to work on Jupyter notebooks on your own computer. Doing
this setup manually can be quite tricky, as it involves quite a few different
software packages, not to mention getting the right versions of
everything—the worksheets and autograder tests may not work unless all the versions are
exactly right! To keep things simple, we instead recommend that you install
[Docker](https://docker.com). Docker lets you run your Jupyter notebooks inside
a pre\-built *container* that comes with precisely the right versions of
all software packages needed to run the worksheets that come with this book.
> **Note:** A *container* is a virtual user space within your computer.
> Within the container, you can run software in isolation without interfering with the
> other software that already exists on your machine. In this book, we use
> a container to run a specific version of the R programming
> language, as well as other necessary packages. The container ensures that
> the worksheets function correctly, even if you have a different version of R
> installed on your computer—or even if you haven’t installed R at all!
### 13\.4\.1 Windows
**Installation** To install Docker on Windows,
visit [the online Docker documentation](https://docs.docker.com/desktop/install/windows-install/),
and download the `Docker Desktop Installer.exe` file. Double\-click the file to open the installer
and follow the instructions on the installation wizard, choosing **WSL\-2** instead of **Hyper\-V** when prompted.
> **Note:** Occasionally, when you first run Docker on Windows, you will encounter an error message. Some common errors you may see:
>
>
> * If you need to update WSL, you can enter `cmd.exe` in the Start menu to run the command line. Type `wsl --update` to update WSL.
> * If the admin account on your computer is different to your user account, you must add the user to the “docker\-users” group.
> Run Computer Management as an administrator and navigate to `Local Users` and `Groups -> Groups -> docker-users`. Right\-click to
> add the user to the group. Log out and log back in for the changes to take effect.
> * If you need to enable virtualization, you will need to edit your BIOS. Restart your computer, and enter the BIOS using the hotkey
> (usually Delete, Esc, and/or one of the F\# keys). Look for an “Advanced” menu, and under your CPU settings, set the “Virtualization” option
> to “enabled”. Then save the changes and reboot your machine. If you are not familiar with BIOS editing, you may want to find an expert
> to help you with this, as editing the BIOS can be dangerous. Detailed instructions for doing this are beyond the scope of this book.
**Running JupyterLab** Run Docker Desktop. Once it is running, you need to download and run the
Docker *image* that we have made available for the worksheets (an *image* is like a “snapshot” of a
computer with all the right packages pre\-installed). You only need to do this step one time; the image will remain
available the next time you run Docker Desktop.
In the Docker Desktop search bar, enter `ubcdsci/r-dsci-100`, as this is
the name of the image. You will see the `ubcdsci/r-dsci-100` image in the list (Figure [13\.1](setup.html#fig:docker-desktop-search)),
and “latest” in the Tag drop down menu. We need to change “latest” to the right image version before proceeding.
To find the right tag, open
the [`Dockerfile` in the worksheets repository](https://raw.githubusercontent.com/UBC-DSCI/data-science-a-first-intro-worksheets/main/Dockerfile),
and look for the line `FROM ubcdsci/r-dsci-100:` followed by the tag consisting of a sequence of numbers and letters.
Back in Docker Desktop, in the “Tag” drop down menu, click that tag to select the correct image version. Then click
the “Pull” button to download the image.
Figure 13\.1: The Docker Desktop search window. Make sure to click the Tag drop down menu and find the right version of the image before clicking the Pull button to download it.
Once the image is done downloading, click the “Images” button on the left side
of the Docker Desktop window (Figure [13\.2](setup.html#fig:docker-desktop-images)). You
will see the recently downloaded image listed there under the “Local” tab.
Figure 13\.2: The Docker Desktop images tab.
To start up a *container* using that image, click the play button beside the
image. This will open the run configuration menu (Figure [13\.3](setup.html#fig:docker-desktop-runconfig)).
Expand the “Optional settings” drop down menu. In the “Host port” textbox, enter
`8888`. In the “Volumes” section, click the “Host path” box and navigate to the
folder where your Jupyter worksheets are stored. In the “Container path” text
box, enter `/home/jovyan/work`. Then click the “Run” button to start the
container.
Figure 13\.3: The Docker Desktop container run configuration menu.
After clicking the “Run” button, you will see a terminal. The terminal will then print
some text as the Docker container starts. Once the text stops scrolling, find the
URL in the terminal that starts
with `http://127.0.0.1:8888` (highlighted by the red box in Figure [13\.4](setup.html#fig:docker-desktop-url)), and paste it
into your browser to start JupyterLab.
Figure 13\.4: The terminal text after running the Docker container. The red box indicates the URL that you should paste into your browser to open JupyterLab.
When you are done working, make sure to shut down and remove the container by
clicking the red trash can symbol (in the top right corner of Figure [13\.4](setup.html#fig:docker-desktop-url)).
You will not be able to start the container again until you do so.
More information on installing and running
Docker on Windows, as well as troubleshooting tips, can
be found in [the online Docker documentation](https://docs.docker.com/desktop/install/windows-install/).
### 13\.4\.2 MacOS
**Installation** To install Docker on MacOS,
visit [the online Docker documentation](https://docs.docker.com/desktop/install/mac-install/), and
download the `Docker.dmg` installation file that is appropriate for your
computer. To know which installer is right for your machine, you need to know
whether your computer has an Intel processor (older machines) or an
Apple processor (newer machines); the [Apple support page](https://support.apple.com/en-ca/HT211814) has
information to help you determine which processor you have. Once downloaded, double\-click
the file to open the installer, then drag the Docker icon to the Applications folder.
Double\-click the icon in the Applications folder to start Docker. In the installation
window, use the recommended settings.
**Running JupyterLab** Run Docker Desktop. Once it is running, follow the
instructions above in the Windows section on *Running JupyterLab* (the user
interface is the same). More information on installing and running Docker on
MacOS, as well as troubleshooting tips, can be
found in [the online Docker documentation](https://docs.docker.com/desktop/install/mac-install/).
### 13\.4\.3 Ubuntu
**Installation** To install Docker on Ubuntu, open the terminal and enter the following five commands.
```
sudo apt update
sudo apt install ca-certificates curl gnupg
curl -fsSL https://get.docker.com -o get-docker.sh
sudo chmod u+x get-docker.sh
sudo sh get-docker.sh
```
**Running JupyterLab** First, open
the [`Dockerfile` in the worksheets repository](https://raw.githubusercontent.com/UBC-DSCI/data-science-a-first-intro-worksheets/main/Dockerfile),
and look for the line `FROM ubcdsci/r-dsci-100:` followed by a tag consisting of a sequence of numbers and letters.
Then in the terminal, navigate to the directory where you want to run JupyterLab, and run
the following command, replacing `TAG` with the *tag* you found earlier.
```
docker run --rm -v $(pwd):/home/jovyan/work -p 8888:8888 ubcdsci/r-dsci-100:TAG jupyter lab
```
The terminal will then print some text as the Docker container starts. Once the text stops scrolling, find the
URL in your terminal that starts with `http://127.0.0.1:8888` (highlighted by the
red box in Figure [13\.5](setup.html#fig:ubuntu-docker-terminal)), and paste it into your browser to start JupyterLab.
More information on installing and running Docker on Ubuntu, as well as troubleshooting tips, can be found in
[the online Docker documentation](https://docs.docker.com/engine/install/ubuntu/).
Figure 13\.5: The terminal text after running the Docker container in Ubuntu. The red box indicates the URL that you should paste into your browser to open JupyterLab.
13\.5 Working with JupyterLab Desktop
-------------------------------------
You can also run the worksheets accompanying this book on your computer
using [JupyterLab Desktop](https://github.com/jupyterlab/jupyterlab-desktop).
The advantage of JupyterLab Desktop over Docker is that it can be easier to install;
Docker can sometimes run into some fairly technical issues (especially on Windows computers)
that require expert troubleshooting. The downside of JupyterLab Desktop is that there is a (very) small chance that
you may not end up with the right versions of all the R packages needed for the worksheets. Docker, on the other hand,
*guarantees* that the worksheets will work exactly as intended.
In this section, we will cover how to install JupyterLab Desktop,
Git and the JupyterLab Git extension (for version control, as discussed in Chapter [12](version-control.html#version-control)), and
all of the R packages needed to run
the code in this book.
### 13\.5\.1 Windows
**Installation** First, we will install Git for version control.
Go to [the Git download page](https://git-scm.com/download/win) and
download the Windows version of Git. Once the download has finished, run the installer and accept
the default configuration for all pages.
Next, visit the [“Installation” section of the JupyterLab Desktop homepage](https://github.com/jupyterlab/jupyterlab-desktop#installation).
Download the `JupyterLab-Setup-Windows.exe` installer file for Windows.
Double\-click the installer to run it, accepting the default settings.
Run JupyterLab Desktop by clicking the icon on your desktop.
**Configuring JupyterLab Desktop**
Next, in the JupyterLab Desktop graphical interface that appears (Figure [13\.6](setup.html#fig:setup-jlab-gui)),
you will see text at the bottom saying “Python environment not found”. Click “Install using the bundled installer”
to set up the environment.
Figure 13\.6: The JupyterLab Desktop graphical user interface.
Next, we need to add the JupyterLab Git extension (so that
we can use version control directly from within JupyterLab Desktop),
the IRkernel (to enable the R programming language),
and various R software packages. Click “New session…” in the JupyterLab Desktop
user interface, then scroll to the bottom, and click “Terminal” under the “Other” heading (Figure [13\.7](setup.html#fig:setup-jlab-gui-2)).
Figure 13\.7: A JupyterLab Desktop session, showing the Terminal option at the bottom.
In this terminal, run the following commands:
```
pip install --upgrade jupyterlab-git
conda env update --file https://raw.githubusercontent.com/UBC-DSCI/data-science-a-first-intro-worksheets/main/environment.yml
```
The second command installs the specific R and package versions specified in
the `environment.yml` file found in
[the worksheets repository](https://worksheets.datasciencebook.ca).
We will always keep the versions in the `environment.yml` file updated
so that they are compatible with the exercise worksheets that accompany the book.
Once all of the software installation is complete, it is a good idea to restart
JupyterLab Desktop entirely before you proceed with your data analysis.
This will ensure all the software and settings you put in place are
correctly set up and ready for use.
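Once everything is installed, one optional way to confirm that the R packages from `environment.yml` are available is to run a short check in a new notebook cell or R console (the packages named here are just examples used elsewhere in this book):

```
# confirm that a couple of key packages load and report their installed versions
library(tidyverse)
library(infer)
packageVersion("tidyverse")
packageVersion("infer")
```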
### 13\.5\.2 MacOS
**Installation** First, we will install Git for version control.
Open the terminal ([how\-to video](https://youtu.be/5AJbWEWwnbY))
and type the following command:
```
xcode-select --install
```
Next, visit the [“Installation” section of the JupyterLab Desktop homepage](https://github.com/jupyterlab/jupyterlab-desktop#installation).
Download the `JupyterLab-Setup-MacOS-x64.dmg` or `JupyterLab-Setup-MacOS-arm64.dmg` installer file.
To know which installer is right for your machine, you need to know
whether your computer has an Intel processor (older machines) or an
Apple processor (newer machines); the [Apple support page](https://support.apple.com/en-ca/HT211814) has
information to help you determine which processor you have.
Once downloaded, double\-click the file to open the installer, then drag
the JupyterLab Desktop icon to the Applications folder. Double\-click
the icon in the Applications folder to start JupyterLab Desktop.
**Configuring JupyterLab Desktop** From this point onward, with JupyterLab Desktop running,
follow the instructions in the Windows section on *Configuring JupyterLab Desktop* to set up the
environment, install the JupyterLab Git extension, and install
the various R software packages needed for the worksheets.
### 13\.5\.3 Ubuntu
**Installation** First, we will install Git for version control.
Open the terminal and type the following commands:
```
sudo apt update
sudo apt install git
```
Next, visit the [“Installation” section of the JupyterLab Desktop homepage](https://github.com/jupyterlab/jupyterlab-desktop#installation).
Download the `JupyterLab-Setup-Debian.deb` installer file for Ubuntu/Debian.
Open a terminal, navigate to where the installer file was downloaded, and run the command
```
sudo dpkg -i JupyterLab-Setup-Debian.deb
```
Run JupyterLab Desktop using the command
```
jlab
```
**Configuring JupyterLab Desktop** From this point onward, with JupyterLab Desktop running,
follow the instructions in the Windows section on *Configuring JupyterLab Desktop* to set up the
environment, install the JupyterLab Git extension, and install
the various R software packages needed for the worksheets.
13\.1 Overview
--------------
In this chapter, you’ll learn how to set up the software needed to follow along
with this book on your own computer. Given that installation instructions can
vary based on computer setup, we provide instructions for
multiple operating systems (Ubuntu Linux, MacOS, and Windows).
Although the instructions in this chapter will likely work on many systems,
we have specifically verified that they work on a computer that:
* runs Windows 10 Home, MacOS 13 Ventura, or Ubuntu 22\.04,
* uses a 64\-bit CPU,
* has a connection to the internet,
* uses English as the default language.
13\.2 Chapter learning objectives
---------------------------------
By the end of the chapter, readers will be able to do the following:
* Download the worksheets that accompany this book.
* Install the Docker virtualization engine.
* Edit and run the worksheets using JupyterLab running inside a Docker container.
* Install Git, JupyterLab Desktop, and R packages.
* Edit and run the worksheets using JupyterLab Desktop.
13\.3 Obtaining the worksheets for this book
--------------------------------------------
The worksheets containing exercises for this book
are online at <https://worksheets.datasciencebook.ca>.
The worksheets can be launched directly from that page using the Binder links in the rightmost
column of the table. This is the easiest way to access the worksheets, but note that you will not
be able to save your work and return to it again later.
In order to save your progress, you will need to download the worksheets to your own computer and
work on them locally. You can download the worksheets as a compressed zip file
using [the link at the top of the page](https://github.com/UBC-DSCI/data-science-a-first-intro-worksheets/archive/refs/heads/main.zip).
Once you unzip the downloaded file, you will have a folder containing all of the Jupyter notebook worksheets
accompanying this book. See Chapter [11](jupyter.html#jupyter) for
instructions on working with Jupyter notebooks.
13\.4 Working with Docker
-------------------------
Once you have downloaded the worksheets, you will next need to install and run
the software required to work on Jupyter notebooks on your own computer. Doing
this setup manually can be quite tricky, as it involves quite a few different
software packages, not to mention getting the right versions of
everything—the worksheets and autograder tests may not work unless all the versions are
exactly right! To keep things simple, we instead recommend that you install
[Docker](https://docker.com). Docker lets you run your Jupyter notebooks inside
a pre\-built *container* that comes with precisely the right versions of
all software packages needed run the worksheets that come with this book.
> **Note:** A *container* is a virtual user space within your computer.
> Within the container, you can run software in isolation without interfering with the
> other software that already exists on your machine. In this book, we use
> a container to run a specific version of the R programming
> language, as well as other necessary packages. The container ensures that
> the worksheets function correctly, even if you have a different version of R
> installed on your computer—or even if you haven’t installed R at all!
### 13\.4\.1 Windows
**Installation** To install Docker on Windows,
visit [the online Docker documentation](https://docs.docker.com/desktop/install/windows-install/),
and download the `Docker Desktop Installer.exe` file. Double\-click the file to open the installer
and follow the instructions on the installation wizard, choosing **WSL\-2** instead of **Hyper\-V** when prompted.
> **Note:** Occasionally, when you first run Docker on Windows, you will encounter an error message. Some common errors you may see:
>
>
> * If you need to update WSL, you can enter `cmd.exe` in the Start menu to run the command line. Type `wsl --update` to update WSL.
> * If the admin account on your computer is different to your user account, you must add the user to the “docker\-users” group.
> Run Computer Management as an administrator and navigate to `Local Users` and `Groups -> Groups -> docker-users`. Right\-click to
> add the user to the group. Log out and log back in for the changes to take effect.
> * If you need to enable virtualization, you will need to edit your BIOS. Restart your computer, and enter the BIOS using the hotkey
> (usually Delete, Esc, and/or one of the F\# keys). Look for an “Advanced” menu, and under your CPU settings, set the “Virtualization” option
> to “enabled”. Then save the changes and reboot your machine. If you are not familiar with BIOS editing, you may want to find an expert
> to help you with this, as editing the BIOS can be dangerous. Detailed instructions for doing this are beyond the scope of this book.
**Running JupyterLab** Run Docker Desktop. Once it is running, you need to download and run the
Docker *image* that we have made available for the worksheets (an *image* is like a “snapshot” of a
computer with all the right packages pre\-installed). You only need to do this step one time; the image will remain
the next time you run Docker Desktop.
In the Docker Desktop search bar, enter `ubcdsci/r-dsci-100`, as this is
the name of the image. You will see the `ubcdsci/r-dsci-100` image in the list (Figure [13\.1](setup.html#fig:docker-desktop-search)),
and “latest” in the Tag drop down menu. We need to change “latest” to the right image version before proceeding.
To find the right tag, open
the [`Dockerfile` in the worksheets repository](https://raw.githubusercontent.com/UBC-DSCI/data-science-a-first-intro-worksheets/main/Dockerfile),
and look for the line `FROM ubcdsci/r-dsci-100:` followed by the tag consisting of a sequence of numbers and letters.
Back in Docker Desktop, in the “Tag” drop down menu, click that tag to select the correct image version. Then click
the “Pull” button to download the image.
Figure 13\.1: The Docker Desktop search window. Make sure to click the Tag drop down menu and find the right version of the image before clicking the Pull button to download it.
Once the image is done downloading, click the “Images” button on the left side
of the Docker Desktop window (Figure [13\.2](setup.html#fig:docker-desktop-images)). You
will see the recently downloaded image listed there under the “Local” tab.
Figure 13\.2: The Docker Desktop images tab.
To start up a *container* using that image, click the play button beside the
image. This will open the run configuration menu (Figure [13\.3](setup.html#fig:docker-desktop-runconfig)).
Expand the “Optional settings” drop down menu. In the “Host port” textbox, enter
`8888`. In the “Volumes” section, click the “Host path” box and navigate to the
folder where your Jupyter worksheets are stored. In the “Container path” text
box, enter `/home/jovyan/work`. Then click the “Run” button to start the
container.
Figure 13\.3: The Docker Desktop container run configuration menu.
After clicking the “Run” button, you will see a terminal. The terminal will then print
some text as the Docker container starts. Once the text stops scrolling, find the
URL in the terminal that starts
with `http://127.0.0.1:8888` (highlighted by the red box in Figure [13\.4](setup.html#fig:docker-desktop-url)), and paste it
into your browser to start JupyterLab.
Figure 13\.4: The terminal text after running the Docker container. The red box indicates the URL that you should paste into your browser to open JupyterLab.
When you are done working, make sure to shut down and remove the container by
clicking the red trash can symbol (in the top right corner of Figure [13\.4](setup.html#fig:docker-desktop-url)).
You will not be able to start the container again until you do so.
More information on installing and running
Docker on Windows, as well as troubleshooting tips, can
be found in [the online Docker documentation](https://docs.docker.com/desktop/install/windows-install/).
### 13\.4\.2 MacOS
**Installation** To install Docker on MacOS,
visit [the online Docker documentation](https://docs.docker.com/desktop/install/mac-install/), and
download the `Docker.dmg` installation file that is appropriate for your
computer. To know which installer is right for your machine, you need to know
whether your computer has an Intel processor (older machines) or an
Apple processor (newer machines); the [Apple support page](https://support.apple.com/en-ca/HT211814) has
information to help you determine which processor you have. Once downloaded, double\-click
the file to open the installer, then drag the Docker icon to the Applications folder.
Double\-click the icon in the Applications folder to start Docker. In the installation
window, use the recommended settings.
**Running JupyterLab** Run Docker Desktop. Once it is running, follow the
instructions above in the Windows section on *Running JupyterLab* (the user
interface is the same). More information on installing and running Docker on
MacOS, as well as troubleshooting tips, can be
found in [the online Docker documentation](https://docs.docker.com/desktop/install/mac-install/).
### 13\.4\.3 Ubuntu
**Installation** To install Docker on Ubuntu, open the terminal and enter the following five commands.
```
sudo apt update
sudo apt install ca-certificates curl gnupg
curl -fsSL https://get.docker.com -o get-docker.sh
sudo chmod u+x get-docker.sh
sudo sh get-docker.sh
```
**Running JupyterLab** First, open
the [`Dockerfile` in the worksheets repository](https://raw.githubusercontent.com/UBC-DSCI/data-science-a-first-intro-worksheets/main/Dockerfile),
and look for the line `FROM ubcdsci/r-dsci-100:` followed by a tag consisting of a sequence of numbers and letters.
Then in the terminal, navigate to the directory where you want to run JupyterLab, and run
the following command, replacing `TAG` with the *tag* you found earlier.
```
docker run --rm -v $(pwd):/home/jovyan/work -p 8888:8888 ubcdsci/r-dsci-100:TAG jupyter lab
```
The terminal will then print some text as the Docker container starts. Once the text stops scrolling, find the
URL in your terminal that starts with `http://127.0.0.1:8888` (highlighted by the
red box in Figure [13\.5](setup.html#fig:ubuntu-docker-terminal)), and paste it into your browser to start JupyterLab.
More information on installing and running Docker on Ubuntu, as well as troubleshooting tips, can be found in
[the online Docker documentation](https://docs.docker.com/engine/install/ubuntu/).
Figure 13\.5: The terminal text after running the Docker container in Ubuntu. The red box indicates the URL that you should paste into your browser to open JupyterLab.
### 13\.4\.1 Windows
**Installation** To install Docker on Windows,
visit [the online Docker documentation](https://docs.docker.com/desktop/install/windows-install/),
and download the `Docker Desktop Installer.exe` file. Double\-click the file to open the installer
and follow the instructions on the installation wizard, choosing **WSL\-2** instead of **Hyper\-V** when prompted.
> **Note:** Occasionally, when you first run Docker on Windows, you will encounter an error message. Some common errors you may see:
>
>
> * If you need to update WSL, you can enter `cmd.exe` in the Start menu to run the command line. Type `wsl --update` to update WSL.
> * If the admin account on your computer is different to your user account, you must add the user to the “docker\-users” group.
> Run Computer Management as an administrator and navigate to `Local Users` and `Groups -> Groups -> docker-users`. Right\-click to
> add the user to the group. Log out and log back in for the changes to take effect.
> * If you need to enable virtualization, you will need to edit your BIOS. Restart your computer, and enter the BIOS using the hotkey
> (usually Delete, Esc, and/or one of the F\# keys). Look for an “Advanced” menu, and under your CPU settings, set the “Virtualization” option
> to “enabled”. Then save the changes and reboot your machine. If you are not familiar with BIOS editing, you may want to find an expert
> to help you with this, as editing the BIOS can be dangerous. Detailed instructions for doing this are beyond the scope of this book.
**Running JupyterLab** Run Docker Desktop. Once it is running, you need to download and run the
Docker *image* that we have made available for the worksheets (an *image* is like a “snapshot” of a
computer with all the right packages pre\-installed). You only need to do this step one time; the image will remain
the next time you run Docker Desktop.
In the Docker Desktop search bar, enter `ubcdsci/r-dsci-100`, as this is
the name of the image. You will see the `ubcdsci/r-dsci-100` image in the list (Figure [13\.1](setup.html#fig:docker-desktop-search)),
and “latest” in the Tag drop down menu. We need to change “latest” to the right image version before proceeding.
To find the right tag, open
the [`Dockerfile` in the worksheets repository](https://raw.githubusercontent.com/UBC-DSCI/data-science-a-first-intro-worksheets/main/Dockerfile),
and look for the line `FROM ubcdsci/r-dsci-100:` followed by the tag consisting of a sequence of numbers and letters.
Back in Docker Desktop, in the “Tag” drop down menu, click that tag to select the correct image version. Then click
the “Pull” button to download the image.
Figure 13\.1: The Docker Desktop search window. Make sure to click the Tag drop down menu and find the right version of the image before clicking the Pull button to download it.
Once the image is done downloading, click the “Images” button on the left side
of the Docker Desktop window (Figure [13\.2](setup.html#fig:docker-desktop-images)). You
will see the recently downloaded image listed there under the “Local” tab.
Figure 13\.2: The Docker Desktop images tab.
To start up a *container* using that image, click the play button beside the
image. This will open the run configuration menu (Figure [13\.3](setup.html#fig:docker-desktop-runconfig)).
Expand the “Optional settings” drop down menu. In the “Host port” textbox, enter
`8888`. In the “Volumes” section, click the “Host path” box and navigate to the
folder where your Jupyter worksheets are stored. In the “Container path” text
box, enter `/home/jovyan/work`. Then click the “Run” button to start the
container.
Figure 13\.3: The Docker Desktop container run configuration menu.
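For reference, the run configuration above (host port `8888`, your worksheets folder mounted at `/home/jovyan/work`) is equivalent to the `docker run` command used in the Ubuntu section below. Here is a sketch, assuming a Unix\-style shell, with `TAG` and the host path standing in for your own values:

```
# equivalent to the Docker Desktop run configuration: publish port 8888 and mount the worksheets folder
docker run --rm -v /path/to/worksheets:/home/jovyan/work -p 8888:8888 ubcdsci/r-dsci-100:TAG jupyter lab
```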
After clicking the “Run” button, you will see a terminal. The terminal will then print
some text as the Docker container starts. Once the text stops scrolling, find the
URL in the terminal that starts
with `http://127.0.0.1:8888` (highlighted by the red box in Figure [13\.4](setup.html#fig:docker-desktop-url)), and paste it
into your browser to start JupyterLab.
Figure 13\.4: The terminal text after running the Docker container. The red box indicates the URL that you should paste into your browser to open JupyterLab.
When you are done working, make sure to shut down and remove the container by
clicking the red trash can symbol (in the top right corner of Figure [13\.4](setup.html#fig:docker-desktop-url)).
You will not be able to start the container again until you do so.
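If you prefer the command line, you can also stop and remove the container from a terminal; the sketch below uses standard Docker CLI commands, where `CONTAINER_ID` stands in for the ID shown by `docker ps`.

```
docker ps                # list running containers and their IDs
docker stop CONTAINER_ID # stop the running container
docker rm CONTAINER_ID   # remove it so you can start a fresh one later
```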
More information on installing and running
Docker on Windows, as well as troubleshooting tips, can
be found in [the online Docker documentation](https://docs.docker.com/desktop/install/windows-install/).
### 13\.4\.2 MacOS
**Installation** To install Docker on MacOS,
visit [the online Docker documentation](https://docs.docker.com/desktop/install/mac-install/), and
download the `Docker.dmg` installation file that is appropriate for your
computer. To know which installer is right for your machine, you need to know
whether your computer has an Intel processor (older machines) or an
Apple processor (newer machines); the [Apple support page](https://support.apple.com/en-ca/HT211814) has
information to help you determine which processor you have. Once downloaded, double\-click
the file to open the installer, then drag the Docker icon to the Applications folder.
Double\-click the icon in the Applications folder to start Docker. In the installation
window, use the recommended settings.
**Running JupyterLab** Run Docker Desktop. Once it is running, follow the
instructions above in the Windows section on *Running JupyterLab* (the user
interface is the same). More information on installing and running Docker on
MacOS, as well as troubleshooting tips, can be
found in [the online Docker documentation](https://docs.docker.com/desktop/install/mac-install/).
### 13\.4\.3 Ubuntu
**Installation** To install Docker on Ubuntu, open the terminal and enter the following five commands.
```
# update the package index and install prerequisites
sudo apt update
sudo apt install ca-certificates curl gnupg
# download Docker's convenience installation script, make it executable, and run it
curl -fsSL https://get.docker.com -o get-docker.sh
sudo chmod u+x get-docker.sh
sudo sh get-docker.sh
```
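Optionally, you can check that the installation succeeded by running Docker's standard test image before moving on (this step is not required for the worksheets):

```
# downloads and runs the hello-world test image, which prints a confirmation message and exits
sudo docker run hello-world
```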
**Running JupyterLab** First, open
the [`Dockerfile` in the worksheets repository](https://raw.githubusercontent.com/UBC-DSCI/data-science-a-first-intro-worksheets/main/Dockerfile),
and look for the line `FROM ubcdsci/r-dsci-100:` followed by a tag consisting of a sequence of numbers and letters.
Then in the terminal, navigate to the directory where you want to run JupyterLab, and run
the following command, replacing `TAG` with the *tag* you found earlier.
```
# --rm removes the container on exit; -v mounts the current directory as the container's work folder; -p publishes JupyterLab on port 8888
docker run --rm -v $(pwd):/home/jovyan/work -p 8888:8888 ubcdsci/r-dsci-100:TAG jupyter lab
```
The terminal will then print some text as the Docker container starts. Once the text stops scrolling, find the
URL in your terminal that starts with `http://127.0.0.1:8888` (highlighted by the
red box in Figure [13\.5](setup.html#fig:ubuntu-docker-terminal)), and paste it into your browser to start JupyterLab.
More information on installing and running Docker on Ubuntu, as well as troubleshooting tips, can be found in
[the online Docker documentation](https://docs.docker.com/engine/install/ubuntu/).
Figure 13\.5: The terminal text after running the Docker container in Ubuntu. The red box indicates the URL that you should paste into your browser to open JupyterLab.
13\.5 Working with JupyterLab Desktop
-------------------------------------
You can also run the worksheets accompanying this book on your computer
using [JupyterLab Desktop](https://github.com/jupyterlab/jupyterlab-desktop).
The advantage of JupyterLab Desktop over Docker is that it can be easier to install;
Docker can sometimes run into some fairly technical issues (especially on Windows computers)
that require expert troubleshooting. The downside of JupyterLab Desktop is that there is a (very) small chance that
you may not end up with the right versions of all the R packages needed for the worksheets. Docker, on the other hand,
*guarantees* that the worksheets will work exactly as intended.
In this section, we will cover how to install JupyterLab Desktop,
Git and the JupyterLab Git extension (for version control, as discussed in Chapter [12](version-control.html#version-control)), and
all of the R packages needed to run
the code in this book.
### 13\.5\.1 Windows
**Installation** First, we will install Git for version control.
Go to [the Git download page](https://git-scm.com/download/win) and
download the Windows version of Git. Once the download has finished, run the installer and accept
the default configuration for all pages.
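Once the installer finishes, you can confirm that Git is available by opening a terminal (for example, Git Bash, which is installed alongside Git) and running the command below; it should print a version number.

```
git --version
```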
Next, visit the [“Installation” section of the JupyterLab Desktop homepage](https://github.com/jupyterlab/jupyterlab-desktop#installation).
Download the `JupyterLab-Setup-Windows.exe` installer file for Windows.
Double\-click the installer to run it, and use the default settings.
Run JupyterLab Desktop by clicking the icon on your desktop.
**Configuring JupyterLab Desktop**
Next, in the JupyterLab Desktop graphical interface that appears (Figure [13\.6](setup.html#fig:setup-jlab-gui)),
you will see text at the bottom saying “Python environment not found”. Click “Install using the bundled installer”
to set up the environment.
Figure 13\.6: The JupyterLab Desktop graphical user interface.
Next, we need to add the JupyterLab Git extension (so that
we can use version control directly from within JupyterLab Desktop),
the IRkernel (to enable the R programming language),
and various R software packages. Click “New session…” in the JupyterLab Desktop
user interface, then scroll to the bottom, and click “Terminal” under the “Other” heading (Figure [13\.7](setup.html#fig:setup-jlab-gui-2)).
Figure 13\.7: A JupyterLab Desktop session, showing the Terminal option at the bottom.
In this terminal, run the following commands:
```
# install the JupyterLab Git extension
pip install --upgrade jupyterlab-git
# install R, the IRkernel, and the R packages pinned in the worksheets' environment.yml file
conda env update --file https://raw.githubusercontent.com/UBC-DSCI/data-science-a-first-intro-worksheets/main/environment.yml
```
The second command installs the specific R and package versions specified in
the `environment.yml` file found in
[the worksheets repository](https://worksheets.datasciencebook.ca).
We will always keep the versions in the `environment.yml` file updated
so that they are compatible with the exercise worksheets that accompany the book.
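After both commands complete, you can check from the same terminal that the environment is in place. The sketch below uses standard conda and Jupyter commands; the exact package list you see will depend on the current contents of `environment.yml`.

```
# list the packages and versions installed in the bundled environment
conda list
# confirm that an R kernel (typically registered as "ir") is now available to Jupyter
jupyter kernelspec list
```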
Once all of the software installation is complete, it is a good idea to restart
JupyterLab Desktop entirely before you proceed to doing your data analysis.
This will ensure all the software and settings you put in place are
correctly set up and ready for use.
### 13\.5\.2 MacOS
**Installation** First, we will install Git for version control.
Open the terminal ([how\-to video](https://youtu.be/5AJbWEWwnbY))
and type the following command:
```
xcode-select --install
```
Next, visit the [“Installation” section of the JupyterLab Desktop homepage](https://github.com/jupyterlab/jupyterlab-desktop#installation).
Download the `JupyterLab-Setup-MacOS-x64.dmg` or `JupyterLab-Setup-MacOS-arm64.dmg` installer file.
To know which installer is right for your machine, you need to know
whether your computer has an Intel processor (older machines) or an
Apple processor (newer machines); the [Apple support page](https://support.apple.com/en-ca/HT211814) has
information to help you determine which processor you have.
Once downloaded, double\-click the file to open the installer, then drag
the JupyterLab Desktop icon to the Applications folder. Double\-click
the icon in the Applications folder to start JupyterLab Desktop.
**Configuring JupyterLab Desktop** From this point onward, with JupyterLab Desktop running,
follow the instructions in the Windows section on *Configuring JupyterLab Desktop* to set up the
environment, install the JupyterLab Git extension, and install
the various R software packages needed for the worksheets.
### 13\.5\.3 Ubuntu
**Installation** First, we will install Git for version control.
Open the terminal and type the following commands:
```
sudo apt update
sudo apt install git
```
Next, visit the [“Installation” section of the JupyterLab Desktop homepage](https://github.com/jupyterlab/jupyterlab-desktop#installation).
Download the `JupyterLab-Setup-Debian.deb` installer file for Ubuntu/Debian.
Open a terminal, navigate to where the installer file was downloaded, and run the command
```
sudo dpkg -i JupyterLab-Setup-Debian.deb
```
Run JupyterLab Desktop using the command
```
jlab
```
**Configuring JupyterLab Desktop** From this point onward, with JupyterLab Desktop running,
follow the instructions in the Windows section on *Configuring JupyterLab Desktop* to set up the
environment, install the JupyterLab Git extension, and install
the various R software packages needed for the worksheets.
| Data Science |
mdsr-book.github.io | https://mdsr-book.github.io/mdsr2e/ch-vizI.html |
Chapter 2 Data visualization
============================
Data graphics provide one of the most accessible, compelling, and expressive modes to investigate and depict patterns in data. This chapter will motivate why well\-designed data graphics are important and describe a taxonomy for understanding their composition.
If you are seeing this material for the first time, you will never look at data graphics the same way again—yours will soon be a more critical lens.
2\.1 The 2012 federal election cycle
------------------------------------
Every four years, the presidential election draws an enormous amount of interest in the United States. The most prominent candidates announce their candidacy nearly two years before the November elections, beginning the process of raising the hundreds of millions of dollars necessary to orchestrate a national campaign. In many ways, the experience of running a successful presidential campaign is in itself evidence of the leadership and organizational skills necessary to be [*commander\-in\-chief*](https://en.wikipedia.org/w/index.php?search=commander-in-chief).
Voices from all parts of the political spectrum are critical of the influence of money upon political campaigns.
While the contributions from individual citizens to individual candidates are [limited in various ways](http://www.fec.gov/pages/brochures/citizens.shtml), the Supreme Court’s decision in [*Citizens United v. Federal Election Commission*](https://en.wikipedia.org/w/index.php?search=Citizens%20United%20v.%20Federal%20Election%20Commission) allows unlimited political spending by corporations (non\-profit or otherwise).
This has resulted in a system of committees (most notably, [*political action committees*](https://en.wikipedia.org/w/index.php?search=political%20action%20committees), PACs) that can accept unlimited contributions and spend them on behalf of (or against) a particular candidate or set of candidates.
Unraveling the complicated network of campaign spending is a subject [of great interest](http://www.nytimes.com/interactive/2015/10/11/us/politics/2016-presidential-election-super-pac-donors.html).
To perform that unraveling is an exercise in data science.
The [*Federal Election Commission*](https://en.wikipedia.org/w/index.php?search=Federal%20Election%20Commission) (FEC) maintains a website with logs of not only all of the ($200 or more) contributions made by individuals to candidates and committees, but also of spending by committees on behalf of (and against) candidates.
Of course, the FEC also maintains data on which candidates win elections, and by how much. These data sources are separate, and it requires some ingenuity to piece them together.
We will develop these skills in Chapters [4](ch-dataI.html#ch:dataI)–[6](ch-dataII.html#ch:dataII), but for now, we will focus on graphical displays of the information that can be gleaned from these data.
Our emphasis at this stage is on making intelligent decisions about how to display certain data, so that a clear (and correct) message is delivered.
Among the most basic questions is: How much money did each candidate raise?
However, the convoluted campaign finance network makes even this simple question difficult to answer, and—perhaps more importantly—less meaningful than we might think. A better question is: On whose candidacy was the most money spent? In Figure [2\.1](ch-vizI.html#fig:spent), we show a bar graph of the amount of money (in millions of dollars) that were spent by committees on particular candidates during the general election phase of the [*2012 federal election cycle*](https://en.wikipedia.org/w/index.php?search=2012%20federal%20election%20cycle). This includes candidates for president, the [*United States Senate*](https://en.wikipedia.org/w/index.php?search=United%20States%20Senate), and the [*United States House of Representatives*](https://en.wikipedia.org/w/index.php?search=United%20States%20House%20of%20Representatives). Only candidates on whose campaign at least $4 million was spent are included in Figure [2\.1](ch-vizI.html#fig:spent).
Figure 2\.1: Amount of money spent on individual candidates in the general election phase of the 2012 federal election cycle, in millions of dollars. Candidacies with at least $4 million in spending are depicted.
It seems clear from Figure [2\.1](ch-vizI.html#fig:spent) that President [Barack Obama](https://en.wikipedia.org/w/index.php?search=Barack%20Obama)’s re\-election campaign spent far more money than any other candidate, in particular more than doubling the amount of money spent by his [*Republican*](https://en.wikipedia.org/w/index.php?search=Republican) challenger, [Mitt Romney](https://en.wikipedia.org/w/index.php?search=Mitt%20Romney). However, committees are not limited to spending money in support of a candidate—they can also spend money **against** a particular candidate ([*attack ads*](https://en.wikipedia.org/w/index.php?search=attack%20ads)). In Figure [2\.2](ch-vizI.html#fig:spent-attack), we separate the same spending shown in Figure [2\.1](ch-vizI.html#fig:spent) by whether the money was spent for or against the candidate.
Figure 2\.2: Amount of money spent on individual candidates in the general election phase of the 2012 federal election cycle, in millions of dollars, broken down by type of spending. Candidacies with at least $4 million in spending are depicted.
In these elections, most of the money was spent against each candidate, and in particular, $251 million of the $274 million spent on President Obama’s campaign was spent against his candidacy. Similarly, most of the money spent on [Mitt Romney](https://en.wikipedia.org/w/index.php?search=Mitt%20Romney)’s campaign was against him, but the percentage of negative spending on Romney’s campaign (70%) was lower than that of Obama (92%).
The difference between Figure [2\.1](ch-vizI.html#fig:spent) and Figure [2\.2](ch-vizI.html#fig:spent-attack) is that in the latter we have used color to bring a third variable (type of spending) into the plot. This allows us to make a clear comparison that importantly changes the conclusions we might draw from the former plot. In particular, Figure [2\.1](ch-vizI.html#fig:spent) makes it appear as though President Obama’s [*war chest*](https://en.wikipedia.org/w/index.php?search=war%20chest) dwarfed that of Romney, when in fact the opposite was true.
### 2\.1\.1 Are these two groups different?
Since so much more money was spent attacking Obama’s campaign than Romney’s, you might conclude from Figure [2\.2](ch-vizI.html#fig:spent-attack) that Republicans were more successful in fundraising during this election cycle. In Figure [2\.3](ch-vizI.html#fig:raised-party), we can confirm that this was indeed the case, since more money was spent supporting Republican candidates than Democrats, and more money was spent attacking Democratic candidates than Republican. It also seems clear from Figure [2\.3](ch-vizI.html#fig:raised-party) that nearly all of the money was spent on either Democrats or Republicans.[2](#fn2)
Figure 2\.3: Amount of money spent on individual candidacies by political party affiliation during the general election phase of the 2012 federal election cycle.
However, the question of whether the money spent on candidates really differed by party affiliation is a bit thornier. As we saw above, the presidential election dominated the political donations in this election cycle. Romney faced a serious disadvantage in trying to unseat an incumbent president. In this case, the office being sought is a confounding variable. By further subdividing the contributions in Figure [2\.3](ch-vizI.html#fig:raised-party) by the office being sought, we can see in Figure [2\.4](ch-vizI.html#fig:raised-office) that while more money was spent supporting Republican candidates for all elective branches of government, it was only in the presidential election that more money was spent attacking Democratic candidates. In fact, slightly more money was spent attacking Republican House and Senate candidates.
Figure 2\.4: Amount of money spent on individual candidacies by political party affiliation during the general election phase of the 2012 federal election cycle, broken down by office being sought (House, President, or Senate).
Note that Figures [2\.3](ch-vizI.html#fig:raised-party) and [2\.4](ch-vizI.html#fig:raised-office) display the same data. In Figure [2\.4](ch-vizI.html#fig:raised-office), we have an additional variable that provides an important clue into the mystery of campaign finance. Our choice to include that variable results in Figure [2\.4](ch-vizI.html#fig:raised-office) conveying substantially more meaning than Figure [2\.3](ch-vizI.html#fig:raised-party), even though both figures are “correct.” In this chapter, we will begin to develop a framework for creating principled data graphics.
### 2\.1\.2 Graphing variation
One theme that arose during the presidential election was the allegation that Romney’s campaign was supported by a few rich donors, whereas Obama’s support came from people across the economic spectrum. If this were true, then we would expect to see a difference in the distribution of donation amounts between the two candidates. In particular, we would expect to see this in the histograms shown in Figure [2\.5](ch-vizI.html#fig:donations), which summarize the more than one million donations made by individuals to the two major committees that supported each candidate (for Obama, Obama for America, and the Obama Victory Fund 2012; for Romney, Romney for President, and Romney Victory 2012\). We do see some evidence for this claim in Figure [2\.5](ch-vizI.html#fig:donations): Obama did appear to receive more small donations, but the evidence is far from conclusive. One problem is that both candidates received many small donations but just a few larger donations; the scale on the horizontal axis makes it difficult to actually see what is going on. Secondly, the histograms are hard to compare in a side\-by\-side placement. Finally, we have lumped all of the donations from both phases of the presidential election (i.e., primary vs. general) together.
Figure 2\.5: Donations made by individuals to the PACs supporting the two major presidential candidates in the 2012 election.
In Figure [2\.6](ch-vizI.html#fig:donations-split), we remedy these issues by (1\) using density curves instead of histograms, so that we can compare the distributions directly, (2\) plotting the logarithm of the donation amount on the horizontal scale to focus on the data that are important, and (3\) separating the donations by the phase of the election. Figure [2\.6](ch-vizI.html#fig:donations-split) allows us to make more nuanced conclusions. The right panel supports the allegation that Obama’s donations came from a broader base during the primary election phase. It does appear that more of Obama’s donations came in smaller amounts during this phase of the election. However, in the general phase, there is virtually no difference in the distribution of donations made to either campaign.
Figure 2\.6: Donations made by individuals to the PACs supporting the two major presidential candidates in the 2012 election, separated by election phase.
### 2\.1\.3 Examining relationships among variables
Naturally, the biggest questions raised by the *Citizens United* decision are about the influence of money in elections. If campaign spending is unlimited, does this mean that the candidate who generates the most spending on their behalf will earn the most votes? One way that we might address this question is to compare the amount of money spent on each candidate in each election with the number of votes that candidate earned. Statisticians will want to know the [*correlation*](https://en.wikipedia.org/w/index.php?search=correlation) between these two quantities—when one is high, is the other one likely to be high as well?
Since all 435 members of the United States House of Representatives are elected every two years, and the districts contain roughly the same number of people, House elections provide a nice data set to make this type of comparison. In Figure [2\.7](ch-vizI.html#fig:votes), we show a simple scatterplot relating the number of dollars spent on behalf of the Democratic candidate against the number of votes that candidate earned for each of the House elections.
Figure 2\.7: Scatterplot illustrating the relationship between number of dollars spent supporting and number of votes earned by Democrats in 2012 elections for the House of Representatives.
The relationship between the two quantities depicted in Figure [2\.7](ch-vizI.html#fig:votes) is very weak. It does not appear that candidates who benefited more from campaign spending earned more votes. However, the comparison in Figure [2\.7](ch-vizI.html#fig:votes) is misleading. On both axes, it is not the *amount* that is important, but the *proportion*. Although the population of each congressional district is similar, they are not the same, and voter turnout will vary based on a variety of factors. By comparing the proportion of the vote, we can control for the size of the voting population in each district. Similarly, it makes less sense to focus on the total amount of money spent, as opposed to the proportion of money spent. In Figure [2\.8](ch-vizI.html#fig:votes-better), we present the same comparison, but with both axes scaled to proportions.
Figure 2\.8: Scatterplot illustrating the relationship between proportion of dollars spent supporting and proportion of votes earned by Democrats in the 2012 House of Representatives elections. Each dot represents one district. The size of each dot is proportional to the total spending in that election, and the alpha transparency of each dot is proportional to the total number of votes in that district.
Figure [2\.8](ch-vizI.html#fig:votes-better) captures many nuances that were impossible to see in Figure [2\.7](ch-vizI.html#fig:votes). First, there *does* appear to be a positive association between the percentage of money supporting a candidate and the percentage of votes that they earn. However, that relationship is of greatest interest towards the center of the plot, where elections are actually contested. Outside of this region, one candidate wins more than 55% of the vote. In this case, there is usually very little money spent. These are considered “safe” House elections—you can see these points on the plot because most of them are close to \\(x\=0\\) or \\(x\=1\\), and the dots are very small. For example, one of the points in the lower\-left corner is the [*8th district in Ohio*](https://en.wikipedia.org/w/index.php?search=8th%20district%20in%20Ohio), which was won by the then [*Speaker of the House*](https://en.wikipedia.org/w/index.php?search=Speaker%20of%20the%20House) [John Boehner](https://en.wikipedia.org/w/index.php?search=John%20Boehner), who ran unopposed. The election in which the most money was spent (over $11 million) was also in Ohio. In the 16th district, Republican incumbent [Jim Renacci](https://en.wikipedia.org/w/index.php?search=Jim%20Renacci) narrowly defeated Democratic challenger [Betty Sutton](https://en.wikipedia.org/w/index.php?search=Betty%20Sutton), who was herself an incumbent from the 13th district. This battle was made possible through decennial redistricting (see Chapter [17](ch-spatial.html#ch:spatial)). Of the money spent in this election, 51\.2% was in support of Sutton but she earned only 48\.0% of the votes.
In the center of the plot, the dots are bigger, indicating that more money is being spent on these contested elections. Of course this makes sense, since candidates who are fighting for their political lives are more likely to fundraise aggressively. Nevertheless, the evidence that more financial support correlates with more votes in contested elections is relatively weak.
### 2\.1\.4 Networks
Not all relationships among variables are sensibly expressed by a scatterplot. Another way in which variables can be related is in the form of a network (we will discuss these in more detail in Chapter [20](ch-netsci.html#ch:netsci)). In this case, campaign funding has a network structure in which individuals donate money to committees, and committees then spend money on behalf of candidates. While the national campaign funding network is far too complex to show here, in Figure [2\.9](ch-vizI.html#fig:ma-funding) we display the funding network for candidates from [*Massachusetts*](https://en.wikipedia.org/w/index.php?search=Massachusetts).
In Figure [2\.9](ch-vizI.html#fig:ma-funding), we see that the two campaigns that benefited the most from committee spending were Republicans [Mitt Romney](https://en.wikipedia.org/w/index.php?search=Mitt%20Romney) and [Scott Brown](https://en.wikipedia.org/w/index.php?search=Scott%20Brown). This is not surprising, since Romney was running for president and received massive donations from the Republican National Committee, while Brown was running to keep his Senate seat in a heavily Democratic state against a strong challenger, [Elizabeth Warren](https://en.wikipedia.org/w/index.php?search=Elizabeth%20Warren). Both men lost their elections. The constellation of blue dots are the congressional delegation from Massachusetts, all of whom are Democrats.
Figure 2\.9: Campaign funding network for candidates from Massachusetts, 2012 federal elections. Each edge represents a contribution from a PAC to a candidate.
2\.2 Composing data graphics
----------------------------
Former *New York Times* intern and [FlowingData.com](https://flowingdata.com/) creator [Nathan Yau](https://en.wikipedia.org/w/index.php?search=Nathan%20Yau) makes the analogy that creating data graphics is like cooking: Anyone can learn to type graphical commands and generate plots on the computer. Similarly, anyone can heat up food in a microwave. What separates a high\-quality visualization from a plain one are the same elements that separate great chefs from novices: mastery of their tools, knowledge of their ingredients, insight, and creativity (Yau 2013\). In this section, we present a framework—rooted in scientific research—for understanding data graphics. Our hope is that by internalizing these ideas you will refine your data graphics palette.
### 2\.2\.1 A taxonomy for data graphics
The taxonomy presented in Yau (2013\) provides a systematic way of thinking about how data graphics convey specific pieces of information and how they could be improved. A complementary *grammar* of graphics (Wilkinson et al. 2005\) is implemented by [Hadley Wickham](https://en.wikipedia.org/w/index.php?search=Hadley%20Wickham) in the **ggplot2** graphics package (Hadley Wickham 2016\), albeit using slightly different terminology. For clarity, we will postpone discussion of **ggplot2** until Chapter [3](ch-vizII.html#ch:vizII). (To extend our cooking analogy, you must learn to taste before you can learn to cook well.)
In this framework, data graphics can be understood in terms of four basic elements: visual cues, coordinate systems, scale, and context. In what follows, we explicate this vision and append a few additional items (facets and layers). This section should equip the careful reader with the ability to systematically break down data graphics, enabling a more critical analysis of their content.
#### 2\.2\.1\.1 Visual Cues
Visual cues are graphical elements that draw the eye to what you want your audience to focus upon. They are the fundamental building blocks of data graphics, and the choice of which visual cues to use to represent which quantities is the central question for the data graphic composer. Table [2\.1](ch-vizI.html#tab:visual-cues) identifies nine distinct visual cues, for which we also list whether that cue is used to encode a numerical or categorical quantity:
Table 2\.1: Visual cues and what they signify.
| Visual Cue | Variable Type | Question |
| --- | --- | --- |
| Position | numerical | where in relation to other things? |
| Length | numerical | how big (in one dimension)? |
| Angle | numerical | how wide? parallel to something else? |
| Direction | numerical | at what slope? in a time series, going up or down? |
| Shape | categorical | belonging to which group? |
| Area | numerical | how big (in two dimensions)? |
| Volume | numerical | how big (in three dimensions)? |
| Shade | either | to what extent? how severely? |
| Color | either | to what extent? how severely? |
Research into graphical perception (dating back to the mid\-1980s) has shown that human beings’ ability to perceive differences in magnitude accurately descends in this order (Cleveland and McGill 1984\). That is, humans are quite good at accurately perceiving differences in position (e.g., how much taller one bar is than another), but not as good at perceiving differences in angles. This is one reason why many people prefer bar charts to [*pie charts*](https://en.wikipedia.org/w/index.php?search=pie%20charts). Our relatively poor ability to perceive differences in color is a major factor in the relatively low opinion of [*heat maps*](https://en.wikipedia.org/w/index.php?search=heat%20maps) that many data scientists have.
#### 2\.2\.1\.2 Coordinate systems
How are the data points organized? While any number of coordinate systems are possible, three are most common:
* **Cartesian**: The familiar \\((x,y)\\)\-rectangular coordinate system with two perpendicular axes.
* **Polar**: The radial analog of the Cartesian system with points identified by their radius \\(\\rho\\) and angle \\(\\theta\\).
* **Geographic**: The increasingly important system in which we have locations on the curved surface of the Earth, but we are trying to represent these locations in a flat two\-dimensional plane. We will discuss such geospatial analyses in Chapter [17](ch-spatial.html#ch:spatial).
An appropriate choice for a coordinate system is critical in representing one’s data accurately, since, for example, displaying geospatial data like airline routes on a flat [*Cartesian plane*](https://en.wikipedia.org/w/index.php?search=Cartesian%20plane) can lead to gross distortions of reality (see Section [17\.3\.2](ch-spatial.html#sec:projections)).
#### 2\.2\.1\.3 Scale
Scales translate values into visual cues.
The choice of scale is often crucial.
The central question is *how* does distance in the data graphic translate into meaningful differences in quantity? Each coordinate axis can have its own scale, for which we have three different choices:
* **Numeric**: A numeric quantity is most commonly set on a *linear*, *logarithmic*, or *percentage* scale. Note that a logarithmic scale does not have the property that, say, a one\-centimeter difference in position corresponds to an equal difference in quantity anywhere on the scale.
* **Categorical**: A categorical variable may have no ordering (e.g., Democrat, Republican, or Independent), or it may be *ordinal* (e.g., never, former, or current smoker).
* **Time**: A numeric quantity that has some special properties. First, because of the calendar, it can be demarcated by a series of different units (e.g., year, month, day, etc.). Second, it can be considered periodically (or cyclically) as a “wrap\-around” scale. Time is also so commonly used and misused that it warrants careful consideration.
Misleading with scale is easy, since it has the potential to completely distort the relative positions of data points in any graphic.
#### 2\.2\.1\.4 Context
The purpose of data graphics is to help the viewer make *meaningful* comparisons, but a bad data graphic can do just the opposite: It can instead focus the viewer’s attention on meaningless artifacts, or ignore crucial pieces of relevant but external knowledge. Context can be added to data graphics in the form of titles or subtitles that explain what is being shown, axis labels that make it clear how units and scale are depicted, or reference points or lines that contribute relevant external information. While one should avoid cluttering up a data graphic with excessive annotations, it is necessary to provide proper context.
#### 2\.2\.1\.5 Small multiples and layers
One of the fundamental challenges of creating data graphics is condensing multivariate information into a two\-dimensional image. While three\-dimensional images are occasionally useful, they are often more confusing than anything else. Instead, here are three common ways of incorporating more variables into a two\-dimensional data graphic:
* **Small multiples**: Also known as [*facets*](https://en.wikipedia.org/w/index.php?search=facets), a single data graphic can be composed of several small multiples of the same basic plot, with one (discrete) variable changing in each of the small sub\-images.
* **Layers**: It is sometimes appropriate to draw a new layer on top of an existing data graphic. This new layer can provide context or comparison, but there is a limit to how many layers humans can reliably parse.
* **Animation**: If time is the additional variable, then an animation can sometimes effectively convey changes in that variable. Of course, this doesn’t work on the printed page and makes it impossible for the user to see all the data at once.
### 2\.2\.2 Color
Color is one of the flashiest, but most misperceived and misused visual cues.
In making color choices, there are a few key ideas that are important for any data scientist to understand.
First, as we saw above, color and its monochromatic cousin *shade* are two of the most poorly perceived visual cues. Thus, while potentially useful for a small number of levels of a categorical variable, color and shade are not particularly faithful ways to represent numerical variables—especially if small differences in those quantities are important to distinguish. This means that while color can be visually appealing to humans, it often isn’t as informative as we might hope. For two numeric variables, it is hard to think of examples where color and shade would be more useful than position. Where color can be most effective is to represent a *third* or *fourth* numeric quantity on a scatterplot—once the two position cues have been exhausted.
Second, approximately 8% of the population—most of whom are men—have some form of color blindness. Most commonly, this renders them incapable of seeing colors accurately, most notably of distinguishing between red and green. Compounding the problem, many of these people do not know that they are color\-blind. Thus, for professional graphics it is worth thinking carefully about which colors to use. The [*National Football League*](https://en.wikipedia.org/w/index.php?search=National%20Football%20League) famously failed to account for this in [a 2015 game](http://espn.go.com/nfl/story/_/id/14116795/newyork-jets-buffalo-bills-jerseys-problematic-colorblind-fans) in which the [*Buffalo Bills*](https://en.wikipedia.org/w/index.php?search=Buffalo%20Bills) wore all\-red jerseys and the [*New York Jets*](https://en.wikipedia.org/w/index.php?search=New%20York%20Jets) wore all\-green, leaving colorblind fans unable to distinguish one team from the other!
To prevent issues with color blindness, avoid contrasting red with green in data graphics. As a bonus, your plots won’t seem Christmas\-y!
Thankfully, we have been freed from the burden of having to create such intelligent palettes by the research of [Cynthia Brewer](https://en.wikipedia.org/w/index.php?search=Cynthia%20Brewer), creator of the [ColorBrewer](http://colorbrewer2.org/) website (and inspiration for the **RColorBrewer** **R** package).
Brewer has created colorblind\-safe palettes in a variety of hues for three different types of numeric data in a single variable:
* **Sequential**: The ordering of the data has only one direction. Positive integers are sequential because they can only go up: they can’t go below 0\. (Thus, if 0 is encoded as white, then any darker shade of gray indicates a larger number.)
* **Diverging**: The ordering of the data has two directions. In an election forecast, we commonly see states colored based on how they are expected to vote for the president. Since red is associated with Republicans and blue with Democrats, states that are solidly red or blue are on opposite ends of the scale. But “swing states” that could go either way may appear purple, white, or some other neutral color that is “between” red and blue (see Figure [2\.10](ch-vizI.html#fig:brewer2)).
* **Qualitative**: There is no ordering of the data, and we simply need color to differentiate different categories.
Figure 2\.10: Diverging red\-blue color palette.
The **RColorBrewer** package provides functionality to use these palettes directly in **R**. Figure [2\.11](ch-vizI.html#fig:brewer) illustrates the sequential, qualitative, and diverging palettes built into **RColorBrewer**.
Figure 2\.11: Palettes available through the **RColorBrewer** package.
Take the extra time to use a well\-designed color palette. Accept that those who work with color for a living will probably choose better colors than you.
Other excellent perceptually distinct color palettes are provided by the **viridis** package. These palettes mimic those that are used in the [*matplotlib*](https://en.wikipedia.org/w/index.php?search=matplotlib) plotting library for [*Python*](https://en.wikipedia.org/w/index.php?search=Python). The **viridis** palettes are also accessible in **ggplot2** through, for example, the `scale_color_viridis()` function.
### 2\.2\.3 Dissecting data graphics
With a little practice, one can learn to dissect data graphics in terms of the taxonomy outlined above. For example, your basic scatterplot uses *position* in the *Cartesian* plane with *linear* scales to show the relationship between two variables. In what follows, we identify the visual cues, coordinate system, and scale in a series of simple data graphics.
1. The bar graph in Figure [2\.12](ch-vizI.html#fig:sat-dot) displays the average score on the math portion of the 1994–1995 [*SAT*](https://en.wikipedia.org/w/index.php?search=SAT) (with possible scores ranging from 200 to 800\) among states for whom at least two\-thirds of the students took the SAT.
Figure 2\.12: Bar graph of average SAT scores among states with at least two\-thirds of students taking the test.
This plot uses the visual cue of [*length*](https://en.wikipedia.org/w/index.php?search=length) to represent the math SAT score on the vertical axis with a *linear* scale. The [*categorical*](https://en.wikipedia.org/w/index.php?search=categorical) variable of `state` is arrayed on the horizontal axis. Although the states are ordered alphabetically, it would not be appropriate to consider the `state` variable to be ordinal, since the ordering is not meaningful in the context of math SAT scores. The coordinate system is *Cartesian*, although as noted previously, the horizontal coordinate is meaningless. Context is provided by the axis labels and title. Note also that since 200 is the minimum score possible on each section of the SAT, the vertical axis has been constrained to start at 200\.
2. Next, we consider a time series that shows the progression of the world record times in the [*100\-meter freestyle*](https://en.wikipedia.org/w/index.php?search=100-meter%20freestyle) swimming event for men and women. Figure [2\.13](ch-vizI.html#fig:swimgg) displays the times as a function of the year in which the new record was set.
Figure 2\.13: Scatterplot of world record time in 100\-m freestyle swimming.
At some level this is simply a scatterplot that uses [*position*](https://en.wikipedia.org/w/index.php?search=position) on both the vertical and horizontal axes to indicate swimming time and chronological time, respectively, in a [*Cartesian plane*](https://en.wikipedia.org/w/index.php?search=Cartesian%20plane). The numeric scale on the vertical axis is linear, in units of seconds, while the scale on the horizontal axis is also linear, measured in years. But there is more going on here. Color is being used as a visual cue to distinguish the categorical variable `sex`. Furthermore, since the points are connected by lines, [*direction*](https://en.wikipedia.org/w/index.php?search=direction) is being used to indicate the progression of the record times. (In this case, the records can only get faster, so the direction is always down.) One might even argue that [*angle*](https://en.wikipedia.org/w/index.php?search=angle) is being used to compare the descent of the world records across time and/or gender. In fact, in this case [*shape*](https://en.wikipedia.org/w/index.php?search=shape) is also being used to distinguish `sex`.
3. Next, we present two pie charts in Figure [2\.14](ch-vizI.html#fig:pie) indicating the different substances of abuse for subjects in the [*Health Evaluation and Linkage to Primary Care*](https://en.wikipedia.org/w/index.php?search=Health%20Evaluation%20and%20Linkage%20to%20Primary%20Care) (HELP) clinical trial (“Linking Alcohol and Drug Dependent Adults to Primary Medical Care: A Randomized Controlled Trial of a Multidisciplinary Health Intervention in a Detoxification Unit” 2003\).
Each subject was identified with involvement with one primary substance (alcohol, cocaine, or heroin).
On the right, we see the distribution of substance for housed (no nights in shelter or on the street) participants is fairly evenly distributed, while on the left, we see the same distribution for those who were homeless one or more nights (more likely to have alcohol as their primary substance of abuse).
Figure 2\.14: Pie charts showing the breakdown of substance of abuse among HELP study participants, faceted by homeless status. Compare this to Figure [3\.13](ch-vizII.html#fig:stacked-bar).
This graphic uses a [*radial*](https://en.wikipedia.org/w/index.php?search=radial) coordinate system and the visual cue of [*color*](https://en.wikipedia.org/w/index.php?search=color) to distinguish the three levels of the [*categorical*](https://en.wikipedia.org/w/index.php?search=categorical) variable `substance`.
The visual cue of [*angle*](https://en.wikipedia.org/w/index.php?search=angle) is being used to quantify the differences in the proportion of patients using each substance.
Are you able to accurately identify these percentages from the figure? The actual percentages are shown as follows.
```
# A tibble: 3 × 3
substance Homeless Housed
<fct> <chr> <chr>
1 alcohol n = 103 (49.3%) n = 74 (30.3%)
2 cocaine n = 59 (28.2%) n = 93 (38.1%)
3 heroin n = 47 (22.5%) n = 77 (31.6%)
```
This is a case where a simple table of these proportions is more effective at communicating the true differences than this—and probably any—data graphic. Note that there are only six data points presented, so any graphic is probably gratuitous.
Don’t use pie charts, except perhaps in small multiples.
4. Finally, in Figure [2\.15](ch-vizI.html#fig:choropleth-ma) we present a [*choropleth map*](https://en.wikipedia.org/w/index.php?search=choropleth%20map) showing the population of Massachusetts by the 2010 Census tracts.
Figure 2\.15: Choropleth map of population among Massachusetts Census tracts, based on 2018 American Community Survey.
Clearly, we are using a *geographic* coordinate system here, with [*latitude*](https://en.wikipedia.org/w/index.php?search=latitude) and [*longitude*](https://en.wikipedia.org/w/index.php?search=longitude) on the vertical and horizontal axes, respectively. (This plot is not projected: More information about projection systems is provided in Chapter [17](ch-spatial.html#ch:spatial).) *Shade* is once again being used to represent the quantity `population`, but here the scale is more complicated. The ten shades of blue have been mapped to the [*decile*](https://en.wikipedia.org/w/index.php?search=decile)s of the census tract populations, and since the distribution of population across these tracts is [*right\-skewed*](https://en.wikipedia.org/w/index.php?search=right-skewed), each shade does not correspond to a range of people of the same width, but rather to the same number of tracts that have a population in that range. Helpful context is provided by the title, subtitle, and legend.
2\.3 Importance of data graphics: *Challenger*
----------------------------------------------
On January 27th, 1986, engineers at [*Morton Thiokol*](https://en.wikipedia.org/w/index.php?search=Morton%20Thiokol), who supplied solid rocket motors (SRMs) to [*NASA*](https://en.wikipedia.org/w/index.php?search=NASA) for the [*space shuttle*](https://en.wikipedia.org/w/index.php?search=space%20shuttle), recommended that NASA delay the launch of the space shuttle [*Challenger*](https://en.wikipedia.org/w/index.php?search=Challenger) due to concerns that the cold weather forecast for the next day’s launch would jeopardize the stability of the rubber [*O\-rings*](https://en.wikipedia.org/w/index.php?search=O-rings) that held the rockets together. These engineers provided 13 charts that were reviewed over a two\-hour conference call involving the engineers, their managers, and NASA. The engineers’ recommendation was overruled due to a lack of persuasive evidence, and the launch proceeded on schedule. The O\-rings failed in exactly the manner the engineers had feared: 73 seconds after launch, [*Challenger* exploded](https://www.youtube.com/watch?v=j4JOjcDFtBE), and all seven astronauts on board died (Tufte 1997\).
In addition to the tragic loss of life, the incident was a devastating blow to NASA and the United States space program. The hand\-wringing that followed included a two\-and\-a\-half year hiatus for NASA and the formation of the [*Rogers Commission*](https://en.wikipedia.org/w/index.php?search=Rogers%20Commission) to study the disaster. What became clear is that the Morton Thiokol engineers had correctly identified the key causal link between *temperature* and *O\-ring damage*. They did this using statistical data analysis combined with a plausible physical explanation: in short, that the rubber O\-rings became brittle in low temperatures. (This link was famously demonstrated by legendary physicist and Rogers Commission member [Richard Feynman](https://en.wikipedia.org/w/index.php?search=Richard%20Feynman) during the hearings, using a glass of water and some ice cubes (Tufte 1997\).) Thus, the engineers were able to identify the critical weakness using their [*domain knowledge*](https://en.wikipedia.org/w/index.php?search=domain%20knowledge)—in this case, rocket science—and their data analysis.
Their failure—and its horrific consequences—was one of persuasion: They simply did not present their evidence in a convincing manner to the NASA officials who ultimately made the decision to proceed with the launch. More than 30 years later this tragedy remains critically important. The evidence brought to the discussions about whether to launch was in the form of hand written data tables (or “charts”), but none were graphical. In his sweeping critique of the incident, [Edward Tufte](https://en.wikipedia.org/w/index.php?search=Edward%20Tufte) created a powerful scatterplot similar to Figures [2\.16](ch-vizI.html#fig:tufte0) and [2\.17](ch-vizI.html#fig:tufte), which were derived from data that the engineers had at the time, but in a far more effective presentation (Tufte 1997\).
Figure 2\.16: A scatterplot with smoother demonstrating the relationship between temperature and O\-ring damage on solid rocket motors. The dots are semi\-transparent, so that darker dots indicate multiple observations with the same values.
Figure [2\.16](ch-vizI.html#fig:tufte0) indicates a clear relationship between the ambient temperature and O\-ring damage on the solid rocket motors. To demonstrate the dramatic extrapolation made to the predicted temperature on January 27th, 1986, Tufte extended the horizontal axis in his scatterplot (Figure [2\.17](ch-vizI.html#fig:tufte)) to include the forecast temperature. The huge gap makes plain the problem with extrapolation.
Reprints of two Morton Thiokol data graphics are shown in Figures [2\.18](ch-vizI.html#fig:challenger1) and [2\.19](ch-vizI.html#fig:challenger2) (Tufte 1997\).
Figure 2\.17: A recreation of Tufte’s scatterplot demonstrating the relationship between temperature and O\-ring damage on solid rocket motors.
Figure 2\.18: One of the original 13 charts presented by Morton Thiokol engineers to NASA on the conference call the night before the [*Challenger*](https://en.wikipedia.org/w/index.php?search=Challenger) launch. This is one of the more data\-intensive charts.
Figure 2\.19: Evidence presented during the congressional hearings after the [*Challenger*](https://en.wikipedia.org/w/index.php?search=Challenger) explosion.
Tufte provides a full critique of the engineers’ failures (Tufte 1997\), many of which are instructive for data scientists.
* **Lack of authorship**: There were no names on any of the charts. This creates a lack of accountability. No single person was willing to take responsibility for the data contained in any of the charts. It is much easier to refute an argument made by a group of nameless people than one made by a single named person or a group of named people.
* **Univariate analysis**: The engineers provided several data tables, but all were essentially univariate. That is, they presented data on a single variable, but did not illustrate the relationship between two variables. Note that while Figure [2\.18](ch-vizI.html#fig:challenger1) does show data for two different variables, it is very hard to see the connection between the two in tabular form. Since the crucial connection here was between temperature and O\-ring damage, this lack of bivariate analysis was probably the single most damaging omission in the engineers’ presentation.
* **Anecdotal evidence**: With such a small sample size, anecdotal evidence can be particularly challenging to refute. In this case, a bogus comparison was made based on two observations. While the engineers argued that SRM\-15 had the most damage on the coldest previous launch date (see Figure [2\.17](ch-vizI.html#fig:tufte)), NASA officials were able to counter that SRM\-22 had the second\-most damage on one of the warmer launch dates. These anecdotal pieces of evidence fall apart when all of the data are considered in context—in Figure [2\.17](ch-vizI.html#fig:tufte), it is clear that SRM\-22 is an outlier that deviates from the general pattern—but the engineers never presented all of the data in context.
* **Omitted data**: For some reason, the engineers chose not to present data from 22 other flights, which collectively represented 92% of
launches. This may have been due to time constraints. This dramatic reduction in the accumulated evidence played a role in enabling the anecdotal evidence outlined above.
* **Confusion**: No doubt working against the clock, and most likely working in tandem, the engineers were not always clear about two different types of damage: *erosion* and *blow\-by*. A failure to clearly define these terms may have hindered understanding on the part of NASA officials.
* **Extrapolation**: Most forcefully, the failure to include a simple scatterplot of the full data obscured the “stupendous extrapolation” (Tufte 1997\) necessary to justify the launch. The bottom line was that the forecast launch temperature (between 26 and 29 degrees [*Fahrenheit*](https://en.wikipedia.org/w/index.php?search=Fahrenheit)) was so much colder than anything that had occurred previously that any model for O\-ring damage as a function of temperature would be untested (see the sketch after this list).
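To make that extrapolation concrete, here is a minimal sketch in **R**. The temperature and damage values below are made up to stand in for the actual O\-ring records, and the naive linear fit is only a placeholder model; the point is how far a 29\-degree forecast sits outside the observed range, not the particular model.

```
# Illustrative stand-in for the O-ring data: launch temperature (deg F) and a
# damage index. These numbers are invented for the sketch, not the real records.
orings <- data.frame(
  temp = c(53, 57, 63, 66, 67, 68, 70, 70, 72, 75, 76, 79, 81),
  damage = c(11, 4, 2, 0, 0, 0, 1, 0, 0, 2, 0, 0, 0)
)
fit <- lm(damage ~ temp, data = orings) # a deliberately naive model

range(orings$temp) # observed launches span 53 to 81 degrees in this sketch
# The forecast temperature lies far below anything observed, so any prediction
# here is pure extrapolation, whatever model is chosen.
predict(fit, newdata = data.frame(temp = 29))
```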
When more than a handful of observations are present, data graphics are often more revealing than tables. Always consider alternative representations to improve communication.
Tufte notes that the cardinal sin of the engineers was a failure to frame the data in answer to the question: *in relation to what?* The notion that certain data may be understood in relation to something is perhaps the fundamental and defining characteristic of statistical reasoning. We will follow this thread throughout the book.
Always ensure that graphical displays are clearly described with appropriate axis labels, additional text descriptions, and a caption.
We present this tragic episode in this chapter as motivation for a careful study of data visualization. It illustrates a critical truism for practicing data scientists: Being right isn’t enough—you have to be *convincing*. Note that Figure [2\.19](ch-vizI.html#fig:challenger2) contains the same data that are present in Figure [2\.17](ch-vizI.html#fig:tufte) but in a far less suggestive format. It just so happens that for most human beings, graphical explanations are particularly persuasive. Thus, to be a successful data analyst, one must master at least the basics of data visualization.
2\.4 Creating effective presentations
-------------------------------------
Giving effective presentations is an important skill for a data scientist. Whether these presentations are in academic conferences, in a classroom, in a boardroom, or even on stage, the ability to communicate to an audience is of immeasurable value. While some people may be naturally more comfortable in the limelight, everyone can improve the quality of their presentations.
A few pieces of general advice are warranted (Ludwig 2012\):
* **Budget your time**: Often you will only have a few minutes to speak and usually a few additional minutes to answer questions. If your talk runs too short or too long, it makes you seem unprepared. Rehearse your talk several times in order to get a better feel for your timing. Note also that you may have a tendency to talk faster during your actual talk than you will during your rehearsal. Talking faster in order to speed up is a bad strategy—you are much better off simply cutting material ahead of time. You will probably have a hard time getting through \\(x\\) slides in \\(x\\) minutes.
Talking faster in order to speed up is not a good strategy—you are much better off simply cutting material ahead of time or moving to a key slide or conclusion.
* **Don’t write too much on each slide**: You don’t want people to have to read your slides, because if the audience is reading your slides, then they aren’t listening to you. You want your slides to provide visual cues to the points that you are making—not substitute for your spoken words. Concentrate on graphical displays and bullet\-pointed lists of ideas.
* **Put your problem in context**: Remember that (in most cases) most of your audience will have little or no knowledge of your subject matter. The easiest way to lose people is to dive right into technical details that require prior domain knowledge. Spend a few minutes at the beginning of your talk introducing your audience to the most basic aspects of your topic and presenting some motivation for what you are studying.
* **Speak loudly and clearly**: Remember that (in most cases) you know more about your topic than anyone else in the room, so speak and act with confidence!
* **Tell a story, but not necessarily the whole story**: It is unrealistic to expect that you can tell your audience everything that you know about your topic in \\(x\\) minutes. You should strive to convey the big ideas in a clear fashion but not dwell on the details. Your talk will be successful if your audience is able to walk away with an understanding of what your research question was, how you addressed it, and what the implications of your findings are.
2\.5 The wider world of data visualization
------------------------------------------
Thus far our discussion of data visualization has been limited to static, two\-dimensional data graphics. However, there are many additional ways to visualize data. While Chapter [3](ch-vizII.html#ch:vizII) focuses on static data graphics, Chapter [14](ch-vizIII.html#ch:vizIII) presents several cutting\-edge tools for making interactive data visualizations.
Even more broadly, the field of [*visual analytics*](https://en.wikipedia.org/w/index.php?search=visual%20analytics) is concerned with the science behind building interactive visual interfaces that enhance one’s ability to reason about data.
Finally, we have [*data art*](https://en.wikipedia.org/w/index.php?search=data%20art).
You can do many things with data. On one end of the spectrum, you might be focused on predicting the outcome of a specific response variable. In such cases, your goal is very well\-defined and your success can be quantified. On the other end of the spectrum are projects called [*data art*](https://en.wikipedia.org/w/index.php?search=data%20art), wherein the meaning of what you are doing with the data is elusive, but the experience of viewing the data in a new way is in itself meaningful.
Consider [Memo Akten](https://en.wikipedia.org/w/index.php?search=Memo%20Akten) and [Quayola](https://en.wikipedia.org/w/index.php?search=Quayola)’s [*Forms*](http://www.memo.tv/forms/), which was inspired by the physical movement of athletes in the [*Commonwealth Games*](https://en.wikipedia.org/w/index.php?search=Commonwealth%20Games).
Through video analysis, these movements were translated into three\-dimensional digital objects shown in Figure [2\.20](ch-vizI.html#fig:forms). Note how the image in the upper\-left is evocative of a swimmer surfacing after a dive. When viewed as [a movie](https://vimeo.com/38421611), *Forms* is an arresting example of data art.
Watch [Forms (process)](https://vimeo.com/38421611) from [Memo Akten](https://vimeo.com/memotv) on [Vimeo](https://vimeo.com).
Figure 2\.20: Still images from *Forms*, by Memo Akten and Quayola. Each image represents an athletic movement made by a competitor at the Commonwealth Games, but reimagined as a collection of moving three\-dimensional digital objects. Reprinted with permission.
Successful data art projects require both artistic talent and technical ability. *Before Us is the Salesman’s House* is a live, continuously\-updating exploration of the online marketplace [*eBay*](https://en.wikipedia.org/w/index.php?search=eBay). [This installation](https://vimeo.com/50146828) was created by statistician [Mark Hansen](https://en.wikipedia.org/w/index.php?search=Mark%20Hansen) and digital artist [Jer Thorpe](https://en.wikipedia.org/w/index.php?search=Jer%20Thorpe) and is projected on a big screen as you enter eBay’s campus.
Watch [Before us is the Salesman’s House—Three Cycles](https://vimeo.com/50146828) from [blprnt](https://vimeo.com/user313340) on [Vimeo](https://vimeo.com).
The display begins by pulling up [Arthur Miller](https://en.wikipedia.org/w/index.php?search=Arthur%20Miller)’s classic play [*Death of a Salesman*](https://en.wikipedia.org/w/index.php?search=Death%20of%20a%20Salesman), and “reading” the text of the first chapter. Along the way, several nouns are plucked from the text (e.g., flute, refrigerator, chair, bed, trophy, etc.). For each in succession, the display then shifts to a geographic display of where things with that noun in the description are *currently* being sold on eBay, replete with price and auction information. (Note that these descriptions are not always perfect. In the video, a search for “refrigerator” turns up a T\-shirt of former [*Chicago Bears*](https://en.wikipedia.org/w/index.php?search=Chicago%20Bears) defensive end William \[Refrigerator] Perry.)
Next, one city where such an item is being sold is chosen, and any classic books of American literature being sold nearby are collected. One is chosen, and the cycle returns to the beginning by “reading” the first page of that book. This process continues indefinitely.
When describing the exhibit, Hansen spoke of “one data set reading another.” It is this interplay of data and literature that makes such data art projects so powerful.
Finally, we consider another [Mark Hansen](https://en.wikipedia.org/w/index.php?search=Mark%20Hansen) collaboration, this time with [Ben Rubin](https://en.wikipedia.org/w/index.php?search=Ben%20Rubin) and [Michele Gorman](https://en.wikipedia.org/w/index.php?search=Michele%20Gorman). In [*Shakespeare Machine*](https://vimeo.com/54858820), 37 digital LCD blades—each corresponding to one of [*Shakespeare*](https://en.wikipedia.org/w/index.php?search=Shakespeare)’s plays—are arrayed in a circle. The display on each blade is a pattern of words culled from the text of these plays. First, pairs of hyphenated words are shown. Next, Boolean pairs (e.g., “good or bad”) are found. Third, articles and adjectives modifying nouns (e.g., “the holy father”). In this manner, the artistic masterpieces of Shakespeare are shattered into formulaic chunks. In Chapter [19](ch-text.html#ch:text), we will learn how to use [*regular expressions*](https://en.wikipedia.org/w/index.php?search=regular%20expressions) to find the data for *Shakespeare Machine*.
Watch [Shakespeare Machine](https://vimeo.com/54858820) by [Ben Rubin, Mark Hansen, Michele Gorman](https://vimeo.com/c4sr) on [Vimeo](https://vimeo.com).
2\.6 Further resources
----------------------
While issues related to data visualization pervade this entire text, they will be the particular focus of Chapters [3](ch-vizII.html#ch:vizII) (Data visualization II), [14](ch-vizIII.html#ch:vizIII) (Data visualization III), and [17](ch-spatial.html#ch:spatial) (Geospatial data).
No education in data graphics is complete without reading Tufte’s [*Visual Display of Quantitative Information*](https://en.wikipedia.org/w/index.php?search=Visual%20Display%20of%20Quantitative%20Information) (Tufte 2001\), which also contains a description of [John Snow](https://en.wikipedia.org/w/index.php?search=John%20Snow)’s cholera map (see Chapter [17](ch-spatial.html#ch:spatial)). For a full description of the [*Challenger*](https://en.wikipedia.org/w/index.php?search=Challenger) incident, see (Tufte 1997\). Tufte has also published two other landmark books (Tufte 1990, 2006\), as well as reasoned polemics about the shortcomings of [*PowerPoint*](https://en.wikipedia.org/w/index.php?search=PowerPoint) (Tufte 2003\). Cleveland and McGill (1984\) provide the foundation for Yau’s taxonomy (Yau 2013\). Yau (2011\) provides many examples of thought\-provoking data visualizations, particularly data art.
The grammar of graphics was first described by Wilkinson et al. (2005\). Hadley Wickham (2016\) implemented **ggplot2** based on this formulation.
Many important data graphics were developed by Tukey (1990\). A. Gelman, Pasarica, and Dodhia (2002\) have also written persuasively about data graphics in statistical journals. Gelman discusses a set of canonical data graphics as well as Tufte’s suggested modifications to them. D. Nolan and Perrett (2016\) discuss data visualization assignments and rubrics that can be used to grade them.
Steven J. Murdoch has created some **R** functions for drawing the kind of [modified diagrams](http://www.cl.cam.ac.uk/~sjm217/projects/graphics/) described in Tufte (2001\). These also appear in the **ggthemes** package (Arnold 2019\).
Cynthia Brewer’s color palettes are available at [http://colorbrewer2\.org](http://colorbrewer2.org) and through the **RColorBrewer** package. Her work is described in more detail in Brewer (1994\) and Brewer (1999\). The **viridis** (Garnier 2021a) and **viridisLite** (Garnier 2021b) packages provide [*matplotlib*](https://en.wikipedia.org/w/index.php?search=matplotlib)\-like palettes for **R**. Ram and Wickham (2018\) created the whimsical color palette that evokes [Wes Anderson](https://en.wikipedia.org/w/index.php?search=Wes%20Anderson)’s distinctive movies.
[Technically Speaking](http://techspeaking.denison.edu/Technically_Speaking/Home.html) is an NSF\-funded project for presentation advice that contains instructional videos for students (Ludwig 2012\).
2\.7 Exercises
--------------
**Problem 1 (Easy)**: Consider the following data graphic.
The `am` variable takes the value `0` if the car has [automatic transmission](https://en.wikipedia.org/wiki/Automatic_transmission) and `1` if the car has [manual transmission](https://en.wikipedia.org/wiki/Manual_transmission).
How could you differentiate the cars in the graphic based on their transmission type?
**Problem 2 (Medium)**: Pick one of the Science Notebook entries at <https://www.edwardtufte.com/tufte> (e.g., “Making better inferences from statistical graphics”).
Write a brief reflection on the graphical principles that are illustrated by this entry.
**Problem 3 (Medium)**: Find two graphs published in a newspaper or on the internet in the last two years.
1. Identify a graphical display that you find compelling. What aspects of the display work well, and how do these relate to the principles established in this chapter? Include a screen shot of the display along with your solution.
2. Identify a graphical display that you find less than compelling. What aspects of the display don’t work well? Are there ways that the display might be improved? Include a screen shot of the display along with your solution.
**Problem 4 (Medium)**: Find two scientific papers from the last two years in a peer\-reviewed journal (*Nature* and *Science* are good choices).
1. Identify a graphical display that you find compelling. What aspects of the display work well, and how do these relate to the principles established in this chapter? Include a screen shot of the display along with your solution.
2. Identify a graphical display that you find less than compelling. What aspects of the display don’t work well? Are there ways that the display might be improved? Include a screen shot of the display along with your solution.
**Problem 5 (Medium)**: Consider the two graphics related to *The New York Times* “Taxmageddon” article at [http://www.nytimes.com/2012/04/15/sunday\-review/coming\-soon\-taxmageddon.html](http://www.nytimes.com/2012/04/15/sunday-review/coming-soon-taxmageddon.html). The first is [“Whose Tax Rates Rose or Fell”](http://www.nytimes.com/imagepages/2012/04/13/opinion/sunday/0415web-leonhardt.html) and the second is [“Who Gains Most From Tax Breaks.”](http://www.nytimes.com/imagepages/2012/04/13/opinion/sunday/0415web-leonhardt2.html)
1. Examine the two graphics carefully. Discuss what you think they convey. What story do the graphics tell?
2. Evaluate both graphics in terms of the taxonomy described in this chapter. Are the scales appropriate? Consistent? Clearly labeled? Do variable dimensions exceed data dimensions?
3. What, if anything, is misleading about these graphics?
**Problem 6 (Medium)**: Consider the data graphic [http://tinyurl.com/nytimes\-unplanned](http://tinyurl.com/nytimes-unplanned) about birth control methods.
1. What quantity is being shown on the \\(y\\)\-axis of each plot?
2. List the variables displayed in the data graphic, along with the units and a few typical values for each.
3. List the visual cues used in the data graphic and explain how each visual cue is linked to each variable.
4. Examine the graphic carefully. Describe, in words, what *information* you think the data graphic conveys. Do not just summarize the *data*—interpret the data in the context of the problem and tell us what it means.
(Note: *information* is meaningful to human beings—it is not the same thing as *data*.)
2\.8 Supplementary exercises
----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-vizI.html\#datavizI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-vizI.html#datavizI-online-exercises)
**Problem 1 (Easy)**: Consider the following data\-driven image, available for purchase at [NBA Playoff Rings](http://champsring.com/products/pro-basketball-2013): [https://cdn.shopify.com/s/files/1/0144/6552/products/NBA\-Basketball\-2013\-\_6\_1024x1024\.jpg](https://cdn.shopify.com/s/files/1/0144/6552/products/NBA-Basketball-2013-_6_1024x1024.jpg)
1. Identify the visual cues, coordinate system, and scale(s).
2. How many variables are depicted in the graphic? Explicitly link each variable to a visual cue that you listed above.
3. Critique this data graphic using the taxonomy described in this chapter.
**Problem 2 (Easy)**: 2016 ELECTION: Consider the following data graphic about results from the [2016 presidential election in Massachusetts](https://www.wbur.org/politicker/2016/11/08/massachusetts-election-map).
What type of color palette is used in this graphic?
**Problem 3 (Easy)**: Choose *one* of the data graphics listed at [http://mdsr\-book.github.io/exercises.html\#exercise\_23](http://mdsr-book.github.io/exercises.html#exercise_23)
and answer the following questions. Be sure to indicate which graphical display you picked.
2. [World’s Top 10 Best Selling Cigarette Brands, 2004\-2007](http://vizwiz.blogspot.com/2009/12/simple-is-better.html)
3. ~~[GNPD Usage by Food Categories](http://www.nrsmithdesign.com/wp-content/uploads/2012/10/data-graphic.jpg)~~
4. [UK University Rankings](http://static.guim.co.uk/sys-images/Guardian/Pix/maps_and_graphs/2010/9/7/1283876186403/Top-universities-graphic-001.jpg)
5. [Childhood Obesity in the US](http://www.sparkpe.org/blog/wp-content/uploads/2010/07/childhood-obesity-bmi.gif)
6. [Relationship between ages and psychosocial maturity](http://ars.els-cdn.com/content/image/1-s2.0-S1043276005002602-gr2.jpg)
1. Identify the visual cues, coordinate system, and scale(s).
2. How many variables are depicted in the graphic? Explicitly link each variable to a visual cue that you listed above.
3. Critique this data graphic using the taxonomy described in this chapter.
**Problem 4 (Medium)**: Answer the following questions for each of the following collections of data graphics listed at ([http://mdsr\-book.github.io/exercises.html\#exercise\_24](http://mdsr-book.github.io/exercises.html#exercise_24)).
1. [What is a Data Scientist?](http://tinyurl.com/what-is-datascientist)
2. [Charts that explain food in America](http://www.vox.com/a/explain-food-america)
Briefly (one paragraph) critique the designer’s choices. Would you have made different choices? Why or why not?
> Note: Each link contains a collection of many data graphics, and we don’t expect (or want) you to write a full report on each individual graphic.
> But each collection shares some common stylistic elements.
> You should comment on a few things that you notice about the design of the collection.
**Problem 5 (Medium)**: Consider one of the more complicated data graphics listed at ([http://mdsr\-book.github.io/exercises.html\#exercise\_25](http://mdsr-book.github.io/exercises.html#exercise_25)):
1. What story does the data graphic tell? What is the main message that you take away from it?
2. Can the data graphic be described in terms of the taxonomy presented in this chapter? If so, list the visual cues, coordinate system, and scales(s) as you did in Problem 2(a). If not, describe the feature of this data graphic that lies outside of that taxonomy.
3. Critique and/or praise the visualization choices made by the designer. Do they work? Are they misleading? Thought\-provoking? Brilliant? Are there things that you would have done differently? Justify your response.
---
2\.1 The 2012 federal election cycle
------------------------------------
Every four years, the presidential election draws an enormous amount of interest in the United States. The most prominent candidates announce their candidacy nearly two years before the November elections, beginning the process of raising the hundreds of millions of dollars necessary to orchestrate a national campaign. In many ways, the experience of running a successful presidential campaign is in itself evidence of the leadership and organizational skills necessary to be [*commander\-in\-chief*](https://en.wikipedia.org/w/index.php?search=commander-in-chief).
Voices from all parts of the political spectrum are critical of the influence of money upon political campaigns.
While the contributions from individual citizens to individual candidates are [limited in various ways](http://www.fec.gov/pages/brochures/citizens.shtml), the Supreme Court’s decision in [*Citizens United v. Federal Election Commission*](https://en.wikipedia.org/w/index.php?search=Citizens%20United%20v.%20Federal%20Election%20Commission) allows unlimited political spending by corporations (non\-profit or otherwise).
This has resulted in a system of committees (most notably, [*political action committees*](https://en.wikipedia.org/w/index.php?search=political%20action%20committees), PACs) that can accept unlimited contributions and spend them on behalf of (or against) a particular candidate or set of candidates.
Unraveling the complicated network of campaign spending is a subject [of great interest](http://www.nytimes.com/interactive/2015/10/11/us/politics/2016-presidential-election-super-pac-donors.html).
To perform that unraveling is an exercise in data science.
The [*Federal Election Commission*](https://en.wikipedia.org/w/index.php?search=Federal%20Election%20Commission) (FEC) maintains a website with logs of not only all of the ($200 or more) contributions made by individuals to candidates and committees, but also of spending by committees on behalf of (and against) candidates.
Of course, the FEC also maintains data on which candidates win elections, and by how much. These data sources are separate, and it requires some ingenuity to piece them together.
We will develop these skills in Chapters [4](ch-dataI.html#ch:dataI)–[6](ch-dataII.html#ch:dataII), but for now, we will focus on graphical displays of the information that can be gleaned from these data.
Our emphasis at this stage is on making intelligent decisions about how to display certain data, so that a clear (and correct) message is delivered.
Among the most basic questions is: How much money did each candidate raise?
However, the convoluted campaign finance network makes even this simple question difficult to answer, and—perhaps more importantly—less meaningful than we might think. A better question is: On whose candidacy was the most money spent? In Figure [2\.1](ch-vizI.html#fig:spent), we show a bar graph of the amount of money (in millions of dollars) that were spent by committees on particular candidates during the general election phase of the [*2012 federal election cycle*](https://en.wikipedia.org/w/index.php?search=2012%20federal%20election%20cycle). This includes candidates for president, the [*United States Senate*](https://en.wikipedia.org/w/index.php?search=United%20States%20Senate), and the [*United States House of Representatives*](https://en.wikipedia.org/w/index.php?search=United%20States%20House%20of%20Representatives). Only candidates on whose campaign at least $4 million was spent are included in Figure [2\.1](ch-vizI.html#fig:spent).
Figure 2\.1: Amount of money spent on individual candidates in the general election phase of the 2012 federal election cycle, in millions of dollars. Candidacies with at least $4 million in spending are depicted.
It seems clear from Figure [2\.1](ch-vizI.html#fig:spent) that President [Barack Obama](https://en.wikipedia.org/w/index.php?search=Barack%20Obama)’s re\-election campaign spent far more money than any other candidate, in particular more than doubling the amount of money spent by his [*Republican*](https://en.wikipedia.org/w/index.php?search=Republican) challenger, [Mitt Romney](https://en.wikipedia.org/w/index.php?search=Mitt%20Romney). However, committees are not limited to spending money in support of a candidate—they can also spend money **against** a particular candidate ([*attack ads*](https://en.wikipedia.org/w/index.php?search=attack%20ads)). In Figure [2\.2](ch-vizI.html#fig:spent-attack), we separate the same spending shown in Figure [2\.1](ch-vizI.html#fig:spent) by whether the money was spent for or against the candidate.
Figure 2\.2: Amount of money spent on individual candidates in the general election phase of the 2012 federal election cycle, in millions of dollars, broken down by type of spending. Candidacies with at least $4 million in spending are depicted.
In these elections, most of the money was spent against each candidate, and in particular, $251 million of the $274 million spent on President Obama’s campaign was spent against his candidacy. Similarly, most of the money spent on [Mitt Romney](https://en.wikipedia.org/w/index.php?search=Mitt%20Romney)’s campaign was against him, but the percentage of negative spending on Romney’s campaign (70%) was lower than that of Obama (92%).
The difference between Figure [2\.1](ch-vizI.html#fig:spent) and Figure [2\.2](ch-vizI.html#fig:spent-attack) is that in the latter we have used color to bring a third variable (type of spending) into the plot. This allows us to make a clear comparison that importantly changes the conclusions we might draw from the former plot. In particular, Figure [2\.1](ch-vizI.html#fig:spent) makes it appear as though President Obama’s [*war chest*](https://en.wikipedia.org/w/index.php?search=war%20chest) dwarfed that of Romney, when in fact the opposite was true.
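The jump from Figure 2\.1 to Figure 2\.2 can be expressed compactly in **ggplot2** by mapping the type of spending to the fill color of the bars. The sketch below uses a tiny made\-up data frame in place of the FEC data (which is introduced in later chapters), so the dollar amounts are placeholders.

```
library(ggplot2)

# Placeholder totals (millions of USD) for committee spending for or against
# each candidacy; these numbers are illustrative, not the actual FEC figures.
spending <- data.frame(
  candidate = rep(c("Obama", "Romney"), each = 2),
  type = rep(c("supporting", "against"), times = 2),
  millions = c(23, 251, 92, 211)
)

# Mapping fill to the type of spending brings the third variable into the plot.
ggplot(spending, aes(x = candidate, y = millions, fill = type)) +
  geom_col() +
  labs(x = NULL, y = "Money spent (millions of USD)", fill = "Spending")
```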
### 2\.1\.1 Are these two groups different?
Since so much more money was spent attacking Obama’s campaign than Romney’s, you might conclude from Figure [2\.2](ch-vizI.html#fig:spent-attack) that Republicans were more successful in fundraising during this election cycle. In Figure [2\.3](ch-vizI.html#fig:raised-party), we can confirm that this was indeed the case, since more money was spent supporting Republican candidates than Democrats, and more money was spent attacking Democratic candidates than Republican. It also seems clear from Figure [2\.3](ch-vizI.html#fig:raised-party) that nearly all of the money was spent on either Democrats or Republicans.[2](#fn2)
Figure 2\.3: Amount of money spent on individual candidacies by political party affiliation during the general election phase of the 2012 federal election cycle.
However, the question of whether the money spent on candidates really differed by party affiliation is a bit thornier. As we saw above, the presidential election dominated the political donations in this election cycle. Romney faced a serious disadvantage in trying to unseat an incumbent president. In this case, the office being sought is a confounding variable. By further subdividing the contributions in Figure [2\.3](ch-vizI.html#fig:raised-party) by the office being sought, we can see in Figure [2\.4](ch-vizI.html#fig:raised-office) that while more money was spent supporting Republican candidates for all elective branches of government, it was only in the presidential election that more money was spent attacking Democratic candidates. In fact, slightly more money was spent attacking Republican House and Senate candidates.
Figure 2\.4: Amount of money spent on individual candidacies by political party affiliation during the general election phase of the 2012 federal election cycle, broken down by office being sought (House, President, or Senate).
Note that Figures [2\.3](ch-vizI.html#fig:raised-party) and [2\.4](ch-vizI.html#fig:raised-office) display the same data. In Figure [2\.4](ch-vizI.html#fig:raised-office), we have an additional variable that provides an important clue into the mystery of campaign finance. Our choice to include that variable results in Figure [2\.4](ch-vizI.html#fig:raised-office) conveying substantially more meaning than Figure [2\.3](ch-vizI.html#fig:raised-party), even though both figures are “correct.” In this chapter, we will begin to develop a framework for creating principled data graphics.
### 2\.1\.2 Graphing variation
One theme that arose during the presidential election was the allegation that Romney’s campaign was supported by a few rich donors, whereas Obama’s support came from people across the economic spectrum. If this were true, then we would expect to see a difference in the distribution of donation amounts between the two candidates. In particular, we would expect to see this in the histograms shown in Figure [2\.5](ch-vizI.html#fig:donations), which summarize the more than one million donations made by individuals to the two major committees that supported each candidate (for Obama, Obama for America, and the Obama Victory Fund 2012; for Romney, Romney for President, and Romney Victory 2012\). We do see some evidence for this claim in Figure [2\.5](ch-vizI.html#fig:donations): Obama did appear to receive more small donations, but the evidence is far from conclusive. One problem is that both candidates received many small donations but just a few larger donations; the scale on the horizontal axis makes it difficult to actually see what is going on. Secondly, the histograms are hard to compare in a side\-by\-side placement. Finally, we have lumped all of the donations from both phases of the presidential election (i.e., primary vs. general) together.
Figure 2\.5: Donations made by individuals to the PACs supporting the two major presidential candidates in the 2012 election.
In Figure [2\.6](ch-vizI.html#fig:donations-split), we remedy these issues by (1\) using density curves instead of histograms, so that we can compare the distributions directly, (2\) plotting the logarithm of the donation amount on the horizontal scale to focus on the data that are important, and (3\) separating the donations by the phase of the election. Figure [2\.6](ch-vizI.html#fig:donations-split) allows us to make more nuanced conclusions. The right panel supports the allegation that Obama’s donations came from a broader base during the primary election phase. It does appear that more of Obama’s donations came in smaller amounts during this phase of the election. However, in the general phase, there is virtually no difference in the distribution of donations made to either campaign.
Figure 2\.6: Donations made by individuals to the PACs supporting the two major presidential candidates in the 2012 election, separated by election phase.
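The three remedies just described correspond directly to **ggplot2** building blocks: a density geometry, a logarithmic horizontal scale, and facets for the election phase. Here is a minimal sketch on synthetic donations, since the actual FEC records are not loaded until later chapters.

```
library(ggplot2)

# Synthetic stand-in for the donation records: one row per donation, with an
# amount in dollars, the candidate supported, and the phase of the election.
set.seed(1)
donations <- data.frame(
  amount = exp(rnorm(2000, mean = 4, sd = 1.25)),
  candidate = sample(c("Obama", "Romney"), 2000, replace = TRUE),
  phase = sample(c("Primary", "General"), 2000, replace = TRUE)
)

ggplot(donations, aes(x = amount, color = candidate)) +
  geom_density() + # density curves overlay cleanly, unlike side-by-side histograms
  scale_x_log10() + # a log scale spreads out the many small donations
  facet_wrap(~phase) + # one panel per phase of the election
  labs(x = "Donation amount (USD, log scale)", y = "Density")
```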
### 2\.1\.3 Examining relationships among variables
Naturally, the biggest questions raised by the *Citizens United* decision are about the influence of money in elections. If campaign spending is unlimited, does this mean that the candidate who generates the most spending on their behalf will earn the most votes? One way that we might address this question is to compare the amount of money spent on each candidate in each election with the number of votes that candidate earned. Statisticians will want to know the [*correlation*](https://en.wikipedia.org/w/index.php?search=correlation) between these two quantities—when one is high, is the other one likely to be high as well?
Since all 435 members of the United States House of Representatives are elected every two years, and the districts contain roughly the same number of people, House elections provide a nice data set to make this type of comparison. In Figure [2\.7](ch-vizI.html#fig:votes), we show a simple scatterplot relating the number of dollars spent on behalf of the Democratic candidate against the number of votes that candidate earned for each of the House elections.
Figure 2\.7: Scatterplot illustrating the relationship between number of dollars spent supporting and number of votes earned by Democrats in 2012 elections for the House of Representatives.
The relationship between the two quantities depicted in Figure [2\.7](ch-vizI.html#fig:votes) is very weak. It does not appear that candidates who benefited more from campaign spending earned more votes. However, the comparison in Figure [2\.7](ch-vizI.html#fig:votes) is misleading. On both axes, it is not the *amount* that is important, but the *proportion*. Although the population of each congressional district is similar, they are not the same, and voter turnout will vary based on a variety of factors. By comparing the proportion of the vote, we can control for the size of the voting population in each district. Similarly, it makes less sense to focus on the total amount of money spent, as opposed to the proportion of money spent. In Figure [2\.8](ch-vizI.html#fig:votes-better), we present the same comparison, but with both axes scaled to proportions.
Figure 2\.8: Scatterplot illustrating the relationship between proportion of dollars spent supporting and proportion of votes earned by Democrats in the 2012 House of Representatives elections. Each dot represents one district. The size of each dot is proportional to the total spending in that election, and the alpha transparency of each dot is proportional to the total number of votes in that district.
Figure [2\.8](ch-vizI.html#fig:votes-better) captures many nuances that were impossible to see in Figure [2\.7](ch-vizI.html#fig:votes). First, there *does* appear to be a positive association between the percentage of money supporting a candidate and the percentage of votes that they earn. However, that relationship is of greatest interest towards the center of the plot, where elections are actually contested. Outside of this region, one candidate wins more than 55% of the vote. In this case, there is usually very little money spent. These are considered “safe” House elections—you can see these points on the plot because most of them are close to \\(x\=0\\) or \\(x\=1\\), and the dots are very small. For example, one of the points in the lower\-left corner is the [*8th district in Ohio*](https://en.wikipedia.org/w/index.php?search=8th%20district%20in%20Ohio), which was won by the then [*Speaker of the House*](https://en.wikipedia.org/w/index.php?search=Speaker%20of%20the%20House) [John Boehner](https://en.wikipedia.org/w/index.php?search=John%20Boehner), who ran unopposed. The election in which the most money was spent (over $11 million) was also in Ohio. In the 16th district, Republican incumbent [Jim Renacci](https://en.wikipedia.org/w/index.php?search=Jim%20Renacci) narrowly defeated Democratic challenger [Betty Sutton](https://en.wikipedia.org/w/index.php?search=Betty%20Sutton), who was herself an incumbent from the 13th district. This battle was made possible through decennial redistricting (see Chapter [17](ch-spatial.html#ch:spatial)). Of the money spent in this election, 51\.2% was in support of Sutton but she earned only 48\.0% of the votes.
In the center of the plot, the dots are bigger, indicating that more money is being spent on these contested elections. Of course this makes sense, since candidates who are fighting for their political lives are more likely to fundraise aggressively. Nevertheless, the evidence that more financial support correlates with more votes in contested elections is relatively weak.
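A plot in the spirit of Figure 2\.8 needs four aesthetic mappings: position for the two proportions, size for total spending, and alpha for total votes. The sketch below fabricates district\-level data purely to show those mappings; the variable names and the built\-in relationship are inventions, not the FEC data.

```
library(ggplot2)

# Fabricated data: one row per district. The relationship wired in here is
# arbitrary; the point is only how the four variables map onto visual cues.
set.seed(2)
house <- data.frame(
  prop_spent = runif(435),
  total_spent = rexp(435, rate = 1 / 2e6),
  total_votes = round(runif(435, 150e3, 350e3))
)
house$prop_votes <- pmin(pmax(house$prop_spent + rnorm(435, sd = 0.15), 0), 1)

# Position encodes the proportions; size and alpha encode the two totals.
ggplot(house, aes(x = prop_spent, y = prop_votes,
                  size = total_spent, alpha = total_votes)) +
  geom_point() +
  labs(x = "Proportion of money supporting the Democrat",
       y = "Proportion of votes earned by the Democrat")
```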
### 2\.1\.4 Networks
Not all relationships among variables are sensibly expressed by a scatterplot. Another way in which variables can be related is in the form of a network (we will discuss these in more detail in Chapter [20](ch-netsci.html#ch:netsci)). In this case, campaign funding has a network structure in which individuals donate money to committees, and committees then spend money on behalf of candidates. While the national campaign funding network is far too complex to show here, in Figure [2\.9](ch-vizI.html#fig:ma-funding) we display the funding network for candidates from [*Massachusetts*](https://en.wikipedia.org/w/index.php?search=Massachusetts).
In Figure [2\.9](ch-vizI.html#fig:ma-funding), we see that the two campaigns that benefited the most from committee spending were Republicans [Mitt Romney](https://en.wikipedia.org/w/index.php?search=Mitt%20Romney) and [Scott Brown](https://en.wikipedia.org/w/index.php?search=Scott%20Brown). This is not surprising, since Romney was running for president and received massive donations from the Republican National Committee, while Brown was running to keep his Senate seat in a heavily Democratic state against a strong challenger, [Elizabeth Warren](https://en.wikipedia.org/w/index.php?search=Elizabeth%20Warren). Both men lost their elections. The constellation of blue dots represents the congressional delegation from Massachusetts, all of whom are Democrats.
Figure 2\.9: Campaign funding network for candidates from Massachusetts, 2012 federal elections. Each edge represents a contribution from a PAC to a candidate.
2\.2 Composing data graphics
----------------------------
Former *New York Times* intern and [FlowingData.com](https://flowingdata.com/) creator [Nathan Yau](https://en.wikipedia.org/w/index.php?search=Nathan%20Yau) makes the analogy that creating data graphics is like cooking: Anyone can learn to type graphical commands and generate plots on the computer. Similarly, anyone can heat up food in a microwave. What separates a high\-quality visualization from a plain one are the same elements that separate great chefs from novices: mastery of their tools, knowledge of their ingredients, insight, and creativity (Yau 2013\). In this section, we present a framework—rooted in scientific research—for understanding data graphics. Our hope is that by internalizing these ideas you will refine your data graphics palette.
### 2\.2\.1 A taxonomy for data graphics
The taxonomy presented in Yau (2013\) provides a systematic way of thinking about how data graphics convey specific pieces of information and how they could be improved. A complementary *grammar* of graphics (Wilkinson et al. 2005\) is implemented by [Hadley Wickham](https://en.wikipedia.org/w/index.php?search=Hadley%20Wickham) in the **ggplot2** graphics package (Hadley Wickham 2016\), albeit using slightly different terminology. For clarity, we will postpone discussion of **ggplot2** until Chapter [3](ch-vizII.html#ch:vizII). (To extend our cooking analogy, you must learn to taste before you can learn to cook well.)
In this framework, data graphics can be understood in terms of four basic elements: visual cues, coordinate systems, scale, and context. In what follows, we explicate this vision and append a few additional items (facets and layers). This section should equip the careful reader with the ability to systematically break down data graphics, enabling a more critical analysis of their content.
#### 2\.2\.1\.1 Visual Cues
Visual cues are graphical elements that draw the eye to what you want your audience to focus upon. They are the fundamental building blocks of data graphics, and the choice of which visual cues to use to represent which quantities is the central question for the data graphic composer. Table [2\.1](ch-vizI.html#tab:visual-cues) identifies nine distinct visual cues; for each, we also note whether it is used to encode a numerical or categorical quantity:
Table 2\.1: Visual cues and what they signify.
| Visual Cue | Variable Type | Question |
| --- | --- | --- |
| Position | numerical | where in relation to other things? |
| Length | numerical | how big (in one dimension)? |
| Angle | numerical | how wide? parallel to something else? |
| Direction | numerical | at what slope? in a time series, going up or down? |
| Shape | categorical | belonging to which group? |
| Area | numerical | how big (in two dimensions)? |
| Volume | numerical | how big (in three dimensions)? |
| Shade | either | to what extent? how severely? |
| Color | either | to what extent? how severely? |
Research into graphical perception (dating back to the mid\-1980s) has shown that human beings’ ability to perceive differences in magnitude accurately descends in this order (Cleveland and McGill 1984\). That is, humans are quite good at accurately perceiving differences in position (e.g., how much taller one bar is than another), but not as good at perceiving differences in angles. This is one reason why many people prefer bar charts to [*pie charts*](https://en.wikipedia.org/w/index.php?search=pie%20charts). Our relatively poor ability to perceive differences in color is a major factor in the relatively low opinion of [*heat maps*](https://en.wikipedia.org/w/index.php?search=heat%20maps) that many data scientists have.
#### 2\.2\.1\.2 Coordinate systems
How are the data points organized? While any number of coordinate systems are possible, three are most common:
* **Cartesian**: The familiar \\((x,y)\\)\-rectangular coordinate system with two perpendicular axes.
* **Polar**: The radial analog of the Cartesian system with points identified by their radius \\(\\rho\\) and angle \\(\\theta\\).
* **Geographic**: The increasingly important system in which we have locations on the curved surface of the Earth, but we are trying to represent these locations in a flat two\-dimensional plane. We will discuss such geospatial analyses in Chapter [17](ch-spatial.html#ch:spatial).
An appropriate choice for a coordinate system is critical in representing one’s data accurately, since, for example, displaying geospatial data like airline routes on a flat [*Cartesian plane*](https://en.wikipedia.org/w/index.php?search=Cartesian%20plane) can lead to gross distortions of reality (see Section [17\.3\.2](ch-spatial.html#sec:projections)).
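To see how much work the coordinate system does, it can help to render the same marks in two systems. The sketch below (with arbitrary data) draws a bar chart in the default Cartesian system and then again in polar coordinates, where each bar becomes an arc.

```
library(ggplot2)

counts <- data.frame(group = c("A", "B", "C"), n = c(10, 20, 30))

p <- ggplot(counts, aes(x = group, y = n, fill = group)) +
  geom_col()

p # Cartesian: length along the vertical axis encodes n
p + coord_polar(theta = "y") # polar: the same lengths become angular extents
```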
#### 2\.2\.1\.3 Scale
Scales translate values into visual cues.
The choice of scale is often crucial.
The central question is *how* does distance in the data graphic translate into meaningful differences in quantity? Each coordinate axis can have its own scale, for which we have three different choices:
* **Numeric**: A numeric quantity is most commonly set on a *linear*, *logarithmic*, or *percentage* scale. Note that a logarithmic scale does not have the property that, say, a one\-centimeter difference in position corresponds to an equal difference in quantity anywhere on the scale.
* **Categorical**: A categorical variable may have no ordering (e.g., Democrat, Republican, or Independent), or it may be *ordinal* (e.g., never, former, or current smoker).
* **Time**: A numeric quantity that has some special properties. First, because of the calendar, it can be demarcated by a series of different units (e.g., year, month, day, etc.). Second, it can be considered periodically (or cyclically) as a “wrap\-around” scale. Time is also so commonly used and misused that it warrants careful consideration.
Misleading with scale is easy, since it has the potential to completely distort the relative positions of data points in any graphic.
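As a small illustration of how scale can mislead, compare a bar chart drawn on its natural scale with the same chart zoomed so that the vertical axis no longer starts at zero. The data are arbitrary; the 2% difference between the groups looks enormous in the second version.

```
library(ggplot2)

d <- data.frame(group = c("A", "B"), value = c(100, 102))
p <- ggplot(d, aes(x = group, y = value)) +
  geom_col()

p # honest scale: the bars look nearly identical
p + coord_cartesian(ylim = c(99, 103)) # truncated axis: a 2% gap looks dramatic
```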
#### 2\.2\.1\.4 Context
The purpose of data graphics is to help the viewer make *meaningful* comparisons, but a bad data graphic can do just the opposite: It can instead focus the viewer’s attention on meaningless artifacts, or ignore crucial pieces of relevant but external knowledge. Context can be added to data graphics in the form of titles or subtitles that explain what is being shown, axis labels that make it clear how units and scale are depicted, or reference points or lines that contribute relevant external information. While one should avoid cluttering up a data graphic with excessive annotations, it is necessary to provide proper context.
#### 2\.2\.1\.5 Small multiples and layers
One of the fundamental challenges of creating data graphics is condensing multivariate information into a two\-dimensional image. While three\-dimensional images are occasionally useful, they are often more confusing than anything else. Instead, here are three common ways of incorporating more variables into a two\-dimensional data graphic:
* **Small multiples**: Also known as [*facets*](https://en.wikipedia.org/w/index.php?search=facets), a single data graphic can be composed of several small multiples of the same basic plot, with one (discrete) variable changing in each of the small sub\-images.
* **Layers**: It is sometimes appropriate to draw a new layer on top of an existing data graphic. This new layer can provide context or comparison, but there is a limit to how many layers humans can reliably parse (a sketch combining layers with small multiples follows this list).
* **Animation**: If time is the additional variable, then an animation can sometimes effectively convey changes in that variable. Of course, this doesn’t work on the printed page and makes it impossible for the user to see all the data at once.
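A minimal sketch of small multiples and layers together, using the `mpg` data set that ships with **ggplot2**: a smoother layer is drawn on top of the points, and the plot is split into facets by one discrete variable.

```
library(ggplot2)

ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() + # base layer: the raw observations
  geom_smooth(se = FALSE) + # second layer: a smoother for context
  facet_wrap(~drv) # small multiples: one panel per drive type
```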
### 2\.2\.2 Color
Color is one of the flashiest, but most misperceived and misused visual cues.
In making color choices, there are a few key ideas that are important for any data scientist to understand.
First, as we saw above, color and its monochromatic cousin *shade* are two of the most poorly perceived visual cues. Thus, while potentially useful for a small number of levels of a categorical variable, color and shade are not particularly faithful ways to represent numerical variables—especially if small differences in those quantities are important to distinguish. This means that while color can be visually appealing to humans, it often isn’t as informative as we might hope. For two numeric variables, it is hard to think of examples where color and shade would be more useful than position. Where color can be most effective is to represent a *third* or *fourth* numeric quantity on a scatterplot—once the two position cues have been exhausted.
Second, approximately 8% of the population—most of whom are men—have some form of color blindness. Most commonly, this renders them incapable of seeing colors accurately, most notably of distinguishing between red and green. Compounding the problem, many of these people do not know that they are color\-blind. Thus, for professional graphics it is worth thinking carefully about which colors to use. The [*National Football League*](https://en.wikipedia.org/w/index.php?search=National%20Football%20League) famously failed to account for this in [a 2015 game](http://espn.go.com/nfl/story/_/id/14116795/newyork-jets-buffalo-bills-jerseys-problematic-colorblind-fans) in which the [*Buffalo Bills*](https://en.wikipedia.org/w/index.php?search=Buffalo%20Bills) wore all\-red jerseys and the [*New York Jets*](https://en.wikipedia.org/w/index.php?search=New%20York%20Jets) wore all\-green, leaving colorblind fans unable to distinguish one team from the other!
To prevent issues with color blindness, avoid contrasting red with green in data graphics. As a bonus, your plots won’t seem Christmas\-y!
Thankfully, we have been freed from the burden of having to create such intelligent palettes by the research of [Cynthia Brewer](https://en.wikipedia.org/w/index.php?search=Cynthia%20Brewer), creator of the [ColorBrewer](http://colorbrewer2.org/) website (and inspiration for the **RColorBrewer** **R** package).
Brewer has created colorblind\-safe palettes in a variety of hues for three different types of numeric data in a single variable:
* **Sequential**: The ordering of the data has only one direction. Positive integers are sequential because they are bounded below by 0 and can only increase from there. (Thus, if 0 is encoded as white, then any darker shade of gray indicates a larger number.)
* **Diverging**: The ordering of the data has two directions. In an election forecast, we commonly see states colored based on how they are expected to vote for the president. Since red is associated with Republicans and blue with Democrats, states that are solidly red or blue are on opposite ends of the scale. But “swing states” that could go either way may appear purple, white, or some other neutral color that is “between” red and blue (see Figure [2\.10](ch-vizI.html#fig:brewer2)).
* **Qualitative**: There is no ordering of the data, and we simply need color to differentiate different categories.
Figure 2\.10: Diverging red\-blue color palette.
The **RColorBrewer** package provides functionality to use these palettes directly in **R**. Figure [2\.11](ch-vizI.html#fig:brewer) illustrates the sequential, qualitative, and diverging palettes built into **RColorBrewer**.
Figure 2\.11: Palettes available through the **RColorBrewer** package.
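As a minimal sketch (assuming **RColorBrewer** is installed), `brewer.pal()` returns a vector of hex colors from a named palette, with one palette family for each of the three data types described above, and `display.brewer.all()` draws a chart like Figure 2\.11:

```
library(RColorBrewer)

# One palette of each type
brewer.pal(n = 9, name = "Blues") # sequential: light-to-dark blues
brewer.pal(n = 11, name = "RdBu") # diverging: red through white to blue
brewer.pal(n = 8, name = "Set2")  # qualitative: distinct hues, no ordering

# Show only the colorblind-safe palettes
display.brewer.all(colorblindFriendly = TRUE)
```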
Take the extra time to use a well\-designed color palette. Accept that those who work with color for a living will probably choose better colors than you.
Other excellent perceptually distinct color palettes are provided by the **viridis** package. These palettes mimic those that are used in the [*matplotlib*](https://en.wikipedia.org/w/index.php?search=matplotlib) plotting library for [*Python*](https://en.wikipedia.org/w/index.php?search=Python). The **viridis** palettes are also accessible in **ggplot2** through, for example, the `scale_color_viridis()` function.
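A quick sketch of the **viridis** palettes (assuming the package is installed); each function returns perceptually uniform, colorblind\-friendly hex colors that can be passed to any plotting function:

```
library(viridis)

viridis(5) # default palette: dark purple through green to yellow
magma(5)   # a darker alternative in the same family
plasma(5)  # another perceptually uniform option

# Within ggplot2 (Chapter 3), scale_color_viridis() applies these
# palettes to a color aesthetic.
```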
### 2\.2\.3 Dissecting data graphics
With a little practice, one can learn to dissect data graphics in terms of the taxonomy outlined above. For example, your basic scatterplot uses *position* in the *Cartesian* plane with *linear* scales to show the relationship between two variables. In what follows, we identify the visual cues, coordinate system, and scale in a series of simple data graphics.
1. The bar graph in Figure [2\.12](ch-vizI.html#fig:sat-dot) displays the average score on the math portion of the 1994–1995 [*SAT*](https://en.wikipedia.org/w/index.php?search=SAT) (with possible scores ranging from 200 to 800\) among states in which at least two\-thirds of the students took the SAT.
Figure 2\.12: Bar graph of average SAT scores among states with at least two\-thirds of students taking the test.
This plot uses the visual cue of [*length*](https://en.wikipedia.org/w/index.php?search=length) to represent the math SAT score on the vertical axis with a *linear* scale. The [*categorical*](https://en.wikipedia.org/w/index.php?search=categorical) variable of `state` is arrayed on the horizontal axis. Although the states are ordered alphabetically, it would not be appropriate to consider the `state` variable to be ordinal, since the ordering is not meaningful in the context of math SAT scores. The coordinate system is *Cartesian*, although as noted previously, the horizontal coordinate is meaningless. Context is provided by the axis labels and title. Note also that since 200 is the minimum score possible on each section of the SAT, the vertical axis has been constrained to start at 200\.
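A rough sketch of how a bar graph along these lines might be built in base **R**. It assumes the `SAT` data set from the **mosaicData** package, with columns `state`, `math`, and `frac` (the percentage of students taking the test); the data and styling behind Figure 2\.12 may differ.

```
library(mosaicData)

# Keep states where at least two-thirds of students took the SAT
sat_subset <- subset(SAT, frac >= 200 / 3)

# The vertical axis is constrained to start at 200, the minimum
# possible score, rather than at zero
barplot(
  height = sat_subset$math,
  names.arg = sat_subset$state,
  ylim = c(200, max(sat_subset$math) + 20),
  xpd = FALSE, # clip the bars at the lower axis limit
  las = 2,     # rotate the state labels
  ylab = "Average math SAT score"
)
```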
2. Next, we consider a time series that shows the progression of the world record times in the [*100\-meter freestyle*](https://en.wikipedia.org/w/index.php?search=100-meter%20freestyle) swimming event for men and women. Figure [2\.13](ch-vizI.html#fig:swimgg) displays the times as a function of the year in which the new record was set.
Figure 2\.13: Scatterplot of world record time in 100\-m freestyle swimming.
At some level this is simply a scatterplot that uses [*position*](https://en.wikipedia.org/w/index.php?search=position) on both the vertical and horizontal axes to indicate swimming time and chronological time, respectively, in a [*Cartesian plane*](https://en.wikipedia.org/w/index.php?search=Cartesian%20plane). The numeric scale on the vertical axis is linear, in units of seconds, while the scale on the horizontal axis is also linear, measured in years. But there is more going on here. Color is being used as a visual cue to distinguish the categorical variable `sex`. Furthermore, since the points are connected by lines, [*direction*](https://en.wikipedia.org/w/index.php?search=direction) is being used to indicate the progression of the record times. (In this case, the records can only get faster, so the direction is always down.) One might even argue that [*angle*](https://en.wikipedia.org/w/index.php?search=angle) is being used to compare the descent of the world records across time and/or gender. In fact, in this case [*shape*](https://en.wikipedia.org/w/index.php?search=shape) is also being used to distinguish `sex`.
3. Next, we present two pie charts in Figure [2\.14](ch-vizI.html#fig:pie) indicating the different substances of abuse for subjects in the [*Health Evaluation and Linkage to Primary Care*](https://en.wikipedia.org/w/index.php?search=Health%20Evaluation%20and%20Linkage%20to%20Primary%20Care) (HELP) clinical trial (“Linking Alcohol and Drug Dependent Adults to Primary Medical Care: A Randomized Controlled Trial of a Multidisciplinary Health Intervention in a Detoxification Unit” 2003\).
Each subject was identified with involvement with one primary substance (alcohol, cocaine, or heroin).
On the right, we see that the distribution of substance among housed participants (no nights in shelter or on the street) is fairly even, while on the left, we see the same distribution for those who were homeless one or more nights, who were more likely to have alcohol as their primary substance of abuse.
Figure 2\.14: Pie charts showing the breakdown of substance of abuse among HELP study participants, faceted by homeless status. Compare this to Figure [3\.13](ch-vizII.html#fig:stacked-bar).
This graphic uses a [*radial*](https://en.wikipedia.org/w/index.php?search=radial) coordinate system and the visual cue of [*color*](https://en.wikipedia.org/w/index.php?search=color) to distinguish the three levels of the [*categorical*](https://en.wikipedia.org/w/index.php?search=categorical) variable `substance`.
The visual cue of [*angle*](https://en.wikipedia.org/w/index.php?search=angle) is being used to quantify the differences in the proportion of patients using each substance.
Are you able to accurately identify these percentages from the figure? The actual percentages are shown as follows.
```
# A tibble: 3 × 3
substance Homeless Housed
<fct> <chr> <chr>
1 alcohol n = 103 (49.3%) n = 74 (30.3%)
2 cocaine n = 59 (28.2%) n = 93 (38.1%)
3 heroin n = 47 (22.5%) n = 77 (31.6%)
```
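The proportions above could be computed along these lines. This is a sketch assuming the `HELPrct` data from the **mosaicData** package, which records `substance` and `homeless` status for each HELP participant:

```
library(dplyr)
library(mosaicData)

HELPrct %>%
  group_by(homeless, substance) %>%
  summarize(n = n(), .groups = "drop_last") %>%
  mutate(pct = round(100 * n / sum(n), 1)) %>% # percentages within each housing status
  arrange(homeless, substance)
```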
This is a case where a simple table of these proportions is more effective at communicating the true differences than this—and probably any—data graphic. Note that there are only six data points presented, so any graphic is probably gratuitous.
Don’t use pie charts, except perhaps in small multiples.
4. Finally, in Figure [2\.15](ch-vizI.html#fig:choropleth-ma) we present a [*choropleth map*](https://en.wikipedia.org/w/index.php?search=choropleth%20map) showing the population of Massachusetts by the 2010 Census tracts.
Figure 2\.15: Choropleth map of population among Massachusetts Census tracts, based on 2018 American Community Survey.
Clearly, we are using a *geographic* coordinate system here, with [*latitude*](https://en.wikipedia.org/w/index.php?search=latitude) and [*longitude*](https://en.wikipedia.org/w/index.php?search=longitude) on the vertical and horizontal axes, respectively. (This plot is not projected: More information about projection systems is provided in Chapter [17](ch-spatial.html#ch:spatial).) *Shade* is once again being used to represent the quantity `population`, but here the scale is more complicated. The ten shades of blue have been mapped to the [*decile*](https://en.wikipedia.org/w/index.php?search=decile)s of the census tract populations, and since the distribution of population across these tracts is [*right\-skewed*](https://en.wikipedia.org/w/index.php?search=right-skewed), each shade does not correspond to a range of people of the same width, but rather to the same number of tracts that have a population in that range. Helpful context is provided by the title, subtitle, and legend.
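The decile\-based shading can be sketched as follows: compute the decile breakpoints of the tract populations, cut the variable into ten equal\-count bins, and map each bin to one shade of blue. The `population` vector here is simulated to stand in for the Census tract data:

```
# Simulated right-skewed tract populations (stand-in for the real data)
set.seed(1)
population <- rlnorm(1478, meanlog = 8.3, sdlog = 0.5)

# Decile breakpoints: equal numbers of tracts per bin,
# not equal-width ranges of population
breaks <- quantile(population, probs = seq(0, 1, by = 0.1))
decile <- cut(population, breaks = breaks, include.lowest = TRUE)

# Ten sequential blues, one per decile
shades <- colorRampPalette(RColorBrewer::brewer.pal(9, "Blues"))(10)
tract_color <- shades[as.integer(decile)]

table(decile) # roughly equal counts in each bin
```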
2\.3 Importance of data graphics: *Challenger*
----------------------------------------------
On January 27th, 1986, engineers at [*Morton Thiokol*](https://en.wikipedia.org/w/index.php?search=Morton%20Thiokol), who supplied solid rocket motors (SRMs) to [*NASA*](https://en.wikipedia.org/w/index.php?search=NASA) for the [*space shuttle*](https://en.wikipedia.org/w/index.php?search=space%20shuttle), recommended that NASA delay the launch of the space shuttle [*Challenger*](https://en.wikipedia.org/w/index.php?search=Challenger) due to concerns that the cold weather forecast for the next day’s launch would jeopardize the stability of the rubber [*O\-rings*](https://en.wikipedia.org/w/index.php?search=O-rings) that held the rockets together. These engineers provided 13 charts that were reviewed over a two\-hour conference call involving the engineers, their managers, and NASA. The engineers’ recommendation was overruled due to a lack of persuasive evidence, and the launch proceeded on schedule. The O\-rings failed in exactly the manner the engineers had feared 73 seconds after launch, [*Challenger* exploded](https://www.youtube.com/watch?v=j4JOjcDFtBE), and all seven astronauts on board died (Tufte 1997\).
In addition to the tragic loss of life, the incident was a devastating blow to NASA and the United States space program. The hand\-wringing that followed included a two\-and\-a\-half year hiatus for NASA and the formation of the [*Rogers Commission*](https://en.wikipedia.org/w/index.php?search=Rogers%20Commission) to study the disaster. What became clear is that the Morton Thiokol engineers had correctly identified the key causal link between *temperature* and *O\-ring damage*. They did this using statistical data analysis combined with a plausible physical explanation: in short, that the rubber O\-rings became brittle in low temperatures. (This link was famously demonstrated by legendary physicist and Rogers Commission member [Richard Feynman](https://en.wikipedia.org/w/index.php?search=Richard%20Feynman) during the hearings, using a glass of water and some ice cubes (Tufte 1997\).) Thus, the engineers were able to identify the critical weakness using their [*domain knowledge*](https://en.wikipedia.org/w/index.php?search=domain%20knowledge)—in this case, rocket science—and their data analysis.
Their failure—and its horrific consequences—was one of persuasion: They simply did not present their evidence in a convincing manner to the NASA officials who ultimately made the decision to proceed with the launch. More than 30 years later, this tragedy remains critically important. The evidence brought to the discussions about whether to launch was in the form of handwritten data tables (or “charts”), but none were graphical. In his sweeping critique of the incident, [Edward Tufte](https://en.wikipedia.org/w/index.php?search=Edward%20Tufte) created a powerful scatterplot similar to Figures [2\.16](ch-vizI.html#fig:tufte0) and [2\.17](ch-vizI.html#fig:tufte), which were derived from data that the engineers had at the time, but in a far more effective presentation (Tufte 1997\).
Figure 2\.16: A scatterplot with smoother demonstrating the relationship between temperature and O\-ring damage on solid rocket motors. The dots are semi\-transparent, so that darker dots indicate multiple observations with the same values.
Figure [2\.16](ch-vizI.html#fig:tufte0) indicates a clear relationship between the ambient temperature and O\-ring damage on the solid rocket motors. To demonstrate the dramatic extrapolation made to the predicted temperature on January 27th, 1986, Tufte extended the horizontal axis in his scatterplot (Figure [2\.17](ch-vizI.html#fig:tufte)) to include the forecast temperature. The huge gap makes plain the problem with extrapolation.
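A hedged sketch of such a scatterplot in base **R**, assuming the `orings` data from the **faraway** package (one row per prior launch, with ambient temperature `temp` in degrees Fahrenheit and the count of damaged O\-rings `damage`). Tufte's figure uses a somewhat different damage index, so this only approximates Figures 2\.16 and 2\.17:

```
library(faraway)
data(orings)

# Extending the horizontal axis down to the forecast launch temperature
# (26 to 29 degrees F) makes the extrapolation plain
plot(
  damage ~ temp, data = orings,
  xlim = c(25, 85), ylim = c(0, 6),
  xlab = "Temperature (degrees F) at time of launch",
  ylab = "Number of damaged O-rings",
  pch = 19, col = rgb(0, 0, 0, alpha = 0.5) # semi-transparent points
)
abline(v = c(26, 29), lty = 2) # forecast range for January 28, 1986
```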
Reprints of two Morton Thiokol data graphics are shown in Figures [2\.18](ch-vizI.html#fig:challenger1) and [2\.19](ch-vizI.html#fig:challenger2) (Tufte 1997\).
Figure 2\.17: A recreation of Tufte’s scatterplot demonstrating the relationship between temperature and O\-ring damage on solid rocket motors.
Figure 2\.18: One of the original 13 charts presented by Morton Thiokol engineers to NASA on the conference call the night before the [*Challenger*](https://en.wikipedia.org/w/index.php?search=Challenger) launch. This is one of the more data\-intensive charts.
Figure 2\.19: Evidence presented during the congressional hearings after the [*Challenger*](https://en.wikipedia.org/w/index.php?search=Challenger) explosion.
Tufte provides a full critique of the engineers’ failures (Tufte 1997\), many of which are instructive for data scientists.
* **Lack of authorship**: There were no names on any of the charts. This creates a lack of accountability. No single person was willing to take responsibility for the data contained in any of the charts. It is much easier to refute an argument made by a group of nameless people than one made by a named individual or group.
* **Univariate analysis**: The engineers provided several data tables, but all were essentially univariate. That is, they presented data on a single variable, but did not illustrate the relationship between two variables. Note that while Figure [2\.18](ch-vizI.html#fig:challenger1) does show data for two different variables, it is very hard to see the connection between the two in tabular form. Since the crucial connection here was between temperature and O\-ring damage, this lack of bivariate analysis was probably the single most damaging omission in the engineers’ presentation.
* **Anecdotal evidence**: With such a small sample size, anecdotal evidence can be particularly challenging to refute. In this case, a bogus comparison was made based on two observations. While the engineers argued that SRM\-15 had the most damage on the coldest previous launch date (see Figure [2\.17](ch-vizI.html#fig:tufte)), NASA officials were able to counter that SRM\-22 had the second\-most damage on one of the warmer launch dates. These anecdotal pieces of evidence fall apart when all of the data are considered in context—in Figure [2\.17](ch-vizI.html#fig:tufte), it is clear that SRM\-22 is an outlier that deviates from the general pattern—but the engineers never presented all of the data in context.
* **Omitted data**: For some reason, the engineers chose not to present data from 22 other flights, which collectively represented 92% of launches. This may have been due to time constraints. This dramatic reduction in the accumulated evidence played a role in enabling the anecdotal evidence outlined above.
* **Confusion**: No doubt working against the clock, and most likely working in tandem, the engineers were not always clear about two different types of damage: *erosion* and *blow\-by*. A failure to clearly define these terms may have hindered understanding on the part of NASA officials.
* **Extrapolation**: Most forcefully, the failure to include a simple scatterplot of the full data obscured the “stupendous extrapolation” (Tufte 1997\) necessary to justify the launch. The bottom line was that the forecast launch temperature (between 26 and 29 degrees [*Fahrenheit*](https://en.wikipedia.org/w/index.php?search=Fahrenheit)) was so much colder than anything that had occurred previously that any model for O\-ring damage as a function of temperature would be untested.
When more than a handful of observations are present, data graphics are often more revealing than tables. Always consider alternative representations to improve communication.
Tufte notes that the cardinal sin of the engineers was a failure to frame the data *in relation to what*? The notion that certain data may be understood in relation to something is perhaps the fundamental and defining characteristic of statistical reasoning. We will follow this thread throughout the book.
Always ensure that graphical displays are clearly described with appropriate axis labels, additional text descriptions, and a caption.
We present this tragic episode in this chapter as motivation for a careful study of data visualization. It illustrates a critical truism for practicing data scientists: Being right isn’t enough—you have to be *convincing*. Note that Figure [2\.19](ch-vizI.html#fig:challenger2) contains the same data that are present in Figure [2\.17](ch-vizI.html#fig:tufte) but in a far less suggestive format. It just so happens that for most human beings, graphical explanations are particularly persuasive. Thus, to be a successful data analyst, one must master at least the basics of data visualization.
2\.4 Creating effective presentations
-------------------------------------
Giving effective presentations is an important skill for a data scientist. Whether these presentations are in academic conferences, in a classroom, in a boardroom, or even on stage, the ability to communicate to an audience is of immeasurable value. While some people may be naturally more comfortable in the limelight, everyone can improve the quality of their presentations.
A few pieces of general advice are warranted (Ludwig 2012\):
* **Budget your time**: Often you will only have a few minutes to speak and usually a few additional minutes to answer questions. If your talk runs too short or too long, it makes you seem unprepared. Rehearse your talk several times in order to get a better feel for your timing. Note also that you may have a tendency to talk faster during your actual talk than you will during your rehearsal. Talking faster in order to speed up is a bad strategy—you are much better off simply cutting material ahead of time. You will probably have a hard time getting through \\(x\\) slides in \\(x\\) minutes.
Talking faster in order to speed up is not a good strategy—you are much better off simply cutting material ahead of time or moving to a key slide or conclusion.
* **Don’t write too much on each slide**: You don’t want people to have to read your slides, because if the audience is reading your slides, then they aren’t listening to you. You want your slides to provide visual cues to the points that you are making—not substitute for your spoken words. Concentrate on graphical displays and bullet\-pointed lists of ideas.
* **Put your problem in context**: Remember that (in most cases) most of your audience will have little or no knowledge of your subject matter. The easiest way to lose people is to dive right into technical details that require prior domain knowledge. Spend a few minutes at the beginning of your talk introducing your audience to the most basic aspects of your topic and presenting some motivation for what you are studying.
* **Speak loudly and clearly**: Remember that (in most cases) you know more about your topic than anyone else in the room, so speak and act with confidence!
* **Tell a story, but not necessarily the whole story**: It is unrealistic to expect that you can tell your audience everything that you know about your topic in \\(x\\) minutes. You should strive to convey the big ideas in a clear fashion but not dwell on the details. Your talk will be successful if your audience is able to walk away with an understanding of what your research question was, how you addressed it, and what the implications of your findings are.
2\.5 The wider world of data visualization
------------------------------------------
Thus far our discussion of data visualization has been limited to static, two\-dimensional data graphics. However, there are many additional ways to visualize data. While Chapter [3](ch-vizII.html#ch:vizII) focuses on static data graphics, Chapter [14](ch-vizIII.html#ch:vizIII) presents several cutting\-edge tools for making interactive data visualizations.
Even more broadly, the field of [*visual analytics*](https://en.wikipedia.org/w/index.php?search=visual%20analytics) is concerned with the science behind building interactive visual interfaces that enhance one’s ability to reason about data.
Finally, we have [*data art*](https://en.wikipedia.org/w/index.php?search=data%20art).
You can do many things with data. On one end of the spectrum, you might be focused on predicting the outcome of a specific response variable. In such cases, your goal is very well\-defined and your success can be quantified. On the other end of the spectrum are projects called [*data art*](https://en.wikipedia.org/w/index.php?search=data%20art), wherein the meaning of what you are doing with the data is elusive, but the experience of viewing the data in a new way is in itself meaningful.
Consider [Memo Akten](https://en.wikipedia.org/w/index.php?search=Memo%20Akten) and [Quayola](https://en.wikipedia.org/w/index.php?search=Quayola)’s [*Forms*](http://www.memo.tv/forms/), which was inspired by the physical movement of athletes in the [*Commonwealth Games*](https://en.wikipedia.org/w/index.php?search=Commonwealth%20Games).
Through video analysis, these movements were translated into three\-dimensional digital objects shown in Figure [2\.20](ch-vizI.html#fig:forms). Note how the image in the upper\-left is evocative of a swimmer surfacing after a dive. When viewed as [a movie](https://vimeo.com/38421611), *Forms* is an arresting example of data art.
Watch [Forms (process)](https://vimeo.com/38421611) from [Memo Akten](https://vimeo.com/memotv) on [Vimeo](https://vimeo.com).
Figure 2\.20: Still images from *Forms*, by Memo Akten and Quayola. Each image represents an athletic movement made by a competitor at the Commonwealth Games, but reimagined as a collection of moving three\-dimensional digital objects. Reprinted with permission.
Successful data art projects require both artistic talent and technical ability. *Before Us is the Salesman’s House* is a live, continuously\-updating exploration of the online marketplace [*eBay*](https://en.wikipedia.org/w/index.php?search=eBay). [This installation](https://vimeo.com/50146828) was created by statistician [Mark Hansen](https://en.wikipedia.org/w/index.php?search=Mark%20Hansen) and digital artist [Jer Thorpe](https://en.wikipedia.org/w/index.php?search=Jer%20Thorpe) and is projected on a big screen as you enter eBay’s campus.
Watch [Before us is the Salesman’s House—Three Cycles](https://vimeo.com/50146828) from [blprnt](https://vimeo.com/user313340) on [Vimeo](https://vimeo.com).
The display begins by pulling up [Arthur Miller](https://en.wikipedia.org/w/index.php?search=Arthur%20Miller)’s classic play [*Death of a Salesman*](https://en.wikipedia.org/w/index.php?search=Death%20of%20a%20Salesman), and “reading” the text of the first chapter. Along the way, several nouns are plucked from the text (e.g., flute, refrigerator, chair, bed, trophy, etc.). For each in succession, the display then shifts to a geographic display of where things with that noun in the description are *currently* being sold on eBay, replete with price and auction information. (Note that these descriptions are not always perfect. In the video, a search for “refrigerator” turns up a T\-shirt of former [*Chicago Bears*](https://en.wikipedia.org/w/index.php?search=Chicago%20Bears) defensive end William \[Refrigerator] Perry.)
Next, one city where such an item is being sold is chosen, and any classic books of American literature being sold nearby are collected. One is chosen, and the cycle returns to the beginning by “reading” the first page of that book. This process continues indefinitely.
When describing the exhibit, Hansen spoke of “one data set reading another.” It is this interplay of data and literature that makes such data art projects so powerful.
Finally, we consider another [Mark Hansen](https://en.wikipedia.org/w/index.php?search=Mark%20Hansen) collaboration, this time with [Ben Rubin](https://en.wikipedia.org/w/index.php?search=Ben%20Rubin) and [Michele Gorman](https://en.wikipedia.org/w/index.php?search=Michele%20Gorman). In [*Shakespeare Machine*](https://vimeo.com/54858820), 37 digital LCD blades—each corresponding to one of [*Shakespeare*](https://en.wikipedia.org/w/index.php?search=Shakespeare)’s plays—are arrayed in a circle. The display on each blade is a pattern of words culled from the text of these plays. First, pairs of hyphenated words are shown. Next, Boolean pairs (e.g., “good or bad”) are found. Third, articles and adjectives modifying nouns (e.g., “the holy father”). In this manner, the artistic masterpieces of Shakespeare are shattered into formulaic chunks. In Chapter [19](ch-text.html#ch:text), we will learn how to use [*regular expressions*](https://en.wikipedia.org/w/index.php?search=regular%20expressions) to find the data for *Shakespeare Machine*.
Watch [Shakespeare Machine](https://vimeo.com/54858820) by [Ben Rubin, Mark Hansen, Michele Gorman](https://vimeo.com/c4sr) on [Vimeo](https://vimeo.com).
2\.6 Further resources
----------------------
While issues related to data visualization pervade this entire text, they will be the particular focus of Chapters [3](ch-vizII.html#ch:vizII) (Data visualization II), [14](ch-vizIII.html#ch:vizIII) (Data visualization III), and [17](ch-spatial.html#ch:spatial) (Geospatial data).
No education in data graphics is complete without reading Tufte’s [*Visual Display of Quantitative Information*](https://en.wikipedia.org/w/index.php?search=Visual%20Display%20of%20Quantitative%20Information) (Tufte 2001\), which also contains a description of [John Snow](https://en.wikipedia.org/w/index.php?search=John%20Snow)’s cholera map (see Chapter [17](ch-spatial.html#ch:spatial)). For a full description of the [*Challenger*](https://en.wikipedia.org/w/index.php?search=Challenger) incident, see (Tufte 1997\). Tufte has also published two other landmark books (Tufte 1990, 2006\), as well as reasoned polemics about the shortcomings of [*PowerPoint*](https://en.wikipedia.org/w/index.php?search=PowerPoint) (Tufte 2003\). Cleveland and McGill (1984\) provide the foundation for Yau’s taxonomy (Yau 2013\). Yau (2011\) provides many examples of thought\-provoking data visualizations, particularly data art.
The grammar of graphics was first described by Wilkinson et al. (2005\). Hadley Wickham (2016\) implemented **ggplot2** based on this formulation.
Many important data graphics were developed by Tukey (1990\). A. Gelman, Pasarica, and Dodhia (2002\) have also written persuasively about data graphics in statistical journals. Gelman discusses a set of canonical data graphics as well as Tufte’s suggested modifications to them. D. Nolan and Perrett (2016\) discuss data visualization assignments and rubrics that can be used to grade them.
Steven J. Murdoch has created some **R** functions for drawing the kind of [modified diagrams](http://www.cl.cam.ac.uk/~sjm217/projects/graphics/) described in Tufte (2001\). These also appear in the **ggthemes** package (Arnold 2019\).
Cynthia Brewer’s color palettes are available at [http://colorbrewer2\.org](http://colorbrewer2.org) and through the **RColorBrewer** package. Her work is described in more detail in Brewer (1994\) and Brewer (1999\). The **viridis** (Garnier 2021a) and **viridisLite** (Garnier 2021b) packages provide [*matplotlib*](https://en.wikipedia.org/w/index.php?search=matplotlib)\-like palettes for **R**. Ram and Wickham (2018\) created the whimsical color palette that evokes [Wes Anderson](https://en.wikipedia.org/w/index.php?search=Wes%20Anderson)’s distinctive movies.
[Technically Speaking](http://techspeaking.denison.edu/Technically_Speaking/Home.html) is an NSF\-funded project for presentation advice that contains instructional videos for students (Ludwig 2012\).
2\.7 Exercises
--------------
**Problem 1 (Easy)**: Consider the following data graphic.
The `am` variable takes the value `0` if the car has [automatic transmission](https://en.wikipedia.org/wiki/Automatic_transmission) and `1` if the car has [manual transmission](https://en.wikipedia.org/wiki/Manual_transmission).
How could you differentiate the cars in the graphic based on their transmission type?
**Problem 2 (Medium)**: Pick one of the Science Notebook entries at <https://www.edwardtufte.com/tufte> (e.g., “Making better inferences from statistical graphics”).
Write a brief reflection on the graphical principles that are illustrated by this entry.
**Problem 3 (Medium)**: Find two graphs published in a newspaper or on the internet in the last two years.
1. Identify a graphical display that you find compelling. What aspects of the display work well, and how do these relate to the principles established in this chapter? Include a screen shot of the display along with your solution.
2. Identify a graphical display that you find less than compelling. What aspects of the display don’t work well? Are there ways that the display might be improved? Include a screen shot of the display along with your solution.
**Problem 4 (Medium)**: Find two scientific papers from the last two years in a peer\-reviewed journal (*Nature* and *Science* are good choices).
1. Identify a graphical display that you find compelling. What aspects of the display work well, and how do these relate to the principles established in this chapter? Include a screen shot of the display along with your solution.
2. Identify a graphical display that you find less than compelling. What aspects of the display don’t work well? Are there ways that the display might be improved? Include a screen shot of the display along with your solution.
**Problem 5 (Medium)**: Consider the two graphics related to *The New York Times* “Taxmageddon” article at [http://www.nytimes.com/2012/04/15/sunday\-review/coming\-soon\-taxmageddon.html](http://www.nytimes.com/2012/04/15/sunday-review/coming-soon-taxmageddon.html). The first is [“Whose Tax Rates Rose or Fell”](http://www.nytimes.com/imagepages/2012/04/13/opinion/sunday/0415web-leonhardt.html) and the second is [“Who Gains Most From Tax Breaks.”](http://www.nytimes.com/imagepages/2012/04/13/opinion/sunday/0415web-leonhardt2.html)
1. Examine the two graphics carefully. Discuss what you think they convey. What story do the graphics tell?
2. Evaluate both graphics in terms of the taxonomy described in this chapter. Are the scales appropriate? Consistent? Clearly labeled? Do variable dimensions exceed data dimensions?
3. What, if anything, is misleading about these graphics?
**Problem 6 (Medium)**: Consider the data graphic [http://tinyurl.com/nytimes\-unplanned](http://tinyurl.com/nytimes-unplanned) about birth control methods.
1. What quantity is being shown on the \\(y\\)\-axis of each plot?
2. List the variables displayed in the data graphic, along with the units and a few typical values for each.
3. List the visual cues used in the data graphic and explain how each visual cue is linked to each variable.
4. Examine the graphic carefully. Describe, in words, what *information* you think the data graphic conveys. Do not just summarize the *data*—interpret the data in the context of the problem and tell us what it means.
(Note: *information* is meaningful to human beings—it is not the same thing as *data*.)
2\.8 Supplementary exercises
----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-vizI.html\#datavizI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-vizI.html#datavizI-online-exercises)
**Problem 1 (Easy)**: Consider the following data\-driven image, available for purchase at [NBA Playoff Rings](http://champsring.com/products/pro-basketball-2013): [https://cdn.shopify.com/s/files/1/0144/6552/products/NBA\-Basketball\-2013\-\_6\_1024x1024\.jpg](https://cdn.shopify.com/s/files/1/0144/6552/products/NBA-Basketball-2013-_6_1024x1024.jpg)
1. Identify the visual cues, coordinate system, and scale(s).
2. How many variables are depicted in the graphic? Explicitly link each variable to a visual cue that you listed above.
3. Critique this data graphic using the taxonomy described in this chapter.
**Problem 2 (Easy)**: 2016 ELECTION: Consider the following data graphic about results from the [2016 presidential election in Massachusetts](https://www.wbur.org/politicker/2016/11/08/massachusetts-election-map).
What type of color palette is used in this graphic?
**Problem 3 (Easy)**: Choose *one* of the data graphics listed at [http://mdsr\-book.github.io/exercises.html\#exercise\_23](http://mdsr-book.github.io/exercises.html#exercise_23)
and answer the following questions. Be sure to indicate which graphical display you picked.
2. [World’s Top 10 Best Selling Cigarette Brands, 2004\-2007](http://vizwiz.blogspot.com/2009/12/simple-is-better.html)
3. ~~[GNPD Usage by Food Categories](http://www.nrsmithdesign.com/wp-content/uploads/2012/10/data-graphic.jpg)~~
4. [UK University Rankings](http://static.guim.co.uk/sys-images/Guardian/Pix/maps_and_graphs/2010/9/7/1283876186403/Top-universities-graphic-001.jpg)
5. [Childhood Obesity in the US](http://www.sparkpe.org/blog/wp-content/uploads/2010/07/childhood-obesity-bmi.gif)
6. [Relationship between ages and psychosocial maturity](http://ars.els-cdn.com/content/image/1-s2.0-S1043276005002602-gr2.jpg)
1. Identify the visual cues, coordinate system, and scale(s).
2. How many variables are depicted in the graphic? Explicitly link each variable to a visual cue that you listed above.
3. Critique this data graphic using the taxonomy described in this chapter.
**Problem 4 (Medium)**: Answer the following questions for each of the following collections of data graphics listed at ([http://mdsr\-book.github.io/exercises.html\#exercise\_24](http://mdsr-book.github.io/exercises.html#exercise_24)).
1. [What is a Data Scientist?](http://tinyurl.com/what-is-datascientist)
2. [Charts that explain food in America](http://www.vox.com/a/explain-food-america)
Briefly (one paragraph) critique the designer’s choices. Would you have made different choices? Why or why not?
> Note: Each link contains a collection of many data graphics, and we don’t expect (or want) you to write a full report on each individual graphic.
> But each collection shares some common stylistic elements.
> You should comment on a few things that you notice about the design of the collection.
**Problem 5 (Medium)**: Consider one of the more complicated data graphics listed at ([http://mdsr\-book.github.io/exercises.html\#exercise\_25](http://mdsr-book.github.io/exercises.html#exercise_25)):
1. What story does the data graphic tell? What is the main message that you take away from it?
2. Can the data graphic be described in terms of the taxonomy presented in this chapter? If so, list the visual cues, coordinate system, and scales(s) as you did in Problem 2(a). If not, describe the feature of this data graphic that lies outside of that taxonomy.
3. Critique and/or praise the visualization choices made by the designer. Do they work? Are they misleading? Thought\-provoking? Brilliant? Are there things that you would have done differently? Justify your response.
---
| Data Science |
mdsr-book.github.io | https://mdsr-book.github.io/mdsr2e/ch-vizII.html |
Chapter 3 A grammar for graphics
================================
In Chapter [2](ch-vizI.html#ch:vizI), we presented a taxonomy for understanding data graphics. In this chapter, we illustrate how the **ggplot2** package can be used to create data graphics. Other
packages for creating static, two\-dimensional data graphics in **R** include **base** graphics and the **lattice** system.
We employ the **ggplot2** system because it provides a unifying framework—a grammar—for describing and specifying graphics.
The grammar for specifying graphics will allow the creation of custom data graphics that support visual display in a purposeful way.
We note that while the terminology used in **ggplot2** is not the same as the taxonomy we outlined in Chapter [2](ch-vizI.html#ch:vizI), there are many close parallels, which we will make explicit.
3\.1 A grammar for data graphics
--------------------------------
The **ggplot2** package is one of the many creations of prolific **R** programmer [Hadley Wickham](https://en.wikipedia.org/w/index.php?search=Hadley%20Wickham). It has become one of the most widely\-used **R** packages, in no small part because of the way it builds data graphics incrementally from small pieces of code.
In the grammar of **ggplot2**, an [*aesthetic*](https://en.wikipedia.org/w/index.php?search=aesthetic) is an explicit mapping between a variable and the visual cues that represent its values.
A [*glyph*](https://en.wikipedia.org/w/index.php?search=glyph) is the basic graphical element that represents one case (other terms used include “mark” and “symbol”). In a scatterplot, the *positions* of a glyph on the plot—in both the horizontal and vertical senses—are the [*visual cues*](https://en.wikipedia.org/w/index.php?search=visual%20cues) that help the viewer understand how big the corresponding quantities are. The *aesthetic* is the mapping that defines these correspondences. When more than two variables are present, additional aesthetics can marshal additional visual cues. Note also that some visual cues (like *direction* in a time series) are implicit and do not have a corresponding aesthetic.
For many of the chapters in this book, the first step in following these examples will be to load the **mdsr** package, which contains many of the data sets referenced in this book.
In addition, we load the **tidyverse** package, which in turn loads **dplyr** and **ggplot2**. (For more information about the **mdsr** package see Appendix [A](ch-mdsr.html#ch:mdsr).
If you are using **R** for the first time, please see Appendix [B](ch-R.html#ch:R) for an introduction.)
```
library(mdsr)
library(tidyverse)
```
If you want to learn how to use a particular command, we highly recommend running the example code on your own.
We begin with a data set that includes measures that are relevant to answer questions about economic productivity.
The `CIACountries` data table contains seven variables collected for each of 236 countries: population (`pop`), area (`area`), gross domestic product (`gdp`), percentage of GDP spent on education (`educ`), length of roadways per unit area (`roadways`), Internet use as a fraction of the population (`net_users`), and the number of barrels of oil produced per day (`oil_prod`).
Table [3\.1](ch-vizII.html#tab:countrydata) displays a selection of variables for the first six countries.
Table 3\.1: A selection of variables from the first six rows of the CIA countries data table.
| country | oil\_prod | gdp | educ | roadways | net\_users |
| --- | --- | --- | --- | --- | --- |
| Afghanistan | 0 | 1900 | NA | 0\.065 | \>5% |
| Albania | 20510 | 11900 | 3\.3 | 0\.626 | \>35% |
| Algeria | 1420000 | 14500 | 4\.3 | 0\.048 | \>15% |
| American Samoa | 0 | 13000 | NA | 1\.211 | NA |
| Andorra | NA | 37200 | NA | 0\.684 | \>60% |
| Angola | 1742000 | 7300 | 3\.5 | 0\.041 | \>15% |
### 3\.1\.1 Aesthetics
In the simple scatterplot shown in Figure [3\.1](ch-vizII.html#fig:simple-glyph), we employ the grammar of graphics to build a multivariate data graphic. In **ggplot2**, the `ggplot()` command creates a plot object `g`, and any arguments to that function are applied across any subsequent plotting directives. In this case, this means that any variables mentioned anywhere in the plot are understood to be within the `CIACountries` data frame, since we have specified that in the `data` argument. Graphics in **ggplot2** are built incrementally by elements. In this case, the only glyphs added are points, which are plotted using the `geom_point()` function. The arguments to `geom_point()` specify *where* and *how* the points are drawn. Here, the two *aesthetics* (`aes()`) map the vertical (`y`) coordinate to the `gdp` variable, and the horizontal (`x`) coordinate to the `educ` variable. The `size` argument to `geom_point()` changes the size of all of the glyphs. Note that here, every dot is the same size. Thus, size is *not* an aesthetic, since it does not map a variable to a visual cue.
Since each case (i.e., row in the data frame) is a country, each dot represents one country.
```
g <- ggplot(data = CIACountries, aes(y = gdp, x = educ))
g + geom_point(size = 3)
```
Figure 3\.1: Scatterplot using only the position aesthetic for glyphs.
In Figure [3\.1](ch-vizII.html#fig:simple-glyph) the glyphs are simple.
Only position in the frame distinguishes one glyph from another.
The shape, size, etc. of all of the glyphs are identical—there is nothing about the glyph itself that identifies the country.
However, it is possible to use a glyph with several attributes. We can define additional aesthetics to create new visual cues.
In Figure [3\.2](ch-vizII.html#fig:net-use-color), we have extended the previous example by mapping the color of each dot to the categorical `net_users` variable.
```
g + geom_point(aes(color = net_users), size = 3)
```
Figure 3\.2: Scatterplot in which `net_users` is mapped to color.
Changing the glyph is as simple as changing the function that draws that glyph—the aesthetic can often be kept exactly the same. In Figure [3\.3](ch-vizII.html#fig:country-labels), we plot text instead of a dot.
```
g + geom_text(aes(label = country, color = net_users), size = 3)
```
Figure 3\.3: Scatterplot using both location and label as aesthetics.
Of course, we can employ multiple aesthetics. There are four aesthetics in Figure [3\.4](ch-vizII.html#fig:four-variables).
Each of the four aesthetics is set in correspondence with a variable—we say the variable is [*mapped*](https://en.wikipedia.org/w/index.php?search=mapped) to the aesthetic.
Educational attainment is being mapped to horizontal position, GDP to vertical position, Internet connectivity to color, and length of roadways to size.
Thus, we encode four variables (`gdp`, `educ`, `net_users`, and `roadways`) using the visual cues of position, position, color, and area, respectively.
```
g + geom_point(aes(color = net_users, size = roadways))
```
Figure 3\.4: Scatterplot in which `net_users` is mapped to color and `roadways` is mapped to size. Compare this graphic to Figure [3\.7](ch-vizII.html#fig:facet-internet), which displays the same data using facets.
A data table provides the basis for drawing a data graphic.
The relationship between a data table and a graphic is simple: Each case (row) in the data table becomes a mark in the graph (we will return to the notion of [*glyph\-ready data*](https://en.wikipedia.org/w/index.php?search=glyph-ready%20data) in Chapter [6](ch-dataII.html#ch:dataII)).
As the designer of the graphic, you choose which variables the graphic will display and how each variable is to be represented graphically: position, size, color, and so on.
### 3\.1\.2 Scales
Compare Figure [3\.4](ch-vizII.html#fig:four-variables) to Figure [3\.5](ch-vizII.html#fig:log-scale).
In the former, it is hard to discern differences in GDP due to its right\-skewed distribution and the choice of a *linear* scale. In the latter, the *logarithmic* scale on the vertical axis makes the scatterplot more readable.
Of course, this makes interpreting the plot more complex, so we must be very careful when doing so.
Note that the only difference in the code is the addition of the `coord_trans()` directive.
```
g +
geom_point(aes(color = net_users, size = roadways)) +
coord_trans(y = "log10")
```
Figure 3\.5: Scatterplot using a logarithmic transformation of GDP that helps to mitigate visual clustering caused by the right\-skewed distribution of GDP among countries.
Scales can also be manipulated in **ggplot2** using any of the scale functions.
For example, instead of using the `coord_trans()` function as we did above, we could have achieved a similar plot through the use of the `scale_y_continuous()` function, as illustrated in Figure [3\.6](ch-vizII.html#fig:log-scale2).
In either case, the points will be drawn in the same location—the difference in the two plots is how and where the major tick marks and axis labels are drawn. We prefer to use `coord_trans()` in Figure [3\.5](ch-vizII.html#fig:log-scale) because it draws attention to the use of the log scale (compare with Figure [3\.6](ch-vizII.html#fig:log-scale2)).
Similarly\-named functions (e.g., `scale_x_continuous()`, `scale_x_discrete()`, `scale_color_discrete()`, etc.) perform analogous operations on different aesthetics.
```
g +
geom_point(aes(color = net_users, size = roadways)) +
scale_y_continuous(
name = "Gross Domestic Product",
trans = "log10",
labels = scales::comma
)
```
Figure 3\.6: Scatterplot using a logarithmic transformation of GDP. The use of a log scale on the \\(y\\)\-axis is less obvious than it is in Figure [3\.5](ch-vizII.html#fig:log-scale) due to the uniformly\-spaced horizontal grid lines.
Not all scales are about position.
For instance, in Figure [3\.4](ch-vizII.html#fig:four-variables), `net_users` is translated to color.
Similarly, `roadways` is translated to size: the largest dot corresponds to a value of 30 roadways per unit area.
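As a rough illustration (reusing the `g` object defined above), these non\-position scales can be adjusted with their own scale functions; the palette name and size range below are arbitrary choices for demonstration, not settings used elsewhere in this chapter.
```
g +
  geom_point(aes(color = net_users, size = roadways)) +
  scale_color_brewer(palette = "Set2") +
  scale_size_continuous(range = c(1, 10))
```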
### 3\.1\.3 Guides
Context is provided by *guides* (more commonly called legends).
A guide helps a human reader understand the meaning of the visual cues by providing context.
For position visual cues, the most common sort of guide is the familiar axis with its tick marks and labels. But other guides exist. In Figures [3\.4](ch-vizII.html#fig:four-variables) and [3\.5](ch-vizII.html#fig:log-scale), legends relate how dot color corresponds to internet connectivity, and how dot size corresponds to length of roadways (note the use of a log scale). The `geom_text()` and `geom_label()` functions can also be used to provide specific textual annotations on the plot. Examples of how to use these functions for annotations are provided in Section [3\.3](ch-vizII.html#sec:babynames).
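As a minimal sketch of adjusting guides (the legend titles and placement below are illustrative choices, not taken from the original figures), the `labs()` and `theme()` functions can rename and relocate legends:
```
g +
  geom_point(aes(color = net_users, size = roadways)) +
  labs(
    color = "Fraction of population\nusing the Internet",
    size = "Roadways per unit area"
  ) +
  theme(legend.position = "bottom")
```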
### 3\.1\.4 Facets
Using multiple aesthetics such as shape, color, and size to display multiple variables can produce a confusing, hard\-to\-read graph.
[*Facets*](https://en.wikipedia.org/w/index.php?search=Facets)—multiple side\-by\-side graphs used to display levels of a categorical variable—provide a simple and effective alternative.
Figure [3\.7](ch-vizII.html#fig:facet-internet) uses facets to show different levels of Internet connectivity, providing a better view than Figure [3\.4](ch-vizII.html#fig:four-variables).
There are two functions that create facets: `facet_wrap()` and `facet_grid()`.
The former creates a facet for each level of a single categorical variable, whereas the latter creates a facet for each combination of two categorical variables, arranging them in a grid.
```
g +
geom_point(alpha = 0.9, aes(size = roadways)) +
coord_trans(y = "log10") +
facet_wrap(~net_users, nrow = 1) +
theme(legend.position = "top")
```
Figure 3\.7: Scatterplot using facets for different ranges of Internet connectivity.
### 3\.1\.5 Layers
On occasion, data from more than one data table are graphed together.
For example, the `MedicareCharges` and `MedicareProviders` data tables provide information about the average cost of each medical procedure in each state.
If you live in [*New Jersey*](https://en.wikipedia.org/w/index.php?search=New%20Jersey), you might wonder how providers in your state charge for different medical procedures.
However, you will certainly want to understand those averages in the context of the averages across all states. In the `MedicareCharges` table, each row represents a different medical procedure (`drg`) with its associated average cost in each state.
We also create a second data table called `ChargesNJ`, which contains only those rows corresponding to providers in the state of New Jersey. Do not worry if these commands aren’t familiar—we will learn these in Chapter [4](ch-dataI.html#ch:dataI).
```
ChargesNJ <- MedicareCharges %>%
filter(stateProvider == "NJ")
```
The first few rows from the data table for New Jersey are shown in Table [3\.2](ch-vizII.html#tab:drg-NJ). This glyph\-ready table (see Chapter [6](ch-dataII.html#ch:dataII)) can be translated to a chart (Figure [3\.8](ch-vizII.html#fig:compare-NJ)) using bars to represent the average charges for different medical procedures in New Jersey. The `geom_col()` function creates a separate bar for each of the 100 different medical procedures.
Table 3\.2: Glyph\-ready data for the barplot layer.
| drg | stateProvider | num\_charges | mean\_charge |
| --- | --- | --- | --- |
| 039 | NJ | 31 | 35104 |
| 057 | NJ | 55 | 45692 |
| 064 | NJ | 55 | 87042 |
| 065 | NJ | 59 | 59576 |
| 066 | NJ | 56 | 45819 |
| 069 | NJ | 61 | 41917 |
| 074 | NJ | 41 | 42993 |
| 101 | NJ | 58 | 42314 |
| 149 | NJ | 50 | 34916 |
| 176 | NJ | 36 | 58941 |
```
p <- ggplot(
data = ChargesNJ,
aes(x = reorder(drg, mean_charge), y = mean_charge)
) +
geom_col(fill = "gray") +
ylab("Statewide Average Charges ($)") +
xlab("Medical Procedure (DRG)") +
theme(axis.text.x = element_text(angle = 90, hjust = 1, size = rel(0.5)))
p
```
Figure 3\.8: Bar graph of average charges for medical procedures in New Jersey.
How do the charges in New Jersey compare to those in other states? The two data tables, one for New Jersey and one for the whole country, can be plotted with different glyph types: bars for New Jersey and dots for the states across the whole country as in Figure [3\.9](ch-vizII.html#fig:compare-NJ-2).
```
p + geom_point(data = MedicareCharges, size = 1, alpha = 0.3)
```
Figure 3\.9: Bar graph adding a second layer to provide a comparison of New Jersey to other states. Each dot represents one state, while the bars represent New Jersey.
With the context provided by the individual states, it is easy to see that the charges in New Jersey are among the highest in the country for each medical procedure.
3\.2 Canonical data graphics in **R**
-------------------------------------
Over time, statisticians have developed standard data graphics for specific use cases (Tukey 1990\).
While these data graphics are not always mesmerizing, they are hard to beat for simple effectiveness.
Every data scientist should know how to make and interpret these canonical data graphics—they are ignored at your peril.
### 3\.2\.1 Univariate displays
It is generally useful to understand how a single variable is distributed. If that variable is numeric, then its distribution is commonly summarized graphically using a [*histogram*](https://en.wikipedia.org/w/index.php?search=histogram) or [*density plot*](https://en.wikipedia.org/w/index.php?search=density%20plot). Using the **ggplot2** package, we can display either plot for the `math` variable in the `SAT_2010` data frame by binding the `math` variable to the `x` aesthetic.
```
g <- ggplot(data = SAT_2010, aes(x = math))
```
Then we only need to choose either `geom_histogram()` or `geom_density()`. Both Figures [3\.10](ch-vizII.html#fig:SAT-1) and [3\.11](ch-vizII.html#fig:SAT-2) convey the same information, but whereas the histogram uses pre\-defined bins to create a discrete distribution, a density plot uses a [*kernel smoother*](https://en.wikipedia.org/w/index.php?search=kernel%20smoother) to make a continuous curve.
```
g + geom_histogram(binwidth = 10) + labs(x = "Average math SAT score")
```
Figure 3\.10: Histogram showing the distribution of math SAT scores by state.
Note that the `binwidth` argument is being used to specify the width of bins in the histogram.
Here, each bin contains a 10\-point range of SAT scores.
In general, the appearance of a histogram can vary considerably based on the choice of bins, and there is no one “best” choice (Lunzer and McNamara 2017\).
You will have to decide what bin width is most appropriate for your data.
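For instance, a much coarser binning (the value of 30 below is an arbitrary choice for illustration) conveys a noticeably different impression of the same distribution:
```
g + geom_histogram(binwidth = 30) + labs(x = "Average math SAT score")
```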
```
g + geom_density(adjust = 0.3)
```
Figure 3\.11: Density plot showing the distribution of average math SAT scores by state.
Similarly, in the density plot shown in Figure [3\.11](ch-vizII.html#fig:SAT-2) we use the `adjust` argument to modify the [*bandwidth*](https://en.wikipedia.org/w/index.php?search=bandwidth) being used by the kernel smoother. In the taxonomy defined above, a density plot uses position and direction in a Cartesian plane with a horizontal scale defined by the units in the data.
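By way of comparison, a larger `adjust` value (2 here, chosen arbitrarily) increases the bandwidth and yields a smoother, less detailed curve:
```
g + geom_density(adjust = 2) + labs(x = "Average math SAT score")
```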
If your variable is categorical, it doesn’t make sense to think about the values as having a continuous density. Instead, we can use a [*bar graph*](https://en.wikipedia.org/w/index.php?search=bar%20graph) to display the distribution of a categorical variable.
To make a simple bar graph for `math`, identifying each bar by the label `state`, we use the `geom_col()` command, as displayed in Figure [3\.12](ch-vizII.html#fig:bar2). Note that we add a few wrinkles to this plot. First, we use the `head()` function to display only the first 10 states (in alphabetical order). Second, we use the `reorder()` function to sort the state names in order of their average `math` SAT score.
```
ggplot(
data = head(SAT_2010, 10),
aes(x = reorder(state, math), y = math)
) +
geom_col() +
labs(x = "State", y = "Average math SAT score")
```
Figure 3\.12: A bar plot showing the distribution of average math SAT scores for a selection of states.
As noted earlier, we recommend against the use of pie charts to display the distribution of a categorical variable since, in most cases, a table of frequencies is more informative. An informative graphical display can be achieved using a [*stacked bar plot*](https://en.wikipedia.org/w/index.php?search=stacked%20bar%20plot), such as the one shown in Figure [3\.13](ch-vizII.html#fig:stacked-bar) using the `geom_bar()` function. Note that we have used the `coord_flip()` function to display the bars horizontally instead of vertically.
```
ggplot(data = mosaicData::HELPrct, aes(x = homeless)) +
geom_bar(aes(fill = substance), position = "fill") +
scale_fill_brewer(palette = "Spectral") +
coord_flip()
```
Figure 3\.13: A stacked bar plot showing the distribution of substance of abuse for participants in the HELP study. Compare this to Figure [2\.14](ch-vizI.html#fig:pie).
This method of graphical display enables a more direct comparison of proportions than would be possible using two pie charts. In this case, it is clear that homeless participants were more likely to identify as being involved with alcohol as their primary substance of abuse. However, like pie charts, bar charts are sometimes criticized for having a low [*data\-to\-ink ratio*](https://en.wikipedia.org/w/index.php?search=data-to-ink%20ratio). That is, they use a comparatively large amount of ink to depict relatively few data points.
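For reference, the proportions depicted in Figure [3\.13](ch-vizII.html#fig:stacked-bar) can also be tabulated directly; the following is a quick sketch using **dplyr** (output not shown):
```
mosaicData::HELPrct %>%
  count(homeless, substance) %>%
  group_by(homeless) %>%
  mutate(prop = n / sum(n))
```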
### 3\.2\.2 Multivariate displays
Multivariate displays are the most effective way to convey the relationship between more than one variable. The venerable [*scatterplot*](https://en.wikipedia.org/w/index.php?search=scatterplot) remains an excellent way to display observations of two quantitative (or numerical) variables. The scatterplot is provided in **ggplot2** by the `geom_point()` command. The main purpose of a scatterplot is to show the relationship between two variables across many cases. Most often, there is a Cartesian coordinate system in which the \\(x\\)\-axis represents one variable and the \\(y\\)\-axis the value of a second variable.
```
g <- ggplot(
data = SAT_2010,
aes(x = expenditure, y = math)
) +
geom_point()
```
We will also add a smooth trend line and some more specific axis labels. We use the `geom_smooth()` function in order to plot the simple linear regression line (`method = "lm"`) through the points (see Section [9\.6](ch-foundations.html#sec:confound) and Appendix [E](ch-regression.html#ch:regression)).
```
g <- g +
geom_smooth(method = "lm", se = FALSE) +
xlab("Average expenditure per student ($1000)") +
ylab("Average score on math SAT")
```
In Figures [3\.14](ch-vizII.html#fig:groups-color) and [3\.15](ch-vizII.html#fig:bar-facet), we plot the relationship between the average SAT math score and the expenditure per pupil (in thousands of United States dollars) among states in 2010\.
A third (categorical) variable can be added through *faceting* and/or *layering*.
In this case, we use the `mutate()` function (see Chapter [4](ch-dataI.html#ch:dataI)) to create a new variable called `SAT_rate` that places states into bins (e.g., high, medium, low) based on the percentage of students taking the SAT.
Additionally, in order to include that new variable in our plots, we use the `%+%` operator to update the data frame that is bound to our plot.
```
SAT_2010 <- SAT_2010 %>%
mutate(
SAT_rate = cut(
sat_pct,
breaks = c(0, 30, 60, 100),
labels = c("low", "medium", "high")
)
)
g <- g %+% SAT_2010
```
In Figure [3\.14](ch-vizII.html#fig:groups-color), we use the `color` aesthetic to separate the data by `SAT_rate` on a single plot (i.e., layering).
Compare this with Figure [3\.15](ch-vizII.html#fig:bar-facet), where we add a `facet_wrap()` mapped to `SAT_rate` to separate by facet.
```
g + aes(color = SAT_rate)
```
Figure 3\.14: Scatterplot using the `color` aesthetic to separate the relationship between two numeric variables by a third categorical variable.
```
g + facet_wrap(~ SAT_rate)
```
Figure 3\.15: Scatterplot using a `facet_wrap()` to separate the relationship between two numeric variables by a third categorical variable.
The `NHANES` data table provides medical, behavioral, and morphometric measurements of individuals. The scatterplot in Figure [3\.16](ch-vizII.html#fig:NHANES-height-age) shows the relationship between two of the variables, height and age. Each dot represents one person and the position of that dot signifies the value of the two variables for that person. Scatterplots are useful for visualizing a simple relationship between two variables. For instance, you can see in Figure [3\.16](ch-vizII.html#fig:NHANES-height-age) the familiar pattern of growth in height from birth to the late teens.
It’s helpful to do a bit more wrangling (more on this later) to ensure that the spatial relationship of the lines (adult men tend to be taller than adult women) matches the ordering of the legend labels. Here we use the `fct_relevel()` function (from the **forcats** package) to reset the factor levels.
```
library(NHANES)
ggplot(
data = slice_sample(NHANES, n = 1000),
aes(x = Age, y = Height, color = fct_relevel(Gender, "male"))
) +
geom_point() +
geom_smooth() +
xlab("Age (years)") +
ylab("Height (cm)") +
labs(color = "Gender")
```
Figure 3\.16: A scatterplot for 1,000 random individuals from the **NHANES** study. Note how mapping gender to color illuminates the differences in height between men and women.
Some scatterplots have special meanings. A [*time series*](https://en.wikipedia.org/w/index.php?search=time%20series)—such as the one shown in Figure [3\.17](ch-vizII.html#fig:time-series)—is just a scatterplot with time on the horizontal axis and points connected by lines to indicate temporal continuity.
In Figure [3\.17](ch-vizII.html#fig:time-series), the temperature at a weather station in western [*Massachusetts*](https://en.wikipedia.org/w/index.php?search=Massachusetts) is plotted over the course of the year.
The familiar fluctuations based on the seasons are evident.
Be especially aware of dubious causality in these plots: Is time really a good explanatory variable?
```
library(macleish)
ggplot(data = whately_2015, aes(x = when, y = temperature)) +
geom_line(color = "darkgray") +
geom_smooth() +
xlab(NULL) +
ylab("Temperature (degrees Celsius)")
```
Figure 3\.17: A time series showing the change in temperature at the MacLeish field station in 2015\.
For displaying a numerical response variable against a categorical explanatory variable, a common choice is a [*box\-and\-whisker*](https://en.wikipedia.org/w/index.php?search=box-and-whisker) (or box) plot, as shown in Figure [3\.18](ch-vizII.html#fig:macleishbox).
(More details about the data wrangling needed to create the categorical `month` variable will be provided in later chapters.)
It may be easiest to think about this as simply a graphical depiction of the [*five\-number summary*](https://en.wikipedia.org/w/index.php?search=five-number%20summary) (minimum \[0th percentile], Q1 \[25th percentile], median \[50th percentile], Q3 \[75th percentile], and maximum \[100th percentile]).
```
whately_2015 %>%
mutate(month = as.factor(lubridate::month(when, label = TRUE))) %>%
group_by(month) %>%
skim(temperature) %>%
select(-na)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var month n mean sd p0 p25 p50 p75 p100
<chr> <ord> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 temperature Jan 4464 -6.37 5.14 -22.3 -10.3 -6.25 -2.35 6.16
2 temperature Feb 4032 -9.26 5.11 -22.2 -12.3 -9.43 -5.50 4.27
3 temperature Mar 4464 -0.873 5.06 -16.2 -4.61 -0.550 2.99 13.5
4 temperature Apr 4320 8.04 5.51 -3.04 3.77 7.61 11.8 22.7
5 temperature May 4464 17.4 5.94 2.29 12.8 17.5 21.4 31.4
6 temperature Jun 4320 17.7 5.11 6.53 14.2 18.0 21.2 29.4
7 temperature Jul 4464 21.6 3.90 12.0 18.6 21.2 24.3 32.1
8 temperature Aug 4464 21.4 3.79 12.9 18.4 21.1 24.3 31.2
9 temperature Sep 4320 19.3 5.07 5.43 15.8 19 22.5 33.1
10 temperature Oct 4464 9.79 5.00 -3.97 6.58 9.49 13.3 22.3
11 temperature Nov 4320 7.28 5.65 -4.84 3.14 7.11 10.8 22.8
12 temperature Dec 4464 4.95 4.59 -6.16 1.61 5.15 8.38 18.4
```
```
ggplot(
data = whately_2015,
aes(
x = lubridate::month(when, label = TRUE),
y = temperature
)
) +
geom_boxplot() +
xlab("Month") +
ylab("Temperature (degrees Celsius)")
```
Figure 3\.18: A box\-and\-whisker of temperatures by month at the MacLeish field station.
When both the explanatory and response variables are categorical (or binned), points and lines don’t work as well.
How likely is a person to have [*diabetes*](https://en.wikipedia.org/w/index.php?search=diabetes), based on their age and [*BMI*](https://en.wikipedia.org/w/index.php?search=BMI) (body mass index)?
In the [*mosaic plot*](https://en.wikipedia.org/w/index.php?search=mosaic%20plot) (or eikosogram) shown in Figure [3\.19](ch-vizII.html#fig:NHANES-smoke) the number of observations in each cell is proportional to the area of the box.
Thus, you can see that diabetes tends to be more common for older people as well as for those who are obese, since the blue\-shaded regions are larger than expected under an independence model, while the pink\-shaded regions are smaller than expected.
These provide a more accurate depiction of the intuitive notions of probability familiar from [*Venn diagrams*](https://en.wikipedia.org/w/index.php?search=Venn%20diagrams) (Olford and Cherry 2003\).
Figure 3\.19: Mosaic plot (eikosogram) of diabetes by age and weight status (BMI).
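The code for this figure is not shown in the text. A rough sketch using the **ggmosaic** package might look like the following; the `NHANES` column names `AgeDecade`, `BMI_WHO`, and `Diabetes` are our assumptions about the variables plotted, and `product()` is the **ggmosaic** helper for crossing categorical variables.
```
library(ggmosaic)
library(NHANES)

# Assumed variables: Diabetes (Yes/No), BMI_WHO (binned BMI), AgeDecade
NHANES %>%
  filter(!is.na(Diabetes), !is.na(BMI_WHO), !is.na(AgeDecade)) %>%
  ggplot() +
  geom_mosaic(aes(x = product(BMI_WHO, AgeDecade), fill = Diabetes))
```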
In Table [3\.3](ch-vizII.html#tab:lattice) we summarize the use of **ggplot2** plotting commands and their relationship to canonical data graphics. Note that the `geom_mosaic()` function is not part of **ggplot2** but rather is available through the **ggmosaic** package.
Table 3\.3: Table of canonical data graphics and their corresponding **ggplot2** commands. Note that the mosaic plot function is not part of the **ggplot2** package.
| response (\\(y\\)) | explanatory (\\(x\\)) | plot type | geom\_\*() |
| --- | --- | --- | --- |
| | numeric | histogram, density | `geom_histogram()`, `geom_density()` |
| | categorical | stacked bar | `geom_bar()` |
| numeric | numeric | scatter | `geom_point()` |
| numeric | categorical | box | `geom_boxplot()` |
| categorical | categorical | mosaic | `geom_mosaic()` |
### 3\.2\.3 Maps
Using a map to display data geographically helps both to identify particular cases and to show spatial patterns and discrepancies. In Figure [3\.20](ch-vizII.html#fig:oil-map), the shading of each country represents its oil production.
This sort of map, where the fill color of each region reflects the value of a variable, is sometimes called a [*choropleth map*](https://en.wikipedia.org/w/index.php?search=choropleth%20map).
We will learn more about mapping and how to work with spatial data in Chapter [17](ch-spatial.html#ch:spatial).
Figure 3\.20: A choropleth map displaying oil production by countries around the world in barrels per day.
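The code for this map is likewise not shown. One rough way to sketch a similar choropleth uses the polygon data returned by `map_data()` (from **ggplot2**, backed by the **maps** package); note that country names in `CIACountries` and in the **maps** polygons do not always agree (e.g., "United States" vs. "USA"), so the join below is approximate and purely illustrative.
```
library(maps)

# World polygons, one row per vertex, grouped by country outline
world <- map_data("world")

# Approximate join: map regions matched to CIACountries country names
oil_map <- world %>%
  left_join(CIACountries, by = c("region" = "country"))

ggplot(oil_map, aes(x = long, y = lat, group = group, fill = oil_prod)) +
  geom_polygon(color = "white", size = 0.1) +
  scale_fill_viridis_c(na.value = "grey90") +
  theme_void()
```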
### 3\.2\.4 Networks
A [*network*](https://en.wikipedia.org/w/index.php?search=network) is a set of connections, called [*edges*](https://en.wikipedia.org/w/index.php?search=edges), between nodes, called [*vertices*](https://en.wikipedia.org/w/index.php?search=vertices). A vertex represents an entity. The edges indicate pairwise relationships between those entities.
The `NCI60` data set is about the genetics of cancer.
The data set contains more than 40,000 probes for the expression of genes, in each of 60 cancers.
In the network displayed in Figure [3\.21](ch-vizII.html#fig:cancer-network), a vertex is a given cell line, and each is depicted as a dot.
The dot’s color and label gives the type of cancer involved.
These are ovarian, colon, central nervous system, melanoma, renal, breast, and lung cancers.
The edges between vertices show pairs of cell lines that had a strong correlation in gene expression.
Figure 3\.21: A network diagram displaying the relationship between types of cancer cell lines.
The network shows that the melanoma cell lines (ME) are closely related to each other but not so much to other cell lines. The same is true for colon cancer cell lines (CO) and for central nervous system (CN) cell lines. Lung cancers, on the other hand, tend to have associations with multiple other types of cancers. We will explore the topic of [*network science*](https://en.wikipedia.org/w/index.php?search=network%20science) in greater depth in Chapter [20](ch-netsci.html#ch:netsci).
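Figure [3\.21](ch-vizII.html#fig:cancer-network) was not produced with **ggplot2**. As a minimal sketch of the underlying idea, a network can be represented as an edge list and drawn with the **igraph** package; the cell line labels and pairings below are invented for illustration and are not derived from the `NCI60` data.
```
library(igraph)

# Hypothetical edges: pairs of cell lines whose gene expression
# correlation exceeds some threshold (values invented here)
edges <- data.frame(
  from = c("ME.1", "ME.2", "ME.2", "CO.1", "CO.2"),
  to   = c("ME.2", "ME.3", "CO.1", "CO.2", "CO.3")
)
g_net <- graph_from_data_frame(edges, directed = FALSE)
plot(g_net, vertex.color = "lightblue", vertex.label.cex = 0.8)
```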
3\.3 Extended example: Historical baby names
--------------------------------------------
For many of us, there are few things that are more personal than our name. It is impossible to remember a time when you didn’t have your name, and you carry it with you wherever you go. You instinctively react when you hear it. And yet, you didn’t choose your name—your parents did (unless you’ve changed your name).
How do parents go about choosing names? Clearly, there seem to be both short\- and long\-term trends in baby names. The popularity of the name “Bella” spiked after the lead character in [*Twilight*](https://en.wikipedia.org/w/index.php?search=Twilight) became a cultural phenomenon. Other once\-popular names seem to have fallen out of favor—writers at [*FiveThirtyEight*](https://en.wikipedia.org/w/index.php?search=FiveThirtyEight) asked, “[where have all the Elmer’s gone](http://fivethirtyeight.com/features/how-to-tell-someones-age-when-all-you-know-is-her-name/)?”
Using data from the **babynames** package, which uses public data from the [*Social Security Administration*](https://en.wikipedia.org/w/index.php?search=Social%20Security%20Administration) (SSA), we can recreate many of the plots presented in the FiveThirtyEight article, and in the process learn how to use **ggplot2** to make production\-quality data graphics.
In [the link](https://fivethirtyeight.com/wp-content/uploads/2014/05/silver-feature-joseph2.png) in the footnote[3](#fn3), FiveThirtyEight presents an informative, annotated data graphic that shows the relative ages of American males named “Joseph.” Drawing on what you have learned in Chapter [2](ch-vizI.html#ch:vizI), take a minute to jot down the visual cues, coordinate system, scales, and context present in this plot. This analysis will facilitate our use of **ggplot2** to re\-construct it. (Look ahead to Figure [3\.22](ch-vizII.html#fig:joseph) to see our recreation.)
The key insight of the FiveThirtyEight work is the estimation of the number of people with each name who are currently alive. The `lifetables` table from the **babynames** package contains
[actuarial](https://www.ssa.gov/oact/NOTES/as120/LifeTables_Tbl_7_1980.html) estimates of the number of people per 100,000 who are alive at age \\(x\\), for every \\(0 \\leq x \\leq 114\\). The `make_babynames_dist()` function in the **mdsr** package adds some more convenient variables and filters for only the data that is relevant to people alive in 2014\.[4](#fn4)
```
library(babynames)
BabynamesDist <- make_babynames_dist()
BabynamesDist
```
```
# A tibble: 1,639,722 × 9
year sex name n prop alive_prob count_thousands age_today
<dbl> <chr> <chr> <int> <dbl> <dbl> <dbl> <dbl>
1 1900 F Mary 16706 0.0526 0 16.7 114
2 1900 F Helen 6343 0.0200 0 6.34 114
3 1900 F Anna 6114 0.0192 0 6.11 114
4 1900 F Margaret 5304 0.0167 0 5.30 114
5 1900 F Ruth 4765 0.0150 0 4.76 114
6 1900 F Elizabeth 4096 0.0129 0 4.10 114
7 1900 F Florence 3920 0.0123 0 3.92 114
8 1900 F Ethel 3896 0.0123 0 3.90 114
9 1900 F Marie 3856 0.0121 0 3.86 114
10 1900 F Lillian 3414 0.0107 0 3.41 114
# … with 1,639,712 more rows, and 1 more variable: est_alive_today <dbl>
```
To find information about a specific name, we use the `filter()` function.
```
BabynamesDist %>%
filter(name == "Benjamin")
```
### 3\.3\.1 Percentage of people alive today
How did you break down Figure [3\.22](ch-vizII.html#fig:joseph)? There are two main data elements in that plot: a thick black line indicating the number of Josephs born each year, and the thin light blue bars indicating the number of Josephs born in each year that are expected to still be alive today. In both cases, the vertical axis corresponds to the number of people (in thousands), and the horizontal axis corresponds to the year of birth.
We can compose a similar plot in **ggplot2**. First we take the relevant subset of the data and set up the initial **ggplot2** object. The data frame `joseph` is bound to the plot, since this contains all of the data that we need for this plot, but we will be using it with multiple geoms. Moreover, the `year` variable is mapped to the \\(x\\)\-axis as an aesthetic. This will ensure that everything will line up properly.
```
joseph <- BabynamesDist %>%
filter(name == "Joseph" & sex == "M")
name_plot <- ggplot(data = joseph, aes(x = year))
```
Next, we will add the bars.
```
name_plot <- name_plot +
geom_col(
aes(y = count_thousands * alive_prob),
fill = "#b2d7e9",
color = "white",
size = 0.1
)
```
The `geom_col()` function adds bars, which are filled with a light blue color and a white border. The height of the bars is an aesthetic that is mapped to the estimated number of people alive today who were born in each year.
The black line is easily added using the `geom_line()` function.
```
name_plot <- name_plot +
geom_line(aes(y = count_thousands), size = 2)
```
Adding an informative label for the vertical axis and removing an uninformative label for the horizontal axis will improve the readability of our plot.
```
name_plot <- name_plot +
ylab("Number of People (thousands)") +
xlab(NULL)
```
Inspecting the `summary()` of our plot at this point can help us keep things straight—take note of the mappings. Do they accord with what you jotted down previously?
```
summary(name_plot)
```
```
data: year, sex, name, n, prop, alive_prob, count_thousands,
age_today, est_alive_today [111x9]
mapping: x = ~year
faceting: <ggproto object: Class FacetNull, Facet, gg>
compute_layout: function
draw_back: function
draw_front: function
draw_labels: function
draw_panels: function
finish_data: function
init_scales: function
map_data: function
params: list
setup_data: function
setup_params: function
shrink: TRUE
train_scales: function
vars: function
super: <ggproto object: Class FacetNull, Facet, gg>
-----------------------------------
mapping: y = ~count_thousands * alive_prob
geom_col: width = NULL, na.rm = FALSE
stat_identity: na.rm = FALSE
position_stack
mapping: y = ~count_thousands
geom_line: na.rm = FALSE, orientation = NA
stat_identity: na.rm = FALSE
position_identity
```
The final data\-driven element of the FiveThirtyEight graphic is a darker blue bar indicating the median year of birth. We can compute this with the `wtd.quantile()` function in the **Hmisc** package. Setting the `probs` argument to 0\.5 will give us the median `year` of birth, weighted by the number of people estimated to be alive today (`est_alive_today`). The `pull()` function simply extracts the `year` variable from the data frame returned by `summarize()`.
```
wtd_quantile <- Hmisc::wtd.quantile
median_yob <- joseph %>%
summarize(
year = wtd_quantile(year, est_alive_today, probs = 0.5)
) %>%
pull(year)
median_yob
```
```
50%
1975
```
We can then overplot a single bar in a darker shade of blue. Here, we are using the `ifelse()` function cleverly. If the `year` is equal to the median year of birth, then the height of the bar is the estimated number of Josephs alive today. Otherwise, the height of the bar is zero (so you can’t see it at all). In this manner, we plot only the one darker blue bar that we want to highlight.
```
name_plot <- name_plot +
geom_col(
color = "white", fill = "#008fd5",
aes(y = ifelse(year == median_yob, est_alive_today / 1000, 0))
)
```
Lastly, the FiveThirtyEight graphic contains many contextual elements specific to the name Joseph. We can add a title, annotated text, and an arrow providing focus to a specific element of the plot. Figure [3\.22](ch-vizII.html#fig:joseph) displays our reproduction.
There are a few differences in the presentation of fonts, title, etc. These can be altered using **ggplot2**’s theming framework, but we won’t explore these subtleties here (see Section [14\.5](ch-vizIII.html#sec:themes)).[5](#fn5)
Here we create a `tribble()` (a row\-wise simple data frame) to add annotations.
```
context <- tribble(
~year, ~num_people, ~label,
1935, 40, "Number of Josephs\nborn each year",
1915, 13, "Number of Josephs\nborn each year
\nestimated to be alive\non 1/1/2014",
2003, 40, "The median\nliving Joseph\nis 37 years old",
)
name_plot +
ggtitle("Age Distribution of American Boys Named Joseph") +
geom_text(
data = context,
aes(y = num_people, label = label, color = label)
) +
geom_curve(
x = 1990, xend = 1974, y = 40, yend = 24,
arrow = arrow(length = unit(0.3, "cm")), curvature = 0.5
) +
scale_color_manual(
guide = FALSE,
values = c("black", "#b2d7e9", "darkgray")
) +
ylim(0, 42)
```
```
Warning: It is deprecated to specify `guide = FALSE` to remove a guide.
Please use `guide = "none"` instead.
```
Figure 3\.22: Recreation of the age distribution of “Joseph” plot.
Notice that we did not update the `name_plot` object with this contextual information. This was intentional, since we can update the `data` argument of `name_plot` and obtain an analogous plot for another name. This functionality makes use of the special `%+%` operator. As shown in Figure [3\.23](ch-vizII.html#fig:josephine), the name “Josephine” enjoyed a spike in popularity around 1920 that later subsided.
```
name_plot %+% filter(
BabynamesDist,
name == "Josephine" & sex == "F"
)
```
Figure 3\.23: Age distribution of American girls named “Josephine.”
While some names are almost always associated with a particular gender, many are not. More interestingly, the proportion of people assigned male or female with a given name often varies over time. These data were presented nicely by [Nathan Yau](https://en.wikipedia.org/w/index.php?search=Nathan%20Yau) at [FlowingData](https://flowingdata.com/2013/09/25/the-most-unisex-names-in-us-history/).
We can compare how our `name_plot` differs by gender for a given name using a *facet*. To do this, we will simply add a call to the `facet_wrap()` function, which will create small multiples based on a single categorical variable, and then feed a new data frame to the plot that contains data for both sexes assigned at birth. In Figure [3\.24](ch-vizII.html#fig:jessie), we show the prevalence of “Jessie” changed for the two sexes.
```
names_plot <- name_plot +
facet_wrap(~sex)
names_plot %+% filter(BabynamesDist, name == "Jessie")
```
Figure 3\.24: Comparison of the name “Jessie” across two genders.
The plot at FlowingData shows the 35 most common “unisex” names—that is, the names that have historically had the greatest balance between those assigned male and female at birth. We can use a `facet_grid()` to compare the gender breakdown for a few of the most common of these, as shown in Figures [3\.25](ch-vizII.html#fig:many-names) and [3\.26](ch-vizII.html#fig:many-names2).
```
many_names_plot <- name_plot +
facet_grid(name ~ sex)
mnp <- many_names_plot %+% filter(
BabynamesDist,
name %in% c("Jessie", "Marion", "Jackie")
)
mnp
```
Figure 3\.25: Gender breakdown for the three most unisex names.
Reversing the order of the variables in the call to `facet_grid()` flips the orientation of the facets.
```
mnp + facet_grid(sex ~ name)
```
Figure 3\.26: Gender breakdown for the three most unisex names, oriented vertically.
### 3\.3\.2 Most common names for women
A second interesting data graphic from the same FiveThirtyEight article is recreated in Figure [3\.27](ch-vizII.html#fig:women).
Take a moment to jump ahead and analyze this data graphic.
What are visual cues?
What are the variables?
How are the variables being mapped to the visual cues?
What geoms are present?
To recreate this data graphic, we need to collect the right data.
We begin by figuring out what the 25 most common female names are among those estimated to be alive today.
We can do this by counting the estimated number of people alive today for each name, filtering for women, sorting by the number estimated to be alive, and then taking the top 25 results.
We also need to know the median age, as well as the first and third quartiles for age among people having each name.
```
com_fem <- BabynamesDist %>%
filter(n > 100, sex == "F") %>%
group_by(name) %>%
mutate(wgt = est_alive_today / sum(est_alive_today)) %>%
summarize(
N = n(),
est_num_alive = sum(est_alive_today),
quantiles = list(
wtd_quantile(
age_today, est_alive_today, probs = 1:3/4, na.rm = TRUE
)
)
) %>%
mutate(measures = list(c("q1_age", "median_age", "q3_age"))) %>%
unnest(cols = c(quantiles, measures)) %>%
pivot_wider(names_from = measures, values_from = quantiles) %>%
arrange(desc(est_num_alive)) %>%
head(25)
```
This data graphic is a bit trickier than the previous one. We’ll start by binding the data, and defining the \\(x\\) and \\(y\\) aesthetics.
We put the names on the \\(x\\)\-axis and the `median_age` on the \\(y\\)—the reasons for doing so will be made clearer later.
We will also define the title of the plot, and remove the \\(x\\)\-axis label, since it is self\-evident.
```
w_plot <- ggplot(
data = com_fem,
aes(x = reorder(name, -median_age), y = median_age)
) +
xlab(NULL) +
ylab("Age (in years)") +
ggtitle("Median ages for females with the 25 most common names")
```
The next elements to add are the gold rectangles. To do this, we use the `geom_linerange()` function.
It may help to think of these not as rectangles, but as really thick lines.
Because we have already mapped the names to the \\(x\\)\-axis, we only need to specify the mappings for `ymin` and `ymax`.
These are mapped to the first and third quartiles, respectively.
We will also make these lines very thick and color them appropriately.
The `geom_linerange()` function only understands `ymin` and `ymax`—there is not a corresponding function with `xmin` and `xmax`.
However, we will fix this later by transposing the figure.
We have also added a slight `alpha` transparency to allow the gridlines to be visible underneath the gold rectangles.
```
w_plot <- w_plot +
geom_linerange(
aes(ymin = q1_age, ymax = q3_age),
color = "#f3d478",
size = 4.5,
alpha = 0.8
)
```
There is a red dot indicating the median age for each of these names. If you look carefully, you can see a white border around each red dot. The default glyph for `geom_point()` is a solid dot, which is `shape` 19\. By changing it to `shape` 21, we can use both the `fill` and `color` arguments.
```
w_plot <- w_plot +
geom_point(
fill = "#ed3324",
color = "white",
size = 2,
shape = 21
)
```
It remains only to add the context and flip our plot around so the orientation matches the original figure.
The `coord_flip()` function does exactly that.
```
context <- tribble(
~median_age, ~x, ~label,
65, 24, "median",
29, 16, "25th",
48, 16, "75th percentile",
)
age_breaks <- 1:7 * 10 + 5
w_plot +
geom_point(
aes(y = 60, x = 24),
fill = "#ed3324",
color = "white",
size = 2,
shape = 21
) +
geom_text(data = context, aes(x = x, label = label)) +
geom_point(aes(y = 24, x = 16), shape = 17) +
geom_point(aes(y = 56, x = 16), shape = 17) +
geom_hline(
data = tibble(x = age_breaks),
aes(yintercept = x),
linetype = 3
) +
scale_y_continuous(breaks = age_breaks) +
coord_flip()
```
Figure 3\.27: Recreation of FiveThirtyEight’s plot of the age distributions for the 25 most common women’s names.
You will note that the name “Anna” was fifth most common in the original FiveThirtyEight article but did not appear in Figure [3\.27](ch-vizII.html#fig:women). This appears to be a result of that name’s extraordinarily large range and the pro\-rating that FiveThirtyEight did to their data. The “older” names—including Anna—were more affected by this alteration. Anna was the 47th most popular name by our calculations.
3\.4 Further resources
----------------------
The grammar of graphics was created by Wilkinson et al. (2005\), and implemented in **ggplot2** by Hadley Wickham (2016\), now in a second edition. The **ggplot2** [cheat sheet](https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf) produced by **RStudio** is an excellent reference for understanding the various features of **ggplot2**.
3\.5 Exercises
--------------
**Problem 1 (Easy)**: [Angelica Schuyler Church](https://en.wikipedia.org/wiki/Angelica_Schuyler_Church) (1756–1814\) was the daughter of New York Governor Philip Schuyler and sister of
Elizabeth Schuyler Hamilton. Angelica, New York was named after her. Using the `babynames` package generate a plot of the reported proportion of babies born with the name Angelica over time and interpret the figure.
**Problem 2 (Easy)**: Using data from the `nasaweather` package, create a scatterplot between `wind` and `pressure`, with color being used to distinguish the `type` of storm.
**Problem 3 (Medium)**: The following questions use the `Marriage` data set from the `mosaicData` package.
```
library(mosaicData)
```
1. Create an informative and meaningful data graphic.
2. Identify each of the visual cues that you are using, and describe how they are related to each variable.
3. Create a data graphic with at least *five* variables (either quantitative or categorical). For the purposes of this exercise, do not worry about making your visualization meaningful—just try to encode five variables into one plot.
**Problem 4 (Medium)**: The `macleish` package contains weather data collected every 10 minutes in 2015 from two weather stations in Whately, MA.
```
library(tidyverse)
library(macleish)
glimpse(whately_2015)
```
```
Rows: 52,560
Columns: 8
$ when <dttm> 2015-01-01 00:00:00, 2015-01-01 00:10:00, 2015-01…
$ temperature <dbl> -9.32, -9.46, -9.44, -9.30, -9.32, -9.34, -9.30, -…
$ wind_speed <dbl> 1.40, 1.51, 1.62, 1.14, 1.22, 1.09, 1.17, 1.31, 1.…
$ wind_dir <dbl> 225, 248, 258, 244, 238, 242, 242, 244, 226, 220, …
$ rel_humidity <dbl> 54.5, 55.4, 56.2, 56.4, 56.9, 57.2, 57.7, 58.2, 59…
$ pressure <int> 985, 985, 985, 985, 984, 984, 984, 984, 984, 984, …
$ solar_radiation <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
$ rainfall <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
```
Using `ggplot2`, create a data graphic that displays the average temperature over each 10\-minute interval (`temperature`) as a function of time (`when`).
**Problem 5 (Medium)**: Use the `MLB_teams` data in the `mdsr` package to create an informative data graphic that illustrates the relationship between winning percentage and payroll in context.
**Problem 6 (Medium)**: The `MLB_teams` data set in the `mdsr` package contains information about Major League Baseball teams from 2008–2014\. There are several quantitative and a few categorical variables present. See how many variables you can illustrate on a single plot in R. The current record is 7\. (Note: This is *not* good graphical practice—it is merely an exercise to help you understand how to use visual cues and aesthetics!)
**Problem 7 (Medium)**: The `RailTrail` data set from the `mosaicData` package describes the usage of a rail trail in Western Massachusetts.
Use these data to answer the following questions.
1. Create a scatterplot of the number of crossings per day `volume` against the high temperature that day
2. Separate your plot into facets by `weekday` (an indicator of weekend/holiday vs. weekday)
3. Add regression lines to the two facets
**Problem 8 (Medium)**: Using data from the `nasaweather` package, use the `geom_path` function to plot the path of each tropical storm in the `storms` data table. Use color to distinguish the storms from one another, and use faceting to plot each `year` in its own panel.
**Problem 9 (Medium)**: Using the `penguins` data set from the `palmerpenguins` package:
1. Create a scatterplot of `bill_length_mm` against `bill_depth_mm` where individual species are colored and a regression line is added to each species.
What do you observe about the association of bill depth and bill length?
2. Repeat the same scatterplot but now separate your plot into facets by `species`, adding a regression line to each facet.
How would you summarize the association between bill depth and bill length?
**Problem 10 (Hard)**: Use the `make_babynames_dist()` function in the `mdsr` package to recreate the “Deadest Names” graphic from FiveThirtyEight ([https://fivethirtyeight.com/features/how\-to\-tell\-someones\-age\-when\-all\-you\-know\-is\-her\-name](https://fivethirtyeight.com/features/how-to-tell-someones-age-when-all-you-know-is-her-name)).
```
library(tidyverse)
library(mdsr)
babynames_dist <- make_babynames_dist()
```
3\.6 Supplementary exercises
----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-vizII.html\#datavizII\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-vizII.html#datavizII-online-exercises)
**Problem 1 (Easy)**:
Consider the live [Wikipedia Recent Changes Map](http://www.hatnote.com/#en).
1. Identify the visual cues, coordinate system, and scale(s).
2. How many variables are depicted in the graphic? Explicitly link each variable to a visual cue that you listed above.
3. Critique this data graphic using the taxonomy described in this chapter.
**Problem 2 (Easy)**: Consider the following data graphic about [Denzel Washington](https://en.wikipedia.org/wiki/Denzel_Washington), a two\-time Academy Award\-winning actor. It may be helpful to read the original article, entitled “[The Four Types of Denzel Washington Movies](https://fivethirtyeight.com/features/the-four-types-of-denzel-washington-movies/).”
What variable is mapped to the color aesthetic?
**Problem 3 (Easy)**: Consider the following data graphic about world\-class swimmers. Emphasis is on [Katie Ledecky](https://en.wikipedia.org/wiki/Katie_Ledecky), a five\-time Olympic gold medalist. It may be helpful to peruse the original article, entitled “[Katie Ledecky Is The Present And The Future Of Swimming](https://fivethirtyeight.com/features/katie-ledecky-is-the-present-and-the-future-of-swimming/).”
Suppose that the graphic was generated from a data frame like the one shown below (it wasn’t—these are fake data).
```
# A tibble: 3 × 4
name gender distance time_in_sd
<chr> <chr> <dbl> <dbl>
1 Ledecky F 100 -0.8
2 Ledecky F 200 1.7
3 Ledecky F 400 2.9
```
Note: Recall that [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) is a measure of the spread of a set of numbers. In this case, a time that is \+1 standard deviation *above* the mean is *faster* than the average time (among the top 50 times).
1. What variable is mapped to the position aesthetic in the horizontal direction?
2. What variable is mapped to the color aesthetic in the vertical direction?
3. What variable is mapped to the position aesthetic in the vertical direction?
**Problem 4 (Easy)**: Consider the following data graphic, taken from the article “[Who does not Pay Income Tax?](http://thefuturebuzz.com/2012/09/19/simplicity-with-data-visualization-is-still-best)”
1. Identify the visual cues, coordinate system, and scale(s).
2. How many variables are depicted in the graphic? Explicitly link each variable to a visual cue that you listed above.
3. Critique this data graphic using the taxonomy described in this chapter.
**Problem 5 (Hard)**: Using the `babynames` package, and the name ‘Jessie,’ make a plot that resembles this graphic: [the most unisex names in US history](https://flowingdata.com/2013/09/25/the-most-unisex-names-in-us-history/).
---
In the simple scatterplot shown in Figure [3\.1](ch-vizII.html#fig:simple-glyph), we employ the grammar of graphics to build a multivariate data graphic. In **ggplot2**, the `ggplot()` command creates a plot object `g`, and any arguments to that function are applied across any subsequent plotting directives. In this case, this means that any variables mentioned anywhere in the plot are understood to be within the `CIACountries` data frame, since we have specified that in the `data` argument. Graphics in **ggplot2** are built incrementally by elements. In this case, the only glyphs added are points, which are plotted using the `geom_point()` function. The arguments to `geom_point()` specify *where* and *how* the points are drawn. Here, the two *aesthetics* (`aes()`) map the vertical (`y`) coordinate to the `gdp` variable, and the horizontal (`x`) coordinate to the `educ` variable. The `size` argument to `geom_point()` changes the size of all of the glyphs. Note that here, every dot is the same size. Thus, size is *not* an aesthetic, since it does not map a variable to a visual cue.
Since each case (i.e., row in the data frame) is a country, each dot represents one country.
```
g <- ggplot(data = CIACountries, aes(y = gdp, x = educ))
g + geom_point(size = 3)
```
Figure 3\.1: Scatterplot using only the position aesthetic for glyphs.
In Figure [3\.1](ch-vizII.html#fig:simple-glyph) the glyphs are simple.
Only position in the frame distinguishes one glyph from another.
The shape, size, etc. of all of the glyphs are identical—there is nothing about the glyph itself that identifies the country.
However, it is possible to use a glyph with several attributes. We can define additional aesthetics to create new visual cues.
In Figure [3\.2](ch-vizII.html#fig:net-use-color), we have extended the previous example by mapping the color of each dot to the categorical `net_users` variable.
```
g + geom_point(aes(color = net_users), size = 3)
```
Figure 3\.2: Scatterplot in which `net_users` is mapped to color.
Changing the glyph is as simple as changing the function that draws that glyph—the aesthetic can often be kept exactly the same. In Figure [3\.3](ch-vizII.html#fig:country-labels), we plot text instead of a dot.
```
g + geom_text(aes(label = country, color = net_users), size = 3)
```
Figure 3\.3: Scatterplot using both location and label as aesthetics.
Of course, we can employ multiple aesthetics. There are four aesthetics in Figure [3\.4](ch-vizII.html#fig:four-variables).
Each of the four aesthetics is set in correspondence with a variable—we say the variable is [*mapped*](https://en.wikipedia.org/w/index.php?search=mapped) to the aesthetic.
Educational attainment is being mapped to horizontal position, GDP to vertical position, Internet connectivity to color, and length of roadways to size.
Thus, we encode four variables (`gdp`, `educ`, `net_users`, and `roadways`) using the visual cues of position, position, color, and area, respectively.
```
g + geom_point(aes(color = net_users, size = roadways))
```
Figure 3\.4: Scatterplot in which `net_users` is mapped to color and `roadways` is mapped to size. Compare this graphic to Figure [3\.7](ch-vizII.html#fig:facet-internet), which displays the same data using facets.
A data table provides the basis for drawing a data graphic.
The relationship between a data table and a graphic is simple: Each case (row) in the data table becomes a mark in the graph (we will return to the notion of [*glyph\-ready data*](https://en.wikipedia.org/w/index.php?search=glyph-ready%20data) in Chapter [6](ch-dataII.html#ch:dataII)).
As the designer of the graphic, you choose which variables the graphic will display and how each variable is to be represented graphically: position, size, color, and so on.
### 3\.1\.2 Scales
Compare Figure [3\.4](ch-vizII.html#fig:four-variables) to Figure [3\.5](ch-vizII.html#fig:log-scale).
In the former, it is hard to discern differences in GDP due to its right\-skewed distribution and the choice of a *linear* scale. In the latter, the *logarithmic* scale on the vertical axis makes the scatterplot more readable.
Of course, this makes interpreting the plot more complex, so we must be very careful when doing so.
Note that the only difference in the code is the addition of the `coord_trans()` directive.
```
g +
geom_point(aes(color = net_users, size = roadways)) +
coord_trans(y = "log10")
```
Figure 3\.5: Scatterplot using a logarithmic transformation of GDP that helps to mitigate visual clustering caused by the right\-skewed distribution of GDP among countries.
Scales can also be manipulated in **ggplot2** using any of the scale functions.
For example, instead of using the `coord_trans()` function as we did above, we could have achieved a similar plot through the use of the `scale_y_continuous()` function, as illustrated in Figure [3\.6](ch-vizII.html#fig:log-scale2).
In either case, the points will be drawn in the same location—the difference in the two plots is how and where the major tick marks and axis labels are drawn. We prefer to use `coord_trans()` in Figure [3\.5](ch-vizII.html#fig:log-scale) because it draws attention to the use of the log scale (compare with Figure [3\.6](ch-vizII.html#fig:log-scale2)).
Similarly\-named functions (e.g., `scale_x_continuous()`, `scale_x_discrete()`, `scale_color_discrete()`, etc.) perform analogous operations on different aesthetics.
```
g +
geom_point(aes(color = net_users, size = roadways)) +
scale_y_continuous(
name = "Gross Domestic Product",
trans = "log10",
labels = scales::comma
)
```
Figure 3\.6: Scatterplot using a logarithmic transformation of GDP. The use of a log scale on the \\(y\\)\-axis is less obvious than it is in Figure [3\.5](ch-vizII.html#fig:log-scale) due to the uniformly\-spaced horizontal grid lines.
Not all scales are about position.
For instance, in Figure [3\.4](ch-vizII.html#fig:four-variables), `net_users` is translated to color.
Similarly, `roadways` is translated to size: the largest dot corresponds to a value of 30 roadways per unit area.
### 3\.1\.3 Guides
Context is provided by *guides* (more commonly called legends).
A guide helps a human reader understand the meaning of the visual cues by providing context.
For position visual cues, the most common sort of guide is the familiar axis with its tick marks and labels. But other guides exist. In Figures [3\.4](ch-vizII.html#fig:four-variables) and [3\.5](ch-vizII.html#fig:log-scale), legends relate how dot color corresponds to Internet connectivity, and how dot size corresponds to length of roadways (note the use of a log scale). The `geom_text()` and `geom_label()` functions can also be used to provide specific textual annotations on the plot. Examples of how to use these functions for annotations are provided in Section [3\.3](ch-vizII.html#sec:babynames).
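As a quick, hedged illustration (the coordinates and label text below are invented for display purposes and do not come from `CIACountries`), a single label can be layered onto the plot object `g` defined earlier:

```
# A hand-made one-row data frame supplies the annotation's position and text.
label_df <- tibble(educ = 8, gdp = 120000, country = "an annotation")
g +
  geom_point(aes(color = net_users), size = 3) +
  geom_label(data = label_df, aes(label = country))
```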
### 3\.1\.4 Facets
Using multiple aesthetics such as shape, color, and size to display multiple variables can produce a confusing, hard\-to\-read graph.
[*Facets*](https://en.wikipedia.org/w/index.php?search=Facets)—multiple side\-by\-side graphs used to display levels of a categorical variable—provide a simple and effective alternative.
Figure [3\.7](ch-vizII.html#fig:facet-internet) uses facets to show different levels of Internet connectivity, providing a better view than Figure [3\.4](ch-vizII.html#fig:four-variables).
There are two functions that create facets: `facet_wrap()` and `facet_grid()`.
The former creates a facet for each level of a single categorical variable, whereas the latter creates a facet for each combination of two categorical variables, arranging them in a grid.
```
g +
geom_point(alpha = 0.9, aes(size = roadways)) +
coord_trans(y = "log10") +
facet_wrap(~net_users, nrow = 1) +
theme(legend.position = "top")
```
Figure 3\.7: Scatterplot using facets for different ranges of Internet connectivity.
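For comparison, here is a minimal `facet_grid()` sketch. Because `CIACountries` contains only one categorical variable, the `gdp_group` variable below is created on the fly purely for illustration:

```
CIACountries %>%
  mutate(gdp_group = ifelse(gdp > 10000, "GDP above $10k", "GDP at or below $10k")) %>%
  ggplot(aes(x = educ, y = gdp)) +
  geom_point(aes(size = roadways)) +
  coord_trans(y = "log10") +
  facet_grid(gdp_group ~ net_users)
```

Rows correspond to the derived GDP grouping and columns to the levels of `net_users`.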
### 3\.1\.5 Layers
On occasion, data from more than one data table are graphed together.
For example, the `MedicareCharges` and `MedicareProviders` data tables provide information about the average cost of each medical procedure in each state.
If you live in [*New Jersey*](https://en.wikipedia.org/w/index.php?search=New%20Jersey), you might wonder how providers in your state charge for different medical procedures.
However, you will certainly want to understand those averages in the context of the averages across all states. In the `MedicareCharges` table, each row represents a different medical procedure (`drg`) with its associated average cost in each state.
We also create a second data table called `ChargesNJ`, which contains only those rows corresponding to providers in the state of New Jersey. Do not worry if these commands aren’t familiar—we will learn these in Chapter [4](ch-dataI.html#ch:dataI).
```
ChargesNJ <- MedicareCharges %>%
filter(stateProvider == "NJ")
```
The first few rows from the data table for New Jersey are shown in Table [3\.2](ch-vizII.html#tab:drg-NJ). This glyph\-ready table (see Chapter [6](ch-dataII.html#ch:dataII)) can be translated to a chart (Figure [3\.8](ch-vizII.html#fig:compare-NJ)) using bars to represent the average charges for different medical procedures in New Jersey. The `geom_col()` function creates a separate bar for each of the 100 different medical procedures.
Table 3\.2: Glyph\-ready data for the barplot layer.
| drg | stateProvider | num\_charges | mean\_charge |
| --- | --- | --- | --- |
| 039 | NJ | 31 | 35104 |
| 057 | NJ | 55 | 45692 |
| 064 | NJ | 55 | 87042 |
| 065 | NJ | 59 | 59576 |
| 066 | NJ | 56 | 45819 |
| 069 | NJ | 61 | 41917 |
| 074 | NJ | 41 | 42993 |
| 101 | NJ | 58 | 42314 |
| 149 | NJ | 50 | 34916 |
| 176 | NJ | 36 | 58941 |
```
p <- ggplot(
data = ChargesNJ,
aes(x = reorder(drg, mean_charge), y = mean_charge)
) +
geom_col(fill = "gray") +
ylab("Statewide Average Charges ($)") +
xlab("Medical Procedure (DRG)") +
theme(axis.text.x = element_text(angle = 90, hjust = 1, size = rel(0.5)))
p
```
Figure 3\.8: Bar graph of average charges for medical procedures in New Jersey.
How do the charges in New Jersey compare to those in other states? The two data tables, one for New Jersey and one for the whole country, can be plotted with different glyph types: bars for New Jersey and dots for the states across the whole country as in Figure [3\.9](ch-vizII.html#fig:compare-NJ-2).
```
p + geom_point(data = MedicareCharges, size = 1, alpha = 0.3)
```
Figure 3\.9: Bar graph adding a second layer to provide a comparison of New Jersey to other states. Each dot represents one state, while the bars represent New Jersey.
With the context provided by the individual states, it is easy to see that the charges in New Jersey are among the highest in the country for each medical procedure.
3\.2 Canonical data graphics in **R**
-------------------------------------
Over time, statisticians have developed standard data graphics for specific use cases (Tukey 1990\).
While these data graphics are not always mesmerizing, they are hard to beat for simple effectiveness.
Every data scientist should know how to make and interpret these canonical data graphics—they are ignored at your peril.
### 3\.2\.1 Univariate displays
It is generally useful to understand how a single variable is distributed. If that variable is numeric, then its distribution is commonly summarized graphically using a [*histogram*](https://en.wikipedia.org/w/index.php?search=histogram) or [*density plot*](https://en.wikipedia.org/w/index.php?search=density%20plot). Using the **ggplot2** package, we can display either plot for the `math` variable in the `SAT_2010` data frame by binding the `math` variable to the `x` aesthetic.
```
g <- ggplot(data = SAT_2010, aes(x = math))
```
Then we only need to choose either `geom_histogram()` or `geom_density()`. Both Figures [3\.10](ch-vizII.html#fig:SAT-1) and [3\.11](ch-vizII.html#fig:SAT-2) convey the same information, but whereas the histogram uses pre\-defined bins to create a discrete distribution, a density plot uses a [*kernel smoother*](https://en.wikipedia.org/w/index.php?search=kernel%20smoother) to make a continuous curve.
```
g + geom_histogram(binwidth = 10) + labs(x = "Average math SAT score")
```
Figure 3\.10: Histogram showing the distribution of math SAT scores by state.
Note that the `binwidth` argument is being used to specify the width of bins in the histogram.
Here, each bin contains a 10\-point range of SAT scores.
In general, the appearance of a histogram can vary considerably based on the choice of bins, and there is no one “best” choice (Lunzer and McNamara 2017\).
You will have to decide what bin width is most appropriate for your data.
```
g + geom_density(adjust = 0.3)
```
Figure 3\.11: Density plot showing the distribution of average math SAT scores by state.
Similarly, in the density plot shown in Figure [3\.11](ch-vizII.html#fig:SAT-2) we use the `adjust` argument to modify the [*bandwidth*](https://en.wikipedia.org/w/index.php?search=bandwidth) being used by the kernel smoother. In the taxonomy defined above, a density plot uses position and direction in a Cartesian plane with a horizontal scale defined by the units in the data.
If your variable is categorical, it doesn’t make sense to think about the values as having a continuous density. Instead, we can use a [*bar graph*](https://en.wikipedia.org/w/index.php?search=bar%20graph) to display the distribution of a categorical variable.
To make a simple bar graph for `math`, identifying each bar by the label `state`, we use the `geom_col()` command, as displayed in Figure [3\.12](ch-vizII.html#fig:bar2). Note that we add a few wrinkles to this plot. First, we use the `head()` function to display only the first 10 states (in alphabetical order). Second, we use the `reorder()` function to sort the state names in order of their average `math` SAT score.
```
ggplot(
data = head(SAT_2010, 10),
aes(x = reorder(state, math), y = math)
) +
geom_col() +
labs(x = "State", y = "Average math SAT score")
```
Figure 3\.12: A bar plot showing the distribution of average math SAT scores for a selection of states.
As noted earlier, we recommend against the use of pie charts to display the distribution of a categorical variable since, in most cases, a table of frequencies is more informative. An informative graphical display can be achieved using a [*stacked bar plot*](https://en.wikipedia.org/w/index.php?search=stacked%20bar%20plot), such as the one shown in Figure [3\.13](ch-vizII.html#fig:stacked-bar) using the `geom_bar()` function. Note that we have used the `coord_flip()` function to display the bars horizontally instead of vertically.
```
ggplot(data = mosaicData::HELPrct, aes(x = homeless)) +
geom_bar(aes(fill = substance), position = "fill") +
scale_fill_brewer(palette = "Spectral") +
coord_flip()
```
Figure 3\.13: A stacked bar plot showing the distribution of substance of abuse for participants in the HELP study. Compare this to Figure [2\.14](ch-vizI.html#fig:pie).
This method of graphical display enables a more direct comparison of proportions than would be possible using two pie charts. In this case, it is clear that homeless participants were more likely to identify as being involved with alcohol as their primary substance of abuse. However, like pie charts, bar charts are sometimes criticized for having a low [*data\-to\-ink ratio*](https://en.wikipedia.org/w/index.php?search=data-to-ink%20ratio). That is, they use a comparatively large amount of ink to depict relatively few data points.
### 3\.2\.2 Multivariate displays
Multivariate displays are the most effective way to convey relationships among multiple variables. The venerable [*scatterplot*](https://en.wikipedia.org/w/index.php?search=scatterplot) remains an excellent way to display observations of two quantitative (or numerical) variables. The scatterplot is provided in **ggplot2** by the `geom_point()` command. The main purpose of a scatterplot is to show the relationship between two variables across many cases. Most often, there is a Cartesian coordinate system in which the \\(x\\)\-axis represents one variable and the \\(y\\)\-axis the value of a second variable.
```
g <- ggplot(
data = SAT_2010,
aes(x = expenditure, y = math)
) +
geom_point()
```
We will also add a smooth trend line and some more specific axis labels. We use the `geom_smooth()` function in order to plot the simple linear regression line (`method = "lm"`) through the points (see Section [9\.6](ch-foundations.html#sec:confound) and Appendix [E](ch-regression.html#ch:regression)).
```
g <- g +
geom_smooth(method = "lm", se = FALSE) +
xlab("Average expenditure per student ($1000)") +
ylab("Average score on math SAT")
```
In Figures [3\.14](ch-vizII.html#fig:groups-color) and [3\.15](ch-vizII.html#fig:bar-facet), we plot the relationship between the average SAT math score and the expenditure per pupil (in thousands of United States dollars) among states in 2010\.
A third (categorical) variable can be added through *faceting* and/or *layering*.
In this case, we use the `mutate()` function (see Chapter [4](ch-dataI.html#ch:dataI)) to create a new variable called `SAT_rate` that places states into bins (e.g., high, medium, low) based on the percentage of students taking the SAT.
Additionally, in order to include that new variable in our plots, we use the `%+%` operator to update the data frame that is bound to our plot.
```
SAT_2010 <- SAT_2010 %>%
mutate(
SAT_rate = cut(
sat_pct,
breaks = c(0, 30, 60, 100),
labels = c("low", "medium", "high")
)
)
g <- g %+% SAT_2010
```
In Figure [3\.14](ch-vizII.html#fig:groups-color), we use the `color` aesthetic to separate the data by `SAT_rate` on a single plot (i.e., layering).
Compare this with Figure [3\.15](ch-vizII.html#fig:bar-facet), where we add a `facet_wrap()` mapped to `SAT_rate` to separate by facet.
```
g + aes(color = SAT_rate)
```
Figure 3\.14: Scatterplot using the `color` aesthetic to separate the relationship between two numeric variables by a third categorical variable.
```
g + facet_wrap(~ SAT_rate)
```
Figure 3\.15: Scatterplot using a `facet_wrap()` to separate the relationship between two numeric variables by a third categorical variable.
The `NHANES` data table provides medical, behavioral, and morphometric measurements of individuals. The scatterplot in Figure [3\.16](ch-vizII.html#fig:NHANES-height-age) shows the relationship between two of the variables, height and age. Each dot represents one person and the position of that dot signifies the value of the two variables for that person. Scatterplots are useful for visualizing a simple relationship between two variables. For instance, you can see in Figure [3\.16](ch-vizII.html#fig:NHANES-height-age) the familiar pattern of growth in height from birth to the late teens.
It’s helpful to do a bit more wrangling (more on this later) to ensure that the spatial relationship of the lines (adult men tend to be taller than adult women) matches the ordering of the legend labels. Here we use the `fct_relevel()` function (from the **forcats** package) to reset the factor levels.
```
library(NHANES)
ggplot(
data = slice_sample(NHANES, n = 1000),
aes(x = Age, y = Height, color = fct_relevel(Gender, "male"))
) +
geom_point() +
geom_smooth() +
xlab("Age (years)") +
ylab("Height (cm)") +
labs(color = "Gender")
```
Figure 3\.16: A scatterplot for 1,000 random individuals from the **NHANES** study. Note how mapping gender to color illuminates the differences in height between men and women.
Some scatterplots have special meanings. A [*time series*](https://en.wikipedia.org/w/index.php?search=time%20series)—such as the one shown in Figure [3\.17](ch-vizII.html#fig:time-series)—is just a scatterplot with time on the horizontal axis and points connected by lines to indicate temporal continuity.
In Figure [3\.17](ch-vizII.html#fig:time-series), the temperature at a weather station in western [*Massachusetts*](https://en.wikipedia.org/w/index.php?search=Massachusetts) is plotted over the course of the year.
The familiar fluctuations based on the seasons are evident.
Be especially aware of dubious causality in these plots: Is time really a good explanatory variable?
```
library(macleish)
ggplot(data = whately_2015, aes(x = when, y = temperature)) +
geom_line(color = "darkgray") +
geom_smooth() +
xlab(NULL) +
ylab("Temperature (degrees Celsius)")
```
Figure 3\.17: A time series showing the change in temperature at the MacLeish field station in 2015\.
For displaying a numerical response variable against a categorical explanatory variable, a common choice is a [*box\-and\-whisker*](https://en.wikipedia.org/w/index.php?search=box-and-whisker) (or box) plot, as shown in Figure [3\.18](ch-vizII.html#fig:macleishbox).
(More details about the data wrangling needed to create the categorical `month` variable will be provided in later chapters.)
It may be easiest to think about this as simply a graphical depiction of the [*five\-number summary*](https://en.wikipedia.org/w/index.php?search=five-number%20summary) (minimum \[0th percentile], Q1 \[25th percentile], median \[50th percentile], Q3 \[75th percentile], and maximum \[100th percentile]).
```
whately_2015 %>%
mutate(month = as.factor(lubridate::month(when, label = TRUE))) %>%
group_by(month) %>%
skim(temperature) %>%
select(-na)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var month n mean sd p0 p25 p50 p75 p100
<chr> <ord> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 temperature Jan 4464 -6.37 5.14 -22.3 -10.3 -6.25 -2.35 6.16
2 temperature Feb 4032 -9.26 5.11 -22.2 -12.3 -9.43 -5.50 4.27
3 temperature Mar 4464 -0.873 5.06 -16.2 -4.61 -0.550 2.99 13.5
4 temperature Apr 4320 8.04 5.51 -3.04 3.77 7.61 11.8 22.7
5 temperature May 4464 17.4 5.94 2.29 12.8 17.5 21.4 31.4
6 temperature Jun 4320 17.7 5.11 6.53 14.2 18.0 21.2 29.4
7 temperature Jul 4464 21.6 3.90 12.0 18.6 21.2 24.3 32.1
8 temperature Aug 4464 21.4 3.79 12.9 18.4 21.1 24.3 31.2
9 temperature Sep 4320 19.3 5.07 5.43 15.8 19 22.5 33.1
10 temperature Oct 4464 9.79 5.00 -3.97 6.58 9.49 13.3 22.3
11 temperature Nov 4320 7.28 5.65 -4.84 3.14 7.11 10.8 22.8
12 temperature Dec 4464 4.95 4.59 -6.16 1.61 5.15 8.38 18.4
```
```
ggplot(
data = whately_2015,
aes(
x = lubridate::month(when, label = TRUE),
y = temperature
)
) +
geom_boxplot() +
xlab("Month") +
ylab("Temperature (degrees Celsius)")
```
Figure 3\.18: A box\-and\-whisker of temperatures by month at the MacLeish field station.
When both the explanatory and response variables are categorical (or binned), points and lines don’t work as well.
How likely is a person to have [*diabetes*](https://en.wikipedia.org/w/index.php?search=diabetes), based on their age and [*BMI*](https://en.wikipedia.org/w/index.php?search=BMI) (body mass index)?
In the [*mosaic plot*](https://en.wikipedia.org/w/index.php?search=mosaic%20plot) (or eikosogram) shown in Figure [3\.19](ch-vizII.html#fig:NHANES-smoke) the number of observations in each cell is proportional to the area of the box.
Thus, you can see that diabetes tends to be more common for older people as well as for those who are obese, since the blue\-shaded regions are larger than expected under an independence model while the pink\-shaded regions are smaller than expected.
These provide a more accurate depiction of the intuitive notions of probability familiar from [*Venn diagrams*](https://en.wikipedia.org/w/index.php?search=Venn%20diagrams) (Olford and Cherry 2003\).
Figure 3\.19: Mosaic plot (eikosogram) of diabetes by age and weight status (BMI).
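The code for Figure [3\.19](ch-vizII.html#fig:NHANES-smoke) is not shown here; a rough sketch along these lines, assuming the **ggmosaic** package and the `Diabetes`, `AgeDecade`, and `BMI_WHO` variables from `NHANES`, produces a similar display (it is not necessarily the exact code behind the figure):

```
library(NHANES)
library(ggmosaic)
# Cell area is proportional to the number of observations in each
# age-by-BMI combination; fill shows diabetes status.
ggplot(data = NHANES) +
  geom_mosaic(aes(x = product(BMI_WHO, AgeDecade), fill = Diabetes)) +
  labs(x = "Age (decade) and BMI category", y = NULL)
```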
In Table [3\.3](ch-vizII.html#tab:lattice) we summarize the use of **ggplot2** plotting commands and their relationship to canonical data graphics. Note that the `geom_mosaic()` function is not part of **ggplot2** but rather is available through the **ggmosaic** package.
Table 3\.3: Table of canonical data graphics and their corresponding **ggplot2** commands. Note that the mosaic plot function is not part of the **ggplot2** package.
| response (\\(y\\)) | explanatory (\\(x\\)) | plot type | geom\_\*() |
| --- | --- | --- | --- |
| | numeric | histogram, density | `geom_histogram()`, `geom_density()` |
| | categorical | stacked bar | `geom_bar()` |
| numeric | numeric | scatter | `geom_point()` |
| numeric | categorical | box | `geom_boxplot()` |
| categorical | categorical | mosaic | `geom_mosaic()` |
### 3\.2\.3 Maps
Using a map to display data geographically helps both to identify particular cases and to show spatial patterns and discrepancies. In Figure [3\.20](ch-vizII.html#fig:oil-map), the shading of each country represents its oil production.
This sort of map, where the fill color of each region reflects the value of a variable, is sometimes called a [*choropleth map*](https://en.wikipedia.org/w/index.php?search=choropleth%20map).
We will learn more about mapping and how to work with spatial data in Chapter [17](ch-spatial.html#ch:spatial).
Figure 3\.20: A choropleth map displaying oil production by countries around the world in barrels per day.
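The mapping code itself is deferred to Chapter [17](ch-spatial.html#ch:spatial), but a rough sketch of a choropleth, assuming that country names in `CIACountries` line up with those returned by `map_data("world")` (they only partially do), might look like this:

```
library(tidyverse)
library(mdsr)
world_map <- map_data("world")   # requires the maps package to be installed
oil_map <- world_map %>%
  left_join(CIACountries, by = c("region" = "country"))
ggplot(oil_map, aes(x = long, y = lat, group = group, fill = oil_prod)) +
  geom_polygon(color = "white", size = 0.1) +
  scale_fill_continuous(trans = "log1p", na.value = "gray90") +
  labs(fill = "Oil production\n(barrels/day)") +
  theme_void()
```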
### 3\.2\.4 Networks
A [*network*](https://en.wikipedia.org/w/index.php?search=network) is a set of connections, called [*edges*](https://en.wikipedia.org/w/index.php?search=edges), between nodes, called [*vertices*](https://en.wikipedia.org/w/index.php?search=vertices). A vertex represents an entity. The edges indicate pairwise relationships between those entities.
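To make this vocabulary concrete, here is a toy graph with four vertices and four edges (unrelated to the cancer data discussed next), assuming the **igraph** package:

```
library(igraph)
# Each expression of the form A - B defines an undirected edge between
# vertices A and B.
toy <- graph_from_literal(A - B, B - C, C - A, C - D)
plot(toy)
```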
The `NCI60` data set is about the genetics of cancer.
The data set contains more than 40,000 probes for the expression of genes, in each of 60 cancers.
In the network displayed in Figure [3\.21](ch-vizII.html#fig:cancer-network), a vertex is a given cell line, and each is depicted as a dot.
The dot’s color and label gives the type of cancer involved.
These are ovarian, colon, central nervous system, melanoma, renal, breast, and lung cancers.
The edges between vertices show pairs of cell lines that had a strong correlation in gene expression.
Figure 3\.21: A network diagram displaying the relationship between types of cancer cell lines.
The network shows that the melanoma cell lines (ME) are closely related to each other but not so much to other cell lines. The same is true for colon cancer cell lines (CO) and for central nervous system (CN) cell lines. Lung cancers, on the other hand, tend to have associations with multiple other types of cancers. We will explore the topic of [*network science*](https://en.wikipedia.org/w/index.php?search=network%20science) in greater depth in Chapter [20](ch-netsci.html#ch:netsci).
3\.3 Extended example: Historical baby names
--------------------------------------------
For many of us, there are few things that are more personal than our name. It is impossible to remember a time when you didn’t have your name, and you carry it with you wherever you go. You instinctively react when you hear it. And yet, you didn’t choose your name—your parents did (unless you’ve changed your name).
How do parents go about choosing names? Clearly, there seem to be both short\- and long\-term trends in baby names. The popularity of the name “Bella” spiked after the lead character in [*Twilight*](https://en.wikipedia.org/w/index.php?search=Twilight) became a cultural phenomenon. Other once\-popular names seem to have fallen out of favor—writers at [*FiveThirtyEight*](https://en.wikipedia.org/w/index.php?search=FiveThirtyEight) asked, “[where have all the Elmer’s gone](http://fivethirtyeight.com/features/how-to-tell-someones-age-when-all-you-know-is-her-name/)?”
Using data from the **babynames** package, which uses public data from the [*Social Security Administration*](https://en.wikipedia.org/w/index.php?search=Social%20Security%20Administration) (SSA), we can recreate many of the plots presented in the FiveThirtyEight article, and in the process learn how to use **ggplot2** to make production\-quality data graphics.
In [the link](https://fivethirtyeight.com/wp-content/uploads/2014/05/silver-feature-joseph2.png) in the footnote[3](#fn3), FiveThirtyEight presents an informative, annotated data graphic that shows the relative ages of American males named “Joseph.” Drawing on what you have learned in Chapter [2](ch-vizI.html#ch:vizI), take a minute to jot down the visual cues, coordinate system, scales, and context present in this plot. This analysis will facilitate our use of **ggplot2** to re\-construct it. (Look ahead to Figure [3\.22](ch-vizII.html#fig:joseph) to see our recreation.)
The key insight of the FiveThirtyEight work is the estimation of the number of people with each name who are currently alive. The `lifetables` table from the **babynames** package contains
[actuarial](https://www.ssa.gov/oact/NOTES/as120/LifeTables_Tbl_7_1980.html) estimates of the number of people per 100,000 who are alive at age \\(x\\), for every \\(0 \\leq x \\leq 114\\). The `make_babynames_dist()` function in the **mdsr** package adds some more convenient variables and filters for only the data that is relevant to people alive in 2014\.[4](#fn4)
```
library(babynames)
BabynamesDist <- make_babynames_dist()
BabynamesDist
```
```
# A tibble: 1,639,722 × 9
year sex name n prop alive_prob count_thousands age_today
<dbl> <chr> <chr> <int> <dbl> <dbl> <dbl> <dbl>
1 1900 F Mary 16706 0.0526 0 16.7 114
2 1900 F Helen 6343 0.0200 0 6.34 114
3 1900 F Anna 6114 0.0192 0 6.11 114
4 1900 F Margaret 5304 0.0167 0 5.30 114
5 1900 F Ruth 4765 0.0150 0 4.76 114
6 1900 F Elizabeth 4096 0.0129 0 4.10 114
7 1900 F Florence 3920 0.0123 0 3.92 114
8 1900 F Ethel 3896 0.0123 0 3.90 114
9 1900 F Marie 3856 0.0121 0 3.86 114
10 1900 F Lillian 3414 0.0107 0 3.41 114
# … with 1,639,712 more rows, and 1 more variable: est_alive_today <dbl>
```
To find information about a specific name, we use the `filter()` function.
```
BabynamesDist %>%
filter(name == "Benjamin")
```
### 3\.3\.1 Percentage of people alive today
How did you break down Figure [3\.22](ch-vizII.html#fig:joseph)? There are two main data elements in that plot: a thick black line indicating the number of Josephs born each year, and the thin light blue bars indicating the number of Josephs born in each year that are expected to still be alive today. In both cases, the vertical axis corresponds to the number of people (in thousands), and the horizontal axis corresponds to the year of birth.
We can compose a similar plot in **ggplot2**. First we take the relevant subset of the data and set up the initial **ggplot2** object. The data frame `joseph` is bound to the plot, since this contains all of the data that we need for this plot, but we will be using it with multiple geoms. Moreover, the `year` variable is mapped to the \\(x\\)\-axis as an aesthetic. This will ensure that everything will line up properly.
```
joseph <- BabynamesDist %>%
filter(name == "Joseph" & sex == "M")
name_plot <- ggplot(data = joseph, aes(x = year))
```
Next, we will add the bars.
```
name_plot <- name_plot +
geom_col(
aes(y = count_thousands * alive_prob),
fill = "#b2d7e9",
color = "white",
size = 0.1
)
```
The `geom_col()` function adds bars, which are filled with a light blue color and a white border. The height of the bars is an aesthetic that is mapped to the estimated number of people alive today who were born in each year.
The black line is easily added using the `geom_line()` function.
```
name_plot <- name_plot +
geom_line(aes(y = count_thousands), size = 2)
```
Adding an informative label for the vertical axis and removing an uninformative label for the horizontal axis will improve the readability of our plot.
```
name_plot <- name_plot +
ylab("Number of People (thousands)") +
xlab(NULL)
```
Inspecting the `summary()` of our plot at this point can help us keep things straight—take note of the mappings. Do they accord with what you jotted down previously?
```
summary(name_plot)
```
```
data: year, sex, name, n, prop, alive_prob, count_thousands,
age_today, est_alive_today [111x9]
mapping: x = ~year
faceting: <ggproto object: Class FacetNull, Facet, gg>
compute_layout: function
draw_back: function
draw_front: function
draw_labels: function
draw_panels: function
finish_data: function
init_scales: function
map_data: function
params: list
setup_data: function
setup_params: function
shrink: TRUE
train_scales: function
vars: function
super: <ggproto object: Class FacetNull, Facet, gg>
-----------------------------------
mapping: y = ~count_thousands * alive_prob
geom_col: width = NULL, na.rm = FALSE
stat_identity: na.rm = FALSE
position_stack
mapping: y = ~count_thousands
geom_line: na.rm = FALSE, orientation = NA
stat_identity: na.rm = FALSE
position_identity
```
The final data\-driven element of the FiveThirtyEight graphic is a darker blue bar indicating the median year of birth. We can compute this with the `wtd.quantile()` function in the **Hmisc** package. Setting the `probs` argument to 0\.5 will give us the median `year` of birth, weighted by the number of people estimated to be alive today (`est_alive_today`). The `pull()` function simply extracts the `year` variable from the data frame returned by `summarize()`.
```
wtd_quantile <- Hmisc::wtd.quantile
median_yob <- joseph %>%
summarize(
year = wtd_quantile(year, est_alive_today, probs = 0.5)
) %>%
pull(year)
median_yob
```
```
50%
1975
```
We can then overplot a single bar in a darker shade of blue. Here, we are using the `ifelse()` function cleverly. If the `year` is equal to the median year of birth, then the height of the bar is the estimated number of Josephs alive today. Otherwise, the height of the bar is zero (so you can’t see it at all). In this manner, we plot only the one darker blue bar that we want to highlight.
```
name_plot <- name_plot +
geom_col(
color = "white", fill = "#008fd5",
aes(y = ifelse(year == median_yob, est_alive_today / 1000, 0))
)
```
Lastly, the FiveThirtyEight graphic contains many contextual elements specific to the name Joseph. We can add a title, annotated text, and an arrow providing focus to a specific element of the plot. Figure [3\.22](ch-vizII.html#fig:joseph) displays our reproduction.
There are a few differences in the presentation of fonts, title, etc. These can be altered using **ggplot2**’s theming framework, but we won’t explore these subtleties here (see Section [14\.5](ch-vizIII.html#sec:themes)).[5](#fn5)
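As a minimal sketch of the kind of adjustment that framework allows (these particular theme choices are only illustrative, not the settings FiveThirtyEight used), one could tweak the overall look and the title font like this:
```
name_plot +
  ggtitle("Age Distribution of American Boys Named Joseph") +
  theme_minimal() +
  theme(plot.title = element_text(face = "bold"))
```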
Here we create a `tribble()` (a row\-wise simple data frame) to add annotations.
```
context <- tribble(
~year, ~num_people, ~label,
1935, 40, "Number of Josephs\nborn each year",
1915, 13, "Number of Josephs\nborn each year
\nestimated to be alive\non 1/1/2014",
2003, 40, "The median\nliving Joseph\nis 37 years old",
)
name_plot +
ggtitle("Age Distribution of American Boys Named Joseph") +
geom_text(
data = context,
aes(y = num_people, label = label, color = label)
) +
geom_curve(
x = 1990, xend = 1974, y = 40, yend = 24,
arrow = arrow(length = unit(0.3, "cm")), curvature = 0.5
) +
  scale_color_manual(
    guide = "none",
    values = c("black", "#b2d7e9", "darkgray")
  ) +
  ylim(0, 42)
```
Figure 3\.22: Recreation of the age distribution of “Joseph” plot.
Notice that we did not update the `name_plot` object with this contextual information. This was intentional, since we can update the `data` argument of `name_plot` and obtain an analogous plot for another name. This functionality makes use of the special `%+%` operator. As shown in Figure [3\.23](ch-vizII.html#fig:josephine), the name “Josephine” enjoyed a spike in popularity around 1920 that later subsided.
```
name_plot %+% filter(
BabynamesDist,
name == "Josephine" & sex == "F"
)
```
Figure 3\.23: Age distribution of American girls named “Josephine.”
While some names are almost always associated with a particular gender, many are not. More interestingly, the proportion of people assigned male or female with a given name often varies over time. These data were presented nicely by [Nathan Yau](https://en.wikipedia.org/w/index.php?search=Nathan%20Yau) at [FlowingData](https://flowingdata.com/2013/09/25/the-most-unisex-names-in-us-history/).
We can compare how our `name_plot` differs by gender for a given name using a *facet*. To do this, we will simply add a call to the `facet_wrap()` function, which will create small multiples based on a single categorical variable, and then feed a new data frame to the plot that contains data for both sexes assigned at birth. In Figure [3\.24](ch-vizII.html#fig:jessie), we show how the prevalence of “Jessie” changed for the two sexes.
```
names_plot <- name_plot +
facet_wrap(~sex)
names_plot %+% filter(BabynamesDist, name == "Jessie")
```
Figure 3\.24: Comparison of the name “Jessie” across two genders.
The plot at FlowingData shows the 35 most common “unisex” names—that is, the names that have historically had the greatest balance between those assigned male and female at birth. We can use a `facet_grid()` to compare the gender breakdown for a few of the most common of these, as shown in Figures [3\.25](ch-vizII.html#fig:many-names) and [3\.26](ch-vizII.html#fig:many-names2).
```
many_names_plot <- name_plot +
facet_grid(name ~ sex)
mnp <- many_names_plot %+% filter(
BabynamesDist,
name %in% c("Jessie", "Marion", "Jackie")
)
mnp
```
Figure 3\.25: Gender breakdown for the three most unisex names.
Reversing the order of the variables in the call to `facet_grid()` flips the orientation of the facets.
```
mnp + facet_grid(sex ~ name)
```
Figure 3\.26: Gender breakdown for the three most unisex names, oriented vertically.
### 3\.3\.2 Most common names for women
A second interesting data graphic from the same FiveThirtyEight article is recreated in Figure [3\.27](ch-vizII.html#fig:women).
Take a moment to jump ahead and analyze this data graphic.
What are visual cues?
What are the variables?
How are the variables being mapped to the visual cues?
What geoms are present?
To recreate this data graphic, we need to collect the right data.
We begin by figuring out what the 25 most common female names are among those estimated to be alive today.
We can do this by counting the estimated number of people alive today for each name, filtering for women, sorting by the number estimated to be alive, and then taking the top 25 results.
We also need to know the median age, as well as the first and third quartiles for age among people having each name.
```
com_fem <- BabynamesDist %>%
filter(n > 100, sex == "F") %>%
group_by(name) %>%
mutate(wgt = est_alive_today / sum(est_alive_today)) %>%
summarize(
N = n(),
est_num_alive = sum(est_alive_today),
quantiles = list(
wtd_quantile(
age_today, est_alive_today, probs = 1:3/4, na.rm = TRUE
)
)
) %>%
mutate(measures = list(c("q1_age", "median_age", "q3_age"))) %>%
unnest(cols = c(quantiles, measures)) %>%
pivot_wider(names_from = measures, values_from = quantiles) %>%
arrange(desc(est_num_alive)) %>%
head(25)
```
This data graphic is a bit trickier than the previous one. We’ll start by binding the data, and defining the \\(x\\) and \\(y\\) aesthetics.
We put the names on the \\(x\\)\-axis and the `median_age` on the \\(y\\)—the reasons for doing so will be made clearer later.
We will also define the title of the plot, and remove the \\(x\\)\-axis label, since it is self\-evident.
```
w_plot <- ggplot(
data = com_fem,
aes(x = reorder(name, -median_age), y = median_age)
) +
xlab(NULL) +
ylab("Age (in years)") +
ggtitle("Median ages for females with the 25 most common names")
```
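Note that the `reorder()` call in the `aes()` mapping orders the levels of `name` by `median_age` (the minus sign gives descending order), so the names will appear in age order rather than alphabetically in the finished plot.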
The next elements to add are the gold rectangles. To do this, we use the `geom_linerange()` function.
It may help to think of these not as rectangles, but as really thick lines.
Because we have already mapped the names to the \\(x\\)\-axis, we only need to specify the mappings for `ymin` and `ymax`.
These are mapped to the first and third quartiles, respectively.
We will also make these lines very thick and color them appropriately.
The `geom_linerange()` function only understands `ymin` and `ymax`—there is not a corresponding function with `xmin` and `xmax`.
However, we will fix this later by transposing the figure.
We have also added a slight `alpha` transparency to allow the gridlines to be visible underneath the gold rectangles.
```
w_plot <- w_plot +
geom_linerange(
aes(ymin = q1_age, ymax = q3_age),
color = "#f3d478",
size = 4.5,
alpha = 0.8
)
```
There is a red dot indicating the median age for each of these names. If you look carefully, you can see a white border around each red dot. The default glyph for `geom_point()` is a solid dot, which is `shape` 19\. By changing it to `shape` 21, we can use both the `fill` and `color` arguments.
```
w_plot <- w_plot +
geom_point(
fill = "#ed3324",
color = "white",
size = 2,
shape = 21
)
```
It remains only to add the context and flip our plot around so the orientation matches the original figure.
The `coord_flip()` function does exactly that.
```
context <- tribble(
~median_age, ~x, ~label,
65, 24, "median",
29, 16, "25th",
48, 16, "75th percentile",
)
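# reference lines and axis breaks at ages 15, 25, ..., 75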
age_breaks <- 1:7 * 10 + 5
w_plot +
geom_point(
aes(y = 60, x = 24),
fill = "#ed3324",
color = "white",
size = 2,
shape = 21
) +
geom_text(data = context, aes(x = x, label = label)) +
geom_point(aes(y = 24, x = 16), shape = 17) +
geom_point(aes(y = 56, x = 16), shape = 17) +
geom_hline(
data = tibble(x = age_breaks),
aes(yintercept = x),
linetype = 3
) +
scale_y_continuous(breaks = age_breaks) +
coord_flip()
```
Figure 3\.27: Recreation of FiveThirtyEight’s plot of the age distributions for the 25 most common women’s names.
You will note that the name “Anna” was fifth most common in the original FiveThirtyEight article but did not appear in Figure [3\.27](ch-vizII.html#fig:women). This appears to be a result of that name’s extraordinarily large range and the pro\-rating that FiveThirtyEight did to their data. The “older” names—including Anna—were more affected by this alteration. Anna was the 47th most popular name by our calculations.
3\.4 Further resources
----------------------
The grammar of graphics was created by Wilkinson et al. (2005\), and implemented in **ggplot2** by Hadley Wickham (2016\), now in a second edition. The **ggplot2** [cheat sheet](https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf) produced by **RStudio** is an excellent reference for understanding the various features of **ggplot2**.
3\.5 Exercises
--------------
**Problem 1 (Easy)**: [Angelica Schuyler Church](https://en.wikipedia.org/wiki/Angelica_Schuyler_Church) (1756–1814\) was the daughter of New York Governor Philip Schuyler and sister of
Elizabeth Schuyler Hamilton. Angelica, New York was named after her. Using the `babynames` package generate a plot of the reported proportion of babies born with the name Angelica over time and interpret the figure.
**Problem 2 (Easy)**: Using data from the `nasaweather` package, create a scatterplot between `wind` and `pressure`, with color being used to distinguish the `type` of storm.
**Problem 3 (Medium)**: The following questions use the `Marriage` data set from the `mosaicData` package.
```
library(mosaicData)
```
1. Create an informative and meaningful data graphic.
2. Identify each of the visual cues that you are using, and describe how they are related to each variable.
3. Create a data graphic with at least *five* variables (either quantitative or categorical). For the purposes of this exercise, do not worry about making your visualization meaningful—just try to encode five variables into one plot.
**Problem 4 (Medium)**: The `macleish` package contains weather data collected every 10 minutes in 2015 from two weather stations in Whately, MA.
```
library(tidyverse)
library(macleish)
glimpse(whately_2015)
```
```
Rows: 52,560
Columns: 8
$ when <dttm> 2015-01-01 00:00:00, 2015-01-01 00:10:00, 2015-01…
$ temperature <dbl> -9.32, -9.46, -9.44, -9.30, -9.32, -9.34, -9.30, -…
$ wind_speed <dbl> 1.40, 1.51, 1.62, 1.14, 1.22, 1.09, 1.17, 1.31, 1.…
$ wind_dir <dbl> 225, 248, 258, 244, 238, 242, 242, 244, 226, 220, …
$ rel_humidity <dbl> 54.5, 55.4, 56.2, 56.4, 56.9, 57.2, 57.7, 58.2, 59…
$ pressure <int> 985, 985, 985, 985, 984, 984, 984, 984, 984, 984, …
$ solar_radiation <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
$ rainfall <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
```
Using `ggplot2`, create a data graphic that displays the average temperature over each 10\-minute interval (`temperature`) as a function of time (`when`).
**Problem 5 (Medium)**: Use the `MLB_teams` data in the `mdsr` package to create an informative data graphic that illustrates the relationship between winning percentage and payroll in context.
**Problem 6 (Medium)**: The `MLB_teams` data set in the `mdsr` package contains information about Major League Baseball teams from 2008–2014\. There are several quantitative and a few categorical variables present. See how many variables you can illustrate on a single plot in R. The current record is 7\. (Note: This is *not* good graphical practice—it is merely an exercise to help you understand how to use visual cues and aesthetics!)
**Problem 7 (Medium)**: The `RailTrail` data set from the `mosaicData` package describes the usage of a rail trail in Western Massachusetts.
Use these data to answer the following questions.
1. Create a scatterplot of the number of crossings per day `volume` against the high temperature that day
2. Separate your plot into facets by `weekday` (an indicator of weekend/holiday vs. weekday)
3. Add regression lines to the two facets
**Problem 8 (Medium)**: Using data from the `nasaweather` package, use the `geom_path` function to plot the path of each tropical storm in the `storms` data table. Use color to distinguish the storms from one another, and use faceting to plot each `year` in its own panel.
**Problem 9 (Medium)**: Using the `penguins` data set from the `palmerpenguins` package:
1. Create a scatterplot of `bill_length_mm` against `bill_depth_mm` where individual species are colored and a regression line is added to each species.
What do you observe about the association of bill depth and bill length?
2. Repeat the same scatterplot but now separate your plot into facets by `species`, adding a regression line to each facet.
How would you summarize the association between bill depth and bill length?
**Problem 10 (Hard)**: Use the `make_babynames_dist()` function in the `mdsr` package to recreate the “Deadest Names” graphic from FiveThirtyEight ([https://fivethirtyeight.com/features/how\-to\-tell\-someones\-age\-when\-all\-you\-know\-is\-her\-name](https://fivethirtyeight.com/features/how-to-tell-someones-age-when-all-you-know-is-her-name)).
```
library(tidyverse)
library(mdsr)
babynames_dist <- make_babynames_dist()
```
3\.6 Supplementary exercises
----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-vizII.html\#datavizII\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-vizII.html#datavizII-online-exercises)
**Problem 1 (Easy)**:
Consider the live [Wikipedia Recent Changes Map](http://www.hatnote.com/#en).
1. Identify the visual cues, coordinate system, and scale(s).
2. How many variables are depicted in the graphic? Explicitly link each variable to a visual cue that you listed above.
3. Critique this data graphic using the taxonomy described in this chapter.
**Problem 2 (Easy)**: Consider the following data graphic about [Denzel Washington](https://en.wikipedia.org/wiki/Denzel_Washington), a two\-time Academy Award\-winning actor. It may be helpful to read the original article, entitled “[The Four Types of Denzel Washington Movies](https://fivethirtyeight.com/features/the-four-types-of-denzel-washington-movies/).”
What variable is mapped to the color aesthetic?
**Problem 3 (Easy)**: Consider the following data graphic about world\-class swimmers. Emphasis is on [Katie Ledecky](https://en.wikipedia.org/wiki/Katie_Ledecky), a five\-time Olympic gold medalist. It may be helpful to peruse the original article, entitled “[Katie Ledecky Is The Present And The Future Of Swimming](https://fivethirtyeight.com/features/katie-ledecky-is-the-present-and-the-future-of-swimming/).”
Suppose that the graphic was generated from a data frame like the one shown below (it wasn’t—these are fake data).
```
# A tibble: 3 × 4
name gender distance time_in_sd
<chr> <chr> <dbl> <dbl>
1 Ledecky F 100 -0.8
2 Ledecky F 200 1.7
3 Ledecky F 400 2.9
```
Note: Recall that [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) is a measure of the spread of a set of numbers. In this case, a time that is \+1 standard deviation *above* the mean is *faster* than the average time (among the top 50 times).
1. What variable is mapped to the position aesthetic in the horizontal direction?
2. What variable is mapped to the color aesthetic?
3. What variable is mapped to the position aesthetic in the vertical direction?
**Problem 4 (Easy)**: Consider the following data graphic, taken from the article “[Who does not Pay Income Tax?](http://thefuturebuzz.com/2012/09/19/simplicity-with-data-visualization-is-still-best)”
1. Identify the visual cues, coordinate system, and scale(s).
2. How many variables are depicted in the graphic? Explicitly link each variable to a visual cue that you listed above.
3. Critique this data graphic using the taxonomy described in this chapter.
**Problem 5 (Hard)**: Using the `babynames` package, and the name ‘Jessie,’ make a plot that resembles this graphic: [the most unisex names in US history](https://flowingdata.com/2013/09/25/the-most-unisex-names-in-us-history/).
Chapter 4 Data wrangling on one table
=====================================
This chapter introduces basics of how to wrangle data in **R**. Wrangling skills will provide an intellectual and practical foundation for working with modern data.
4\.1 A grammar for data wrangling
---------------------------------
In much the same way that **ggplot2** presents a grammar for data graphics, the **dplyr** package presents a grammar for data wrangling (H. Wickham and Francois 2020\).
This package is loaded when `library(tidyverse)` is run.
[Hadley Wickham](https://en.wikipedia.org/w/index.php?search=Hadley%20Wickham), one of the authors of **dplyr** and the **tidyverse**, has identified five [*verbs*](https://en.wikipedia.org/w/index.php?search=verbs) for working with data in a data frame:
* `select()`: take a subset of the columns (i.e., features, variables)
* `filter()`: take a subset of the rows (i.e., observations)
* `mutate()`: add or modify existing columns
* `arrange()`: sort the rows
* `summarize()`: aggregate the data across rows (e.g., group it according to some criteria)
Each of these functions takes a data frame as its first argument, and returns a data frame.
These five verbs can be used in conjunction with each other to provide a powerful means to slice\-and\-dice a single table of data.
As with any grammar, what these verbs mean on their own is one thing, but being able to combine these verbs with nouns (i.e., data frames) and adverbs (i.e., arguments) creates a flexible and powerful way to wrangle data.
Mastery of these five verbs can make the computation of most any descriptive statistic a breeze and facilitate further analysis.
Wickham’s approach is inspired by his desire to blur the boundaries between **R** and the ubiquitous [*relational database*](https://en.wikipedia.org/w/index.php?search=relational%20database) querying syntax [*SQL*](https://en.wikipedia.org/w/index.php?search=SQL).
When we revisit SQL in Chapter [15](ch-sql.html#ch:sql), we will see the close relationship between these two computing paradigms.
A related concept more popular in business settings is the [*OLAP*](https://en.wikipedia.org/w/index.php?search=OLAP) (online analytical processing) hypercube, which refers to the process by which multidimensional data is “sliced\-and\-diced.”
### 4\.1\.1 `select()` and `filter()`
The two simplest of the five verbs are `filter()` and `select()`, which return a subset of the rows or columns of a data frame, respectively.
Generally, if we have a data frame that consists of \\(n\\) rows and \\(p\\) columns, Figures [4\.1](ch-dataI.html#fig:filter) and [4\.2](ch-dataI.html#fig:select) illustrate the effect of filtering this data frame based on a condition on one of the columns, and selecting a subset of the columns, respectively.
Figure 4\.1: The `filter()` function. At left, a data frame that contains matching entries in a certain column for only a subset of the rows. At right, the resulting data frame after filtering.
Figure 4\.2: The `select()` function. At left, a data frame, from which we retrieve only a few of the columns. At right, the resulting data frame after selecting those columns.
We will demonstrate the use of these functions on the `presidential` data frame (from the **ggplot2** package), which contains \\(p\=4\\) variables about the terms of \\(n\=11\\) recent U.S. presidents.
```
library(tidyverse)
library(mdsr)
presidential
```
```
# A tibble: 11 × 4
name start end party
<chr> <date> <date> <chr>
1 Eisenhower 1953-01-20 1961-01-20 Republican
2 Kennedy 1961-01-20 1963-11-22 Democratic
3 Johnson 1963-11-22 1969-01-20 Democratic
4 Nixon 1969-01-20 1974-08-09 Republican
5 Ford 1974-08-09 1977-01-20 Republican
6 Carter 1977-01-20 1981-01-20 Democratic
7 Reagan 1981-01-20 1989-01-20 Republican
8 Bush 1989-01-20 1993-01-20 Republican
9 Clinton 1993-01-20 2001-01-20 Democratic
10 Bush 2001-01-20 2009-01-20 Republican
11 Obama 2009-01-20 2017-01-20 Democratic
```
To retrieve only the names and party affiliations of these presidents, we would use `select()`.
The first [*argument*](https://en.wikipedia.org/w/index.php?search=argument) to the `select()` function is the data frame, followed by an arbitrarily long list of column names, separated by commas.
```
select(presidential, name, party)
```
```
# A tibble: 11 × 2
name party
<chr> <chr>
1 Eisenhower Republican
2 Kennedy Democratic
3 Johnson Democratic
4 Nixon Republican
5 Ford Republican
6 Carter Democratic
7 Reagan Republican
8 Bush Republican
9 Clinton Democratic
10 Bush Republican
11 Obama Democratic
```
Similarly, the first argument to `filter()` is a data frame, and subsequent arguments are logical conditions that are evaluated on any involved columns.
If we want to retrieve only those rows that pertain to Republican presidents, we need to specify that the value of the `party` variable is equal to `Republican`.
```
filter(presidential, party == "Republican")
```
```
# A tibble: 6 × 4
name start end party
<chr> <date> <date> <chr>
1 Eisenhower 1953-01-20 1961-01-20 Republican
2 Nixon 1969-01-20 1974-08-09 Republican
3 Ford 1974-08-09 1977-01-20 Republican
4 Reagan 1981-01-20 1989-01-20 Republican
5 Bush 1989-01-20 1993-01-20 Republican
6 Bush 2001-01-20 2009-01-20 Republican
```
Note that the `==` is a *test for equality*.
If we were to use only a single equal sign here, we would be asserting that the value of `party` was `Republican`.
This would result in an error.
The quotation marks around `Republican` are necessary here, since `Republican` is a literal value, and not a variable name.
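To see the distinction concretely, here is a minimal sketch using the `presidential` data frame from above (the exact error message will depend on your version of **dplyr**):
```
# == compares the value of party in each row to the string "Republican"
filter(presidential, party == "Republican")

# a single = is read as a named argument rather than a comparison,
# so this version fails with an error pointing you toward ==
# filter(presidential, party = "Republican")
```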
Combining the `filter()` and `select()` commands enables one to drill down to very specific pieces of information.
For example, we can find which Democratic presidents served since [*Watergate*](https://en.wikipedia.org/w/index.php?search=Watergate).
```
select(
filter(presidential, lubridate::year(start) > 1973 & party == "Democratic"),
name
)
```
```
# A tibble: 3 × 1
name
<chr>
1 Carter
2 Clinton
3 Obama
```
In the syntax demonstrated above, the `filter()` operation is [*nested*](https://en.wikipedia.org/w/index.php?search=nested) inside the `select()` operation.
As noted above, each of the five verbs takes and returns a data frame, which makes this type of nesting possible.
Shortly, we will see how these verbs can be chained together to make rather long expressions that can become very difficult to read.
Instead, we recommend the use of the `%>%` (pipe) operator.
Pipe\-forwarding is an alternative to nesting that yields code that can be easily read from top to bottom.
With the pipe, we can write the same expression as above in this more readable syntax.
```
presidential %>%
filter(lubridate::year(start) > 1973 & party == "Democratic") %>%
select(name)
```
```
# A tibble: 3 × 1
name
<chr>
1 Carter
2 Clinton
3 Obama
```
This expression is called a [*pipeline*](https://en.wikipedia.org/w/index.php?search=pipeline).
Notice how the expression
```
dataframe %>% filter(condition)
```
is equivalent to `filter(dataframe, condition)`. In later examples, we will see how this operator can make code more readable and efficient, particularly for complex operations on large data sets.
### 4\.1\.2 `mutate()` and `rename()`
Frequently, in the process of conducting our analysis, we will create, re\-define, and rename some of our variables.
The functions `mutate()` and `rename()` provide these capabilities.
A graphical illustration of the `mutate()` operation is shown in Figure [4\.3](ch-dataI.html#fig:mutate).
Figure 4\.3: The `mutate()` function. At right, the resulting data frame after adding a new column.
While we have the raw data on when each of these presidents took and relinquished office, we don’t actually have a numeric variable giving the length of each president’s term.
Of course, we can derive this information from the dates given, and add the result as a new column to our data frame.
This date arithmetic is made easier through the use of the **lubridate** package, which we use to compute the number of years (`dyears()`) that elapsed during the `interval()` from the `start` until the `end` of each president’s term.
In this situation, it is generally considered good style to create a new object rather than [*clobbering*](https://en.wikipedia.org/w/index.php?search=clobbering) the one that comes from an external source.
To preserve the existing `presidential` data frame, we save the result of `mutate()` as a new object called `my_presidents`.
```
library(lubridate)
my_presidents <- presidential %>%
mutate(term.length = interval(start, end) / dyears(1))
my_presidents
```
```
# A tibble: 11 × 5
name start end party term.length
<chr> <date> <date> <chr> <dbl>
1 Eisenhower 1953-01-20 1961-01-20 Republican 8
2 Kennedy 1961-01-20 1963-11-22 Democratic 2.84
3 Johnson 1963-11-22 1969-01-20 Democratic 5.16
4 Nixon 1969-01-20 1974-08-09 Republican 5.55
5 Ford 1974-08-09 1977-01-20 Republican 2.45
6 Carter 1977-01-20 1981-01-20 Democratic 4
7 Reagan 1981-01-20 1989-01-20 Republican 8
8 Bush 1989-01-20 1993-01-20 Republican 4
9 Clinton 1993-01-20 2001-01-20 Democratic 8
10 Bush 2001-01-20 2009-01-20 Republican 8
11 Obama 2009-01-20 2017-01-20 Democratic 8
```
The `mutate()` function can also be used to modify the data in an existing column.
Suppose that we wanted to add to our data frame a variable containing the year in which each president was elected.
Our first (naïve) attempt might assume that every president was elected in the year before he took office.
Note that `mutate()` returns a data frame, so if we want to modify our existing data frame, we need to overwrite it with the results.
```
my_presidents <- my_presidents %>%
mutate(elected = year(start) - 1)
my_presidents
```
```
# A tibble: 11 × 6
name start end party term.length elected
<chr> <date> <date> <chr> <dbl> <dbl>
1 Eisenhower 1953-01-20 1961-01-20 Republican 8 1952
2 Kennedy 1961-01-20 1963-11-22 Democratic 2.84 1960
3 Johnson 1963-11-22 1969-01-20 Democratic 5.16 1962
4 Nixon 1969-01-20 1974-08-09 Republican 5.55 1968
5 Ford 1974-08-09 1977-01-20 Republican 2.45 1973
6 Carter 1977-01-20 1981-01-20 Democratic 4 1976
7 Reagan 1981-01-20 1989-01-20 Republican 8 1980
8 Bush 1989-01-20 1993-01-20 Republican 4 1988
9 Clinton 1993-01-20 2001-01-20 Democratic 8 1992
10 Bush 2001-01-20 2009-01-20 Republican 8 2000
11 Obama 2009-01-20 2017-01-20 Democratic 8 2008
```
Some entries in this data set are wrong, because presidential elections are only held every four years.
[Lyndon Johnson](https://en.wikipedia.org/w/index.php?search=Lyndon%20Johnson) assumed the office after President [John Kennedy](https://en.wikipedia.org/w/index.php?search=John%20Kennedy) was assassinated in 1963, and [Gerald Ford](https://en.wikipedia.org/w/index.php?search=Gerald%20Ford) took over after President [Richard Nixon](https://en.wikipedia.org/w/index.php?search=Richard%20Nixon) resigned in 1974\.
Thus, there were no presidential elections in 1962 or 1973, as suggested in our data frame.
We should overwrite these values with `NA`’s—which is how **R** denotes missing values.
We can use the `ifelse()` function to do this.
Here, if the value of `elected` is either 1962 or 1973, we overwrite that value with `NA`.[6](#fn6)
Otherwise, we overwrite it with the same value that it currently has.
In this case, instead of checking to see whether the value of `elected` equals `1962` or `1973`, for brevity we can use the `%in%` operator to check to see whether the value of `elected` belongs to the vector consisting of `1962` and `1973`.
```
my_presidents <- my_presidents %>%
mutate(elected = ifelse(elected %in% c(1962, 1973), NA, elected))
my_presidents
```
```
# A tibble: 11 × 6
name start end party term.length elected
<chr> <date> <date> <chr> <dbl> <dbl>
1 Eisenhower 1953-01-20 1961-01-20 Republican 8 1952
2 Kennedy 1961-01-20 1963-11-22 Democratic 2.84 1960
3 Johnson 1963-11-22 1969-01-20 Democratic 5.16 NA
4 Nixon 1969-01-20 1974-08-09 Republican 5.55 1968
5 Ford 1974-08-09 1977-01-20 Republican 2.45 NA
6 Carter 1977-01-20 1981-01-20 Democratic 4 1976
7 Reagan 1981-01-20 1989-01-20 Republican 8 1980
8 Bush 1989-01-20 1993-01-20 Republican 4 1988
9 Clinton 1993-01-20 2001-01-20 Democratic 8 1992
10 Bush 2001-01-20 2009-01-20 Republican 8 2000
11 Obama 2009-01-20 2017-01-20 Democratic 8 2008
```
Finally, it is considered bad practice to use periods in the name of functions, data frames, and variables in **R**. Ill\-advised periods could conflict with **R**’s use of [*generic functions*](https://en.wikipedia.org/w/index.php?search=generic%20functions) (i.e., **R**’s mechanism for [method overloading](http://en.wikipedia.org/wiki/Function_overloading)).
Thus, we should change the name of the `term.length` column that we created earlier.
We can achieve this using the `rename()` function.
In this book, we will use [*snake\_case*](https://en.wikipedia.org/w/index.php?search=snake_case) for function and variable names.
Don’t use periods in the names of functions, data frames, or variables, as this can be confused with the object\-oriented programming model.
```
my_presidents <- my_presidents %>%
rename(term_length = term.length)
my_presidents
```
```
# A tibble: 11 × 6
name start end party term_length elected
<chr> <date> <date> <chr> <dbl> <dbl>
1 Eisenhower 1953-01-20 1961-01-20 Republican 8 1952
2 Kennedy 1961-01-20 1963-11-22 Democratic 2.84 1960
3 Johnson 1963-11-22 1969-01-20 Democratic 5.16 NA
4 Nixon 1969-01-20 1974-08-09 Republican 5.55 1968
5 Ford 1974-08-09 1977-01-20 Republican 2.45 NA
6 Carter 1977-01-20 1981-01-20 Democratic 4 1976
7 Reagan 1981-01-20 1989-01-20 Republican 8 1980
8 Bush 1989-01-20 1993-01-20 Republican 4 1988
9 Clinton 1993-01-20 2001-01-20 Democratic 8 1992
10 Bush 2001-01-20 2009-01-20 Republican 8 2000
11 Obama 2009-01-20 2017-01-20 Democratic 8 2008
```
### 4\.1\.3 `arrange()`
The function `sort()` will sort a vector but not a data frame. The function that will sort a data frame is called `arrange()`, and its behavior is illustrated in Figure [4\.4](ch-dataI.html#fig:arrange).
Figure 4\.4: The `arrange()` function. At left, a data frame with an ordinal variable. At right, the resulting data frame after sorting the rows in descending order of that variable.
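As a quick contrast, a minimal sketch: `sort()` is happy to order a single column (a vector), but it is not designed to reorder the rows of a whole data frame.
```
sort(my_presidents$term_length)  # a sorted vector of term lengths
# sort(my_presidents)            # an error: sort() does not accept a data frame
```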
In order to use `arrange()` on a data frame, you have to specify the data frame, and the column by which you want it to be sorted.
You can also specify the direction in which it should be sorted. Specifying multiple sort conditions will help break ties.
To sort our `presidential` data frame by the length of each president’s term, we specify that we want the column `term_length` in descending order.
```
my_presidents %>%
arrange(desc(term_length))
```
```
# A tibble: 11 × 6
name start end party term_length elected
<chr> <date> <date> <chr> <dbl> <dbl>
1 Eisenhower 1953-01-20 1961-01-20 Republican 8 1952
2 Reagan 1981-01-20 1989-01-20 Republican 8 1980
3 Clinton 1993-01-20 2001-01-20 Democratic 8 1992
4 Bush 2001-01-20 2009-01-20 Republican 8 2000
5 Obama 2009-01-20 2017-01-20 Democratic 8 2008
6 Nixon 1969-01-20 1974-08-09 Republican 5.55 1968
7 Johnson 1963-11-22 1969-01-20 Democratic 5.16 NA
8 Carter 1977-01-20 1981-01-20 Democratic 4 1976
9 Bush 1989-01-20 1993-01-20 Republican 4 1988
10 Kennedy 1961-01-20 1963-11-22 Democratic 2.84 1960
11 Ford 1974-08-09 1977-01-20 Republican 2.45 NA
```
A number of presidents completed either one or two full terms, and thus have the exact same term length (4 or 8 years, respectively).
To break these ties, we can further sort by `party` and `elected`.
```
my_presidents %>%
arrange(desc(term_length), party, elected)
```
```
# A tibble: 11 × 6
name start end party term_length elected
<chr> <date> <date> <chr> <dbl> <dbl>
1 Clinton 1993-01-20 2001-01-20 Democratic 8 1992
2 Obama 2009-01-20 2017-01-20 Democratic 8 2008
3 Eisenhower 1953-01-20 1961-01-20 Republican 8 1952
4 Reagan 1981-01-20 1989-01-20 Republican 8 1980
5 Bush 2001-01-20 2009-01-20 Republican 8 2000
6 Nixon 1969-01-20 1974-08-09 Republican 5.55 1968
7 Johnson 1963-11-22 1969-01-20 Democratic 5.16 NA
8 Carter 1977-01-20 1981-01-20 Democratic 4 1976
9 Bush 1989-01-20 1993-01-20 Republican 4 1988
10 Kennedy 1961-01-20 1963-11-22 Democratic 2.84 1960
11 Ford 1974-08-09 1977-01-20 Republican 2.45 NA
```
Note that the default sort order is [*ascending order*](https://en.wikipedia.org/w/index.php?search=ascending%20order), so we do not need to specify an order if that is what we want.
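For instance, a minimal sketch of the same sort in the default ascending order of term length (shortest terms first):
```
my_presidents %>%
  arrange(term_length)
```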
### 4\.1\.4 `summarize()` with `group_by()`
Our last of the five verbs for single\-table analysis is `summarize()`, which is nearly always used in conjunction with `group_by()`.
The previous four verbs provided us with means to manipulate a data frame in powerful and flexible ways.
But the extent of the analysis we can perform with these four verbs alone is limited.
On the other hand, `summarize()` with `group_by()` enables us to make comparisons.
Figure 4\.5: The `summarize()` function. At left, a data frame. At right, the resulting data frame after aggregating four of the columns.
When used alone, `summarize()` collapses a data frame into a single row[7](#fn7).
This is illustrated in Figure [4\.5](ch-dataI.html#fig:summarize).
Critically, we have to specify *how* we want to reduce an entire column of data into a single value.
The method of aggregation that we specify controls what will appear in the output.
```
my_presidents %>%
summarize(
N = n(),
first_year = min(year(start)),
last_year = max(year(end)),
num_dems = sum(party == "Democratic"),
years = sum(term_length),
avg_term_length = mean(term_length)
)
```
```
# A tibble: 1 × 6
N first_year last_year num_dems years avg_term_length
<int> <dbl> <dbl> <int> <dbl> <dbl>
1 11 1953 2017 5 64 5.82
```
The first argument to `summarize()` is a data frame, followed by a list of variables that will appear in the output.
Note that every variable in the output is defined by operations performed on *vectors*—not on individual values.
This is essential, since if the specification of an output variable is not an operation on a vector, there is no way for **R** to know how to collapse each column.
In this example, the function `n()` simply counts the number of rows.
This is often useful information.
To help ensure that data aggregation is being done correctly, use `n()` every time you use `summarize()`.
The next two variables determine the first year that one of these presidents assumed office.
This is the smallest year in the `start` column.
Similarly, the most recent year is the largest year in the `end` column.
The variable `num_dems` simply counts the number of rows in which the value of the `party` variable was `Democratic`.
Finally, the last two variables compute the sum and average of the `term_length` variable.
We see that 5 of the 11 presidents who served from 1953 to 2017 were Democrats, and the average term length over these 64 years was about 5\.8 years.
This begs the question of whether Democratic or Republican presidents served a longer average term during this time period.
To figure this out, we can just execute `summarize()` again, but this time, instead of the first argument being the data frame `my_presidents`, we will specify that the rows of the `my_presidents` data frame should be grouped by the values of the `party` variable.
In this manner, the same computations as above will be carried out for each party separately.
```
my_presidents %>%
group_by(party) %>%
summarize(
N = n(),
first_year = min(year(start)),
last_year = max(year(end)),
num_dems = sum(party == "Democratic"),
years = sum(term_length),
avg_term_length = mean(term_length)
)
```
```
# A tibble: 2 × 7
party N first_year last_year num_dems years avg_term_length
<chr> <int> <dbl> <dbl> <int> <dbl> <dbl>
1 Democratic 5 1961 2017 5 28 5.6
2 Republican 6 1953 2009 0 36 6
```
This provides us with the valuable information that the six Republican presidents served an average of 6 years in office, while the five Democratic presidents served an average of only 5\.6\. As with all of the **dplyr** verbs, the final output is a data frame.
In this chapter, we are using the **dplyr** package. The most common way to extract data from data tables is with SQL (structured query language). We’ll introduce SQL in Chapter [15](ch-sql.html#ch:sql). The **dplyr** package provides an interface that fits more smoothly into an overall data analysis workflow and is, in our opinion, easier to learn. Once you understand data wrangling with **dplyr**, it’s straightforward to learn SQL if needed. **dplyr** can also work as an interface to many systems that use SQL internally.
4\.2 Extended example: Ben’s time with the Mets
-----------------------------------------------
In this extended example, we will continue to explore [Sean Lahman](https://en.wikipedia.org/w/index.php?search=Sean%20Lahman)’s historical baseball database, which contains complete seasonal records for all players on all [*Major League Baseball*](https://en.wikipedia.org/w/index.php?search=Major%20League%20Baseball) (MLB) teams going back to 1871\.
These data are made available in **R** via the **Lahman** package (Friendly et al. 2021\).
Here again, while domain knowledge may be helpful, it is not necessary to follow the example.
To flesh out your understanding, try reading the [Wikipedia entry on Major League Baseball](https://en.wikipedia.org/wiki/Major_League_Baseball).
```
library(Lahman)
dim(Teams)
```
```
[1] 2955 48
```
The `Teams` table contains the seasonal results of every major league team in every season since 1871\.
There are 2955 rows and 48 columns in this table, which is far too much to show here, and would make for a quite unwieldy spreadsheet.
Of course, we can take a peek at what this table looks like by printing the first few rows of the table to the screen with the `head()` command, but we won’t print that on the page of this book.
[Ben Baumer](https://en.wikipedia.org/w/index.php?search=Ben%20Baumer) worked for the [New York Mets](https://en.wikipedia.org/w/index.php?search=New%20York%20Mets) from 2004 to 2012\.
How did the team do during those years?
We can use `filter()` and `select()` to quickly identify only those pieces of information that we care about.
```
mets <- Teams %>%
filter(teamID == "NYN")
my_mets <- mets %>%
filter(yearID %in% 2004:2012)
my_mets %>%
select(yearID, teamID, W, L)
```
```
yearID teamID W L
1 2004 NYN 71 91
2 2005 NYN 83 79
3 2006 NYN 97 65
4 2007 NYN 88 74
5 2008 NYN 89 73
6 2009 NYN 70 92
7 2010 NYN 79 83
8 2011 NYN 77 85
9 2012 NYN 74 88
```
Notice that we have broken this down into three steps. First, we filter the rows of the `Teams` data frame into only those teams that correspond to the New York Mets.[8](#fn8)
There are 59 of those, since the Mets joined the [*National League*](https://en.wikipedia.org/w/index.php?search=National%20League) in 1962\.
```
nrow(mets)
```
```
[1] 59
```
Next, we filtered these data so as to include only those seasons in which Ben worked for the team—those with `yearID` between 2004 and 2012\.
Finally, we printed to the screen only those columns that were relevant to our question: the year, the team’s ID, and the number of wins and losses that the team had.
While this process is logical, the code can get unruly, since two ancillary data frames (`mets` and `my_mets`) were created during the process.
It may be the case that we’d like to use these data frames later in the analysis.
But if not, they are just cluttering our workspace, and eating up memory.
A more streamlined way to achieve the same result would be to *nest* these commands together.
```
select(filter(Teams, teamID == "NYN" & yearID %in% 2004:2012),
yearID, teamID, W, L)
```
```
yearID teamID W L
1 2004 NYN 71 91
2 2005 NYN 83 79
3 2006 NYN 97 65
4 2007 NYN 88 74
5 2008 NYN 89 73
6 2009 NYN 70 92
7 2010 NYN 79 83
8 2011 NYN 77 85
9 2012 NYN 74 88
```
This way, no additional data frames were created.
However, it is easy to see that as we nest more and more of these operations together, this code could become difficult to read.
To maintain readability, we instead chain these operations, rather than nest them (and get the same exact results).
```
Teams %>%
filter(teamID == "NYN" & yearID %in% 2004:2012) %>%
select(yearID, teamID, W, L)
```
This [*piping*](https://en.wikipedia.org/w/index.php?search=piping) syntax
(introduced in Section [4\.1\.1](ch-dataI.html#sec:pipe)) is
provided by the **dplyr** package.
It retains the step\-by\-step logic of our original code, while being easily readable, and efficient with respect to memory and the creation of temporary data frames.
In fact, there are also performance enhancements under the hood that make this the most efficient way to do these kinds of computations.
For these reasons we will use this syntax whenever possible throughout the book.
Note that we only have to type `Teams` once—it is implied by the pipe operator (`%>%`) that the subsequent command takes the previous data frame as its first argument. Thus, `df %>% f(y)` is equivalent to `f(df, y)`.
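To see this equivalence concretely, here is a minimal check (assuming the **Lahman** and **dplyr** packages are loaded, as above); both expressions perform the same filtering and should return identical data frames, so `identical()` should report `TRUE`.

```
# The piped and nested forms of the same operation are interchangeable:
# both filter the Teams table down to the Mets' seasons.
identical(
  Teams %>% filter(teamID == "NYN"),
  filter(Teams, teamID == "NYN")
)
```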
We’ve answered the simple question of how the Mets performed during the time that Ben was there, but since we are data scientists, we are interested in deeper questions.
For example, some of these seasons were subpar—the Mets had more losses than wins. Did the team just get unlucky in those seasons?
Or did they actually play as badly as their record indicates?
In order to answer this question, we need a model for expected winning percentage.
It turns out that one of the most widely used contributions to the field of baseball analytics (courtesy of [Bill James](https://en.wikipedia.org/w/index.php?search=Bill%20James)) is exactly that.
This model translates the number of runs[9](#fn9) that a team scores and allows over the course of an entire season into an expectation for how many games they should have won.
The simplest version of this model is this:
\\\[
\\widehat{WPct} \= \\frac{1}{1 \+ \\left( \\frac{RA}{RS} \\right)^2} \\,,
\\]
where \\(RA\\) is the number of runs the team allows to be scored, \\(RS\\) is the number of runs that the team scores, and \\(\\widehat{WPct}\\) is the team’s expected winning percentage.
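As a quick back-of-the-envelope check of this formula, we can plug in a single season’s run totals by hand. The numbers used here are the 2004 Mets’ figures (684 runs scored and 731 runs allowed), which also appear in the wrangled output below.

```
# Hand-computing the expected winning percentage for the 2004 Mets
rs <- 684   # runs scored
ra <- 731   # runs allowed
1 / (1 + (ra / rs)^2)   # roughly 0.467, or about 75.6 expected wins in 162 games
```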
Luckily for us, the runs scored and allowed are present in the `Teams` table, so let’s grab them and save them in a new data frame.
```
mets_ben <- Teams %>%
select(yearID, teamID, W, L, R, RA) %>%
filter(teamID == "NYN" & yearID %in% 2004:2012)
mets_ben
```
```
yearID teamID W L R RA
1 2004 NYN 71 91 684 731
2 2005 NYN 83 79 722 648
3 2006 NYN 97 65 834 731
4 2007 NYN 88 74 804 750
5 2008 NYN 89 73 799 715
6 2009 NYN 70 92 671 757
7 2010 NYN 79 83 656 652
8 2011 NYN 77 85 718 742
9 2012 NYN 74 88 650 709
```
First, note that the runs\-scored variable is called `R` in the `Teams` table, but to stick with our notation we want to rename it `RS`.
```
mets_ben <- mets_ben %>%
rename(RS = R) # new name = old name
mets_ben
```
```
yearID teamID W L RS RA
1 2004 NYN 71 91 684 731
2 2005 NYN 83 79 722 648
3 2006 NYN 97 65 834 731
4 2007 NYN 88 74 804 750
5 2008 NYN 89 73 799 715
6 2009 NYN 70 92 671 757
7 2010 NYN 79 83 656 652
8 2011 NYN 77 85 718 742
9 2012 NYN 74 88 650 709
```
Next, we need to compute the team’s actual winning percentage in each of these seasons.
Thus, we need to add a new column to our data frame, and we do this with the `mutate()` command.
```
mets_ben <- mets_ben %>%
mutate(WPct = W / (W + L))
mets_ben
```
```
yearID teamID W L RS RA WPct
1 2004 NYN 71 91 684 731 0.438
2 2005 NYN 83 79 722 648 0.512
3 2006 NYN 97 65 834 731 0.599
4 2007 NYN 88 74 804 750 0.543
5 2008 NYN 89 73 799 715 0.549
6 2009 NYN 70 92 671 757 0.432
7 2010 NYN 79 83 656 652 0.488
8 2011 NYN 77 85 718 742 0.475
9 2012 NYN 74 88 650 709 0.457
```
We also need to compute the model estimates for winning percentage.
```
mets_ben <- mets_ben %>%
mutate(WPct_hat = 1 / (1 + (RA/RS)^2))
mets_ben
```
```
yearID teamID W L RS RA WPct WPct_hat
1 2004 NYN 71 91 684 731 0.438 0.467
2 2005 NYN 83 79 722 648 0.512 0.554
3 2006 NYN 97 65 834 731 0.599 0.566
4 2007 NYN 88 74 804 750 0.543 0.535
5 2008 NYN 89 73 799 715 0.549 0.555
6 2009 NYN 70 92 671 757 0.432 0.440
7 2010 NYN 79 83 656 652 0.488 0.503
8 2011 NYN 77 85 718 742 0.475 0.484
9 2012 NYN 74 88 650 709 0.457 0.457
```
The expected number of wins is then equal to the product of the expected winning percentage times the number of games.
```
mets_ben <- mets_ben %>%
mutate(W_hat = WPct_hat * (W + L))
mets_ben
```
```
yearID teamID W L RS RA WPct WPct_hat W_hat
1 2004 NYN 71 91 684 731 0.438 0.467 75.6
2 2005 NYN 83 79 722 648 0.512 0.554 89.7
3 2006 NYN 97 65 834 731 0.599 0.566 91.6
4 2007 NYN 88 74 804 750 0.543 0.535 86.6
5 2008 NYN 89 73 799 715 0.549 0.555 90.0
6 2009 NYN 70 92 671 757 0.432 0.440 71.3
7 2010 NYN 79 83 656 652 0.488 0.503 81.5
8 2011 NYN 77 85 718 742 0.475 0.484 78.3
9 2012 NYN 74 88 650 709 0.457 0.457 74.0
```
In this case, the Mets’ fortunes were better than expected in three of these seasons, and worse than expected in the other six.
```
filter(mets_ben, W >= W_hat)
```
```
yearID teamID W L RS RA WPct WPct_hat W_hat
1 2006 NYN 97 65 834 731 0.599 0.566 91.6
2 2007 NYN 88 74 804 750 0.543 0.535 86.6
3 2012 NYN 74 88 650 709 0.457 0.457 74.0
```
```
filter(mets_ben, W < W_hat)
```
```
yearID teamID W L RS RA WPct WPct_hat W_hat
1 2004 NYN 71 91 684 731 0.438 0.467 75.6
2 2005 NYN 83 79 722 648 0.512 0.554 89.7
3 2008 NYN 89 73 799 715 0.549 0.555 90.0
4 2009 NYN 70 92 671 757 0.432 0.440 71.3
5 2010 NYN 79 83 656 652 0.488 0.503 81.5
6 2011 NYN 77 85 718 742 0.475 0.484 78.3
```
Naturally, the Mets experienced ups and downs during Ben’s time with the team.
Which seasons were best?
To figure this out, we can simply sort the rows of the data frame.
```
arrange(mets_ben, desc(WPct))
```
```
yearID teamID W L RS RA WPct WPct_hat W_hat
1 2006 NYN 97 65 834 731 0.599 0.566 91.6
2 2008 NYN 89 73 799 715 0.549 0.555 90.0
3 2007 NYN 88 74 804 750 0.543 0.535 86.6
4 2005 NYN 83 79 722 648 0.512 0.554 89.7
5 2010 NYN 79 83 656 652 0.488 0.503 81.5
6 2011 NYN 77 85 718 742 0.475 0.484 78.3
7 2012 NYN 74 88 650 709 0.457 0.457 74.0
8 2004 NYN 71 91 684 731 0.438 0.467 75.6
9 2009 NYN 70 92 671 757 0.432 0.440 71.3
```
In 2006, the Mets had the best record in baseball during the regular season and nearly made the [*World Series*](https://en.wikipedia.org/w/index.php?search=World%20Series).
How do these seasons rank in terms of the team’s performance relative to our model?
```
mets_ben %>%
mutate(Diff = W - W_hat) %>%
arrange(desc(Diff))
```
```
yearID teamID W L RS RA WPct WPct_hat W_hat Diff
1 2006 NYN 97 65 834 731 0.599 0.566 91.6 5.3840
2 2007 NYN 88 74 804 750 0.543 0.535 86.6 1.3774
3 2012 NYN 74 88 650 709 0.457 0.457 74.0 0.0199
4 2008 NYN 89 73 799 715 0.549 0.555 90.0 -0.9605
5 2009 NYN 70 92 671 757 0.432 0.440 71.3 -1.2790
6 2011 NYN 77 85 718 742 0.475 0.484 78.3 -1.3377
7 2010 NYN 79 83 656 652 0.488 0.503 81.5 -2.4954
8 2004 NYN 71 91 684 731 0.438 0.467 75.6 -4.6250
9 2005 NYN 83 79 722 648 0.512 0.554 89.7 -6.7249
```
It appears that 2006 was the Mets’ most fortunate year—since they won five more games than our model predicts—but 2005 was the least fortunate—since they won almost seven games fewer than our model predicts.
This type of analysis helps us understand how the Mets performed in individual seasons, but we know that any randomness that occurs in individual years is likely to average out over time.
So while it is clear that the Mets performed well in some seasons and poorly in others, what can we say about their overall performance?
We can easily summarize a single variable with the `skim()` command from the **mdsr** package.
```
mets_ben %>%
skim(W)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 W 9 0 80.9 9.10 70 74 79 88 97
```
This tells us that the Mets won nearly 81 games on average during Ben’s tenure, which corresponds almost exactly to a 0\.500 winning percentage, since there are 162 games in a regular season.
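A one-line check of that arithmetic:

```
# 80.9 average wins in a 162-game season is just shy of a .500 winning percentage
80.9 / 162
```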
But we may be interested in aggregating more than one variable at a time.
To do this, we use `summarize()`.
```
mets_ben %>%
summarize(
num_years = n(),
total_W = sum(W),
total_L = sum(L),
total_WPct = sum(W) / sum(W + L),
sum_resid = sum(W - W_hat)
)
```
```
num_years total_W total_L total_WPct sum_resid
1 9 728 730 0.499 -10.6
```
In these nine years, the Mets had a combined record of 728 wins and 730 losses, for an overall winning percentage of .499\. Just one extra win would have made them exactly 0\.500!
(If we could pick which game, we would definitely pick [the final game of the 2007 season](https://www.baseball-reference.com/boxes/NYN/NYN200709300.shtml).
A win there would have resulted in a playoff berth.)
However, we’ve also learned that the team under\-performed relative to our model by a total of 10\.6 games over those nine seasons.
Usually, when we are summarizing a data frame like we did above, it is interesting to consider different groups.
In this case, we can discretize these years into three chunks: one for each of the three general managers under whom Ben worked. [Jim Duquette](https://en.wikipedia.org/w/index.php?search=Jim%20Duquette) was the Mets’ [*general manager*](https://en.wikipedia.org/w/index.php?search=general%20manager) in 2004, [Omar Minaya](https://en.wikipedia.org/w/index.php?search=Omar%20Minaya) from 2005 to 2010, and [Sandy Alderson](https://en.wikipedia.org/w/index.php?search=Sandy%20Alderson) from 2011 to 2012\.
We can define these eras using two nested `ifelse()` functions.
```
mets_ben <- mets_ben %>%
mutate(
gm = ifelse(
yearID == 2004,
"Duquette",
ifelse(
yearID >= 2011,
"Alderson",
"Minaya")
)
)
```
Another, more scalable approach to accomplishing this same task is to use a `case_when()` expression.
```
mets_ben <- mets_ben %>%
mutate(
gm = case_when(
yearID == 2004 ~ "Duquette",
yearID >= 2011 ~ "Alderson",
TRUE ~ "Minaya"
)
)
```
Don’t use nested `ifelse()` statements: `case_when()` is far simpler.
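To convince yourself that the two approaches agree, a quick comparison (a minimal sketch that re-computes the nested `ifelse()` labels as a temporary column) should confirm that every season receives the same label.

```
# Verify that the nested ifelse() and case_when() assignments of gm agree
mets_ben %>%
  mutate(
    gm_ifelse = ifelse(
      yearID == 2004,
      "Duquette",
      ifelse(yearID >= 2011, "Alderson", "Minaya")
    )
  ) %>%
  summarize(all_agree = all(gm == gm_ifelse))   # should report TRUE
```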
Next, we use the `gm` variable to define these groups with the `group_by()` operator.
The combination of summarizing data by groups can be very powerful.
Note that while the Mets were far more successful during Minaya’s regime (i.e., many more wins than losses), they did not meet expectations in any of the three periods.
```
mets_ben %>%
group_by(gm) %>%
summarize(
num_years = n(),
total_W = sum(W),
total_L = sum(L),
total_WPct = sum(W) / sum(W + L),
sum_resid = sum(W - W_hat)
) %>%
arrange(desc(sum_resid))
```
```
# A tibble: 3 × 6
gm num_years total_W total_L total_WPct sum_resid
<chr> <int> <int> <int> <dbl> <dbl>
1 Alderson 2 151 173 0.466 -1.32
2 Duquette 1 71 91 0.438 -4.63
3 Minaya 6 506 466 0.521 -4.70
```
The full power of the chaining operator is revealed below, where we do all the analysis at once, but retain the step\-by\-step logic.
```
Teams %>%
select(yearID, teamID, W, L, R, RA) %>%
filter(teamID == "NYN" & yearID %in% 2004:2012) %>%
rename(RS = R) %>%
mutate(
WPct = W / (W + L),
WPct_hat = 1 / (1 + (RA/RS)^2),
W_hat = WPct_hat * (W + L),
gm = case_when(
yearID == 2004 ~ "Duquette",
yearID >= 2011 ~ "Alderson",
TRUE ~ "Minaya"
)
) %>%
group_by(gm) %>%
summarize(
num_years = n(),
total_W = sum(W),
total_L = sum(L),
total_WPct = sum(W) / sum(W + L),
sum_resid = sum(W - W_hat)
) %>%
arrange(desc(sum_resid))
```
```
# A tibble: 3 × 6
gm num_years total_W total_L total_WPct sum_resid
<chr> <int> <int> <int> <dbl> <dbl>
1 Alderson 2 151 173 0.466 -1.32
2 Duquette 1 71 91 0.438 -4.63
3 Minaya 6 506 466 0.521 -4.70
```
Even more generally, we might be more interested in how the Mets performed relative to our model, in the context of all teams during that 9\-year period.
All we need to do is remove the `teamID` filter and group by franchise (`franchID`) instead.
```
Teams %>%
select(yearID, teamID, franchID, W, L, R, RA) %>%
filter(yearID %in% 2004:2012) %>%
rename(RS = R) %>%
mutate(
WPct = W / (W + L),
WPct_hat = 1 / (1 + (RA/RS)^2),
W_hat = WPct_hat * (W + L)
) %>%
group_by(franchID) %>%
summarize(
num_years = n(),
total_W = sum(W),
total_L = sum(L),
total_WPct = sum(W) / sum(W + L),
sum_resid = sum(W - W_hat)
) %>%
arrange(sum_resid) %>%
head(6)
```
```
# A tibble: 6 × 6
franchID num_years total_W total_L total_WPct sum_resid
<fct> <int> <int> <int> <dbl> <dbl>
1 TOR 9 717 740 0.492 -29.2
2 ATL 9 781 677 0.536 -24.0
3 COL 9 687 772 0.471 -22.7
4 CHC 9 706 750 0.485 -14.5
5 CLE 9 710 748 0.487 -13.9
6 NYM 9 728 730 0.499 -10.6
```
We can see now that only five other teams fared worse than the Mets,[10](#fn10) relative to our model, during this time period.
Perhaps they are cursed!
4\.3 Further resources
----------------------
[Hadley Wickham](https://en.wikipedia.org/w/index.php?search=Hadley%20Wickham) is an influential innovator in the field of statistical computing.
Along with his colleagues at **RStudio** and other organizations, he has made significant contributions to improve data wrangling in **R** through a suite of packages.
These packages are collectively known as the [*tidyverse*](https://en.wikipedia.org/w/index.php?search=tidyverse), and are now manageable through a single **tidyverse** (Hadley Wickham 2021g) package.
His papers and vignettes describing widely\-used packages such as **dplyr** (H. Wickham and Francois 2020\) and **tidyr** (Hadley Wickham 2020c) are highly recommended reading.
Finzer (2013\) writes of a “data habit of mind” that needs to be inculcated among data scientists.
The **RStudio** data wrangling [cheat sheet](https://rstudio.com/resources/cheatsheets) is a useful reference.
4\.4 Exercises
--------------
**Problem 1 (Easy)**: Here is a random subset of the `babynames` data frame in the `babynames` package:
```
Random_subset
```
```
# A tibble: 10 × 5
year sex name n prop
<dbl> <chr> <chr> <int> <dbl>
1 2003 M Bilal 146 0.0000695
2 1999 F Terria 23 0.0000118
3 2010 F Naziyah 45 0.0000230
4 1989 F Shawana 41 0.0000206
5 1989 F Jessi 210 0.000105
6 1928 M Tillman 43 0.0000377
7 1981 F Leslee 83 0.0000464
8 1981 F Sherise 27 0.0000151
9 1920 F Marquerite 26 0.0000209
10 1941 M Lorraine 24 0.0000191
```
For each of the following tables wrangled from `Random_subset`, figure out what `dplyr` wrangling statement will produce the result.
1. Hint: Both rows and variables are missing from the original
```
# A tibble: 4 × 4
year sex name n
<dbl> <chr> <chr> <int>
1 2010 F Naziyah 45
2 1989 F Shawana 41
3 1928 M Tillman 43
4 1981 F Leslee 83
```
2. Hint: the `nchar()` function is used in the statement.
```
# A tibble: 2 × 5
year sex name n prop
<dbl> <chr> <chr> <int> <dbl>
1 1999 F Terria 23 0.0000118
2 1981 F Leslee 83 0.0000464
```
3. Hint: Note the new column, which is constructed from `n` and `prop`.
```
# A tibble: 2 × 6
year sex name n prop total
<dbl> <chr> <chr> <int> <dbl> <dbl>
1 1989 F Shawana 41 0.0000206 1992225.
2 1989 F Jessi 210 0.000105 1991843.
```
4. Hint: All the years are still there, but there are only 8 rows as opposed to the original 10 rows.
```
# A tibble: 8 × 2
year total
<dbl> <int>
1 1920 26
2 1928 43
3 1941 24
4 1981 110
5 1989 251
6 1999 23
7 2003 146
8 2010 45
```
**Problem 2 (Easy)**: We’ll be working with the `babynames` data frame in the `babynames` package. To remind you what `babynames` looks like, here are a few rows.
```
# A tibble: 3 × 5
year sex name n prop
<dbl> <chr> <chr> <int> <dbl>
1 2004 M Arjun 250 0.000118
2 1894 F Fedora 5 0.0000212
3 1952 F Donalda 10 0.00000526
```
Say what’s wrong (if anything) with each of the following wrangling commands.
1. `babynames %>% select(n > 100)`
2. `babynames %>% select(- year)`
3. `babynames %>% mutate(name_length == nchar(name))`
4. `babynames %>% sex == M %>% select(-prop)`
5. `babynames %>% select(year, year, sex)`
6. `babynames %>% group_by(n) %>% summarize(ave = mean(n))`
7. `babynames %>% group_by(n > 100) %>% summarize(total = sum(n))`
**Problem 3 (Easy)**: Consider the following pipeline:
```
library(tidyverse)
mtcars %>%
group_by(cyl) %>%
summarize(avg_mpg = mean(mpg)) %>%
filter(am == 1)
```
What is the problem with this pipeline?
**Problem 4 (Easy)**: Define two new variables in the `Teams` data frame in the `Lahman` package.
1. batting average (\\(BA\\)). Batting average is the ratio of hits (`H`) to at\-bats (`AB`).
2. slugging percentage (\\(SLG\\)). Slugging percentage is total bases divided by at\-bats. To compute total bases, you get 1 for a single, 2 for a double, 3 for a triple, and 4 for a home run.
3. Plot out the \\(SLG\\) versus `yearID`, showing the individual teams and a smooth curve.
4. Same as (c), but plot \\(BA\\) versus year.
**Problem 5 (Easy)**: Consider the following pipeline:
```
mtcars %>%
group_by(cyl) %>%
summarize(
N = n(),
avg_mpg = mean(mpg)
)
```
```
# A tibble: 3 × 3
cyl N avg_mpg
<dbl> <int> <dbl>
1 4 11 26.7
2 6 7 19.7
3 8 14 15.1
```
What is the real\-world meaning of the variable `N` in the result set?
**Problem 6 (Easy)**: Each of these tasks can be performed using a single data verb. For each task, say which verb it is:
1. Find the average of one of the variables.
2. Add a new column that is the ratio between two variables.
3. Sort the cases in descending order of a variable.
4. Create a new data table that includes only those cases that meet a criterion.
5. From a data table with three categorical variables A, B, and C, and a quantitative variable X, produce a data frame that has the same cases but only the variables A and X.
**Problem 7 (Medium)**: Using the `Teams` data frame in the `Lahman` package, display the top\-5 teams ranked in terms of slugging percentage (\\(SLG\\)) in Major League Baseball history. Repeat this using teams since 1969\. Slugging percentage is total bases divided by at\-bats. To compute total bases, you get 1 for a single, 2 for a double, 3 for a triple, and 4 for a home run.
**Problem 8 (Medium)**: Using the `Teams` data frame in the `Lahman` package:
1. Plot `SLG` versus `yearID` since 1954 conditioned by league (American vs. National, see `lgID`). Slugging percentage is total bases divided by at\-bats. To compute total bases, you get 1 for a single, 2 for a double, 3 for a triple, and 4 for a home run.
2. Is slugging percentage typically higher in the American League (AL) or the National League (NL)? Can you think of why this might be the case?
**Problem 9 (Medium)**: Use the `nycflights13` package and the `flights` data frame to answer the following questions: What month had the highest proportion of cancelled flights? What month had the lowest? Interpret any seasonal patterns.
**Problem 10 (Medium)**: Using the `Teams` data frame in the `Lahman` package:
1. Create a factor called `election` that divides the `yearID` into 4\-year blocks that correspond to U.S. presidential terms. The first presidential term started in 1788\. They each last 4 years and are still on the schedule set in 1788\.
2. During which term have the most home runs been hit?
**Problem 11 (Medium)**: The `Violations` data set in the `mdsr` package contains information regarding the outcome of health inspections of restaurants in New York City. Use these data to calculate the median violation score by zip code for zip codes in Manhattan with 50 or more inspections. What pattern do you see between the number of inspections and the median score?
**Problem 12 (Medium)**: The `nycflights13` package includes a table (`weather`) that describes the weather during 2013\. Use that table to answer the following questions:
1. What is the distribution of temperature in July, 2013? Identify any important outliers in terms of the `wind_speed` variable.
2. What is the relationship between `dewp` and `humid`?
3. What is the relationship between `precip` and `visib`?
**Problem 13 (Medium)**: The Major League Baseball Angels have at times been called the California Angels (*CAL*), the Anaheim Angels (*ANA*), and the Los Angeles Angels of Anaheim (*LAA*). Using the `Teams` data frame in the `Lahman` package:
1. Find the 10 most successful seasons in Angels history, defining “successful” as the fraction of regular\-season games won in the year. In the table you create, include the `yearID`, `teamID`, `lgID`, `W`, `L`, and `WSWin`. See the documentation for `Teams` for the definition of these variables.
2. Have the Angels ever won the World Series?
**Problem 14 (Medium)**: Use the `nycflights13` package and the `flights` data frame to answer the following question: What plane (specified by the `tailnum` variable) traveled the most times from New York City airports in 2013? Plot the number of trips per week over the year.
**Problem 15 (Hard)**: Replicate the wrangling to create the `house_elections` table in the `fec` package from the original Excel source file.
4\.5 Supplementary exercises
----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-dataI.html\#dataI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-dataI.html#dataI-online-exercises)
**Problem 1 (Easy)**: Which `dplyr` operation is depicted below?
**Problem 2 (Easy)**: Which `dplyr` operation is depicted below?
**Problem 3 (Easy)**: Which `dplyr` operation is depicted below?
**Problem 4 (Easy)**: Which `dplyr` operation is depicted below?
**Problem 5 (Easy)**: Which `dplyr` operation is depicted below?
**Problem 6 (Easy)**: Which `dplyr` operation is depicted below?
### 4\.1\.1 `select()` and `filter()`
The two simplest of the five verbs are `filter()` and `select()`, which return a subset of the rows or columns of a data frame, respectively.
Generally, if we have a data frame that consists of \\(n\\) rows and \\(p\\) columns, Figures [4\.1](ch-dataI.html#fig:filter) and [4\.2](ch-dataI.html#fig:select) illustrate the effect of filtering this data frame based on a condition on one of the columns, and selecting a subset of the columns, respectively.
Figure 4\.1: The `filter()` function. At left, a data frame that contains matching entries in a certain column for only a subset of the rows. At right, the resulting data frame after filtering.
Figure 4\.2: The `select()` function. At left, a data frame, from which we retrieve only a few of the columns. At right, the resulting data frame after selecting those columns.
We will demonstrate the use of these functions on the `presidential` data frame (from the **ggplot2** package), which contains \\(p\=4\\) variables about the terms of \\(n\=11\\) recent U.S. presidents.
```
library(tidyverse)
library(mdsr)
presidential
```
```
# A tibble: 11 × 4
name start end party
<chr> <date> <date> <chr>
1 Eisenhower 1953-01-20 1961-01-20 Republican
2 Kennedy 1961-01-20 1963-11-22 Democratic
3 Johnson 1963-11-22 1969-01-20 Democratic
4 Nixon 1969-01-20 1974-08-09 Republican
5 Ford 1974-08-09 1977-01-20 Republican
6 Carter 1977-01-20 1981-01-20 Democratic
7 Reagan 1981-01-20 1989-01-20 Republican
8 Bush 1989-01-20 1993-01-20 Republican
9 Clinton 1993-01-20 2001-01-20 Democratic
10 Bush 2001-01-20 2009-01-20 Republican
11 Obama 2009-01-20 2017-01-20 Democratic
```
To retrieve only the names and party affiliations of these presidents, we would use `select()`.
The first [*argument*](https://en.wikipedia.org/w/index.php?search=argument) to the `select()` function is the data frame, followed by an arbitrarily long list of column names, separated by commas.
```
select(presidential, name, party)
```
```
# A tibble: 11 × 2
name party
<chr> <chr>
1 Eisenhower Republican
2 Kennedy Democratic
3 Johnson Democratic
4 Nixon Republican
5 Ford Republican
6 Carter Democratic
7 Reagan Republican
8 Bush Republican
9 Clinton Democratic
10 Bush Republican
11 Obama Democratic
```
Similarly, the first argument to `filter()` is a data frame, and subsequent arguments are logical conditions that are evaluated on any involved columns.
If we want to retrieve only those rows that pertain to Republican presidents, we need to specify that the value of the `party` variable is equal to `Republican`.
```
filter(presidential, party == "Republican")
```
```
# A tibble: 6 × 4
name start end party
<chr> <date> <date> <chr>
1 Eisenhower 1953-01-20 1961-01-20 Republican
2 Nixon 1969-01-20 1974-08-09 Republican
3 Ford 1974-08-09 1977-01-20 Republican
4 Reagan 1981-01-20 1989-01-20 Republican
5 Bush 1989-01-20 1993-01-20 Republican
6 Bush 2001-01-20 2009-01-20 Republican
```
Note that the `==` is a *test for equality*.
If we were to use only a single equal sign here, we would be asserting that the value of `party` was `Republican`.
This would result in an error.
The quotation marks around `Republican` are necessary here, since `Republican` is a literal value, and not a variable name.
Combining the `filter()` and `select()` commands enables one to drill down to very specific pieces of information.
For example, we can find which Democratic presidents served since [*Watergate*](https://en.wikipedia.org/w/index.php?search=Watergate).
```
select(
filter(presidential, lubridate::year(start) > 1973 & party == "Democratic"),
name
)
```
```
# A tibble: 3 × 1
name
<chr>
1 Carter
2 Clinton
3 Obama
```
In the syntax demonstrated above, the `filter()` operation is [*nested*](https://en.wikipedia.org/w/index.php?search=nested) inside the `select()` operation.
As noted above, each of the five verbs takes and returns a data frame, which makes this type of nesting possible.
Shortly, we will see how these verbs can be chained together to make rather long expressions that can become very difficult to read.
Instead, we recommend the use of the `%>%` (pipe) operator.
Pipe\-forwarding is an alternative to nesting that yields code that can be easily read from top to bottom.
With the pipe, we can write the same expression as above in this more readable syntax.
```
presidential %>%
filter(lubridate::year(start) > 1973 & party == "Democratic") %>%
select(name)
```
```
# A tibble: 3 × 1
name
<chr>
1 Carter
2 Clinton
3 Obama
```
This expression is called a [*pipeline*](https://en.wikipedia.org/w/index.php?search=pipeline).
Notice how the expression
```
dataframe %>% filter(condition)
```
is equivalent to `filter(dataframe, condition)`. In later examples, we will see how this operator can make code more readable and efficient, particularly for complex operations on large data sets.
### 4\.1\.2 `mutate()` and `rename()`
Frequently, in the process of conducting our analysis, we will create, re\-define, and rename some of our variables.
The functions `mutate()` and `rename()` provide these capabilities.
A graphical illustration of the `mutate()` operation is shown in Figure [4\.3](ch-dataI.html#fig:mutate).
Figure 4\.3: The `mutate()` function. At right, the resulting data frame after adding a new column.
While we have the raw data on when each of these presidents took and relinquished office, we don’t actually have a numeric variable giving the length of each president’s term.
Of course, we can derive this information from the dates given, and add the result as a new column to our data frame.
This date arithmetic is made easier through the use of the **lubridate** package, which we use to compute the number of years (`dyears()`) that elapsed since during the `interval()` from the `start` until the `end` of each president’s term.
In this situation, it is generally considered good style to create a new object rather than [*clobbering*](https://en.wikipedia.org/w/index.php?search=clobbering) the one that comes from an external source.
To preserve the existing `presidential` data frame, we save the result of `mutate()` as a new object called `my_presidents`.
```
library(lubridate)
my_presidents <- presidential %>%
mutate(term.length = interval(start, end) / dyears(1))
my_presidents
```
```
# A tibble: 11 × 5
name start end party term.length
<chr> <date> <date> <chr> <dbl>
1 Eisenhower 1953-01-20 1961-01-20 Republican 8
2 Kennedy 1961-01-20 1963-11-22 Democratic 2.84
3 Johnson 1963-11-22 1969-01-20 Democratic 5.16
4 Nixon 1969-01-20 1974-08-09 Republican 5.55
5 Ford 1974-08-09 1977-01-20 Republican 2.45
6 Carter 1977-01-20 1981-01-20 Democratic 4
7 Reagan 1981-01-20 1989-01-20 Republican 8
8 Bush 1989-01-20 1993-01-20 Republican 4
9 Clinton 1993-01-20 2001-01-20 Democratic 8
10 Bush 2001-01-20 2009-01-20 Republican 8
11 Obama 2009-01-20 2017-01-20 Democratic 8
```
The `mutate()` function can also be used to modify the data in an existing column.
Suppose that we wanted to add to our data frame a variable containing the year in which each president was elected.
Our first (naïve) attempt might assume that every president was elected in the year before he took office.
Note that `mutate()` returns a data frame, so if we want to modify our existing data frame, we need to overwrite it with the results.
```
my_presidents <- my_presidents %>%
mutate(elected = year(start) - 1)
my_presidents
```
```
# A tibble: 11 × 6
name start end party term.length elected
<chr> <date> <date> <chr> <dbl> <dbl>
1 Eisenhower 1953-01-20 1961-01-20 Republican 8 1952
2 Kennedy 1961-01-20 1963-11-22 Democratic 2.84 1960
3 Johnson 1963-11-22 1969-01-20 Democratic 5.16 1962
4 Nixon 1969-01-20 1974-08-09 Republican 5.55 1968
5 Ford 1974-08-09 1977-01-20 Republican 2.45 1973
6 Carter 1977-01-20 1981-01-20 Democratic 4 1976
7 Reagan 1981-01-20 1989-01-20 Republican 8 1980
8 Bush 1989-01-20 1993-01-20 Republican 4 1988
9 Clinton 1993-01-20 2001-01-20 Democratic 8 1992
10 Bush 2001-01-20 2009-01-20 Republican 8 2000
11 Obama 2009-01-20 2017-01-20 Democratic 8 2008
```
Some entries in this data set are wrong, because presidential elections are only held every four years.
[Lyndon Johnson](https://en.wikipedia.org/w/index.php?search=Lyndon%20Johnson) assumed the office after President [John Kennedy](https://en.wikipedia.org/w/index.php?search=John%20Kennedy) was assassinated in 1963, and [Gerald Ford](https://en.wikipedia.org/w/index.php?search=Gerald%20Ford) took over after President [Richard Nixon](https://en.wikipedia.org/w/index.php?search=Richard%20Nixon) resigned in 1974\.
Thus, there were no presidential elections in 1962 or 1973, as suggested in our data frame.
We should overwrite these values with `NA`’s—which is how **R** denotes missing values.
We can use the `ifelse()` function to do this.
Here, if the value of `elected` is either 1962 or 1973, we overwrite that value with `NA`.[6](#fn6)
Otherwise, we overwrite it with the same value that it currently has.
In this case, instead of checking to see whether the value of `elected` equals `1962` or `1973`, for brevity we can use the `%in%` operator to check to see whether the value of `elected` belongs to the vector consisting of `1962` and `1973`.
```
my_presidents <- my_presidents %>%
mutate(elected = ifelse(elected %in% c(1962, 1973), NA, elected))
my_presidents
```
```
# A tibble: 11 × 6
name start end party term.length elected
<chr> <date> <date> <chr> <dbl> <dbl>
1 Eisenhower 1953-01-20 1961-01-20 Republican 8 1952
2 Kennedy 1961-01-20 1963-11-22 Democratic 2.84 1960
3 Johnson 1963-11-22 1969-01-20 Democratic 5.16 NA
4 Nixon 1969-01-20 1974-08-09 Republican 5.55 1968
5 Ford 1974-08-09 1977-01-20 Republican 2.45 NA
6 Carter 1977-01-20 1981-01-20 Democratic 4 1976
7 Reagan 1981-01-20 1989-01-20 Republican 8 1980
8 Bush 1989-01-20 1993-01-20 Republican 4 1988
9 Clinton 1993-01-20 2001-01-20 Democratic 8 1992
10 Bush 2001-01-20 2009-01-20 Republican 8 2000
11 Obama 2009-01-20 2017-01-20 Democratic 8 2008
```
Finally, it is considered bad practice to use periods in the name of functions, data frames, and variables in **R**. Ill\-advised periods could conflict with **R**’s use of [*generic functions*](https://en.wikipedia.org/w/index.php?search=generic%20functions) (i.e., **R**’s mechanism for [method overloading](http://en.wikipedia.org/wiki/Function_overloading)).
Thus, we should change the name of the `term.length` column that we created earlier.
We can achieve this using the `rename()` function.
In this book, we will use [*snake\_case*](https://en.wikipedia.org/w/index.php?search=snake_case) for function and variable names.
Don’t use periods in the names of functions, data frames, or variables, as this can be confused with the object\-oriented programming model.
```
my_presidents <- my_presidents %>%
rename(term_length = term.length)
my_presidents
```
```
# A tibble: 11 × 6
name start end party term_length elected
<chr> <date> <date> <chr> <dbl> <dbl>
1 Eisenhower 1953-01-20 1961-01-20 Republican 8 1952
2 Kennedy 1961-01-20 1963-11-22 Democratic 2.84 1960
3 Johnson 1963-11-22 1969-01-20 Democratic 5.16 NA
4 Nixon 1969-01-20 1974-08-09 Republican 5.55 1968
5 Ford 1974-08-09 1977-01-20 Republican 2.45 NA
6 Carter 1977-01-20 1981-01-20 Democratic 4 1976
7 Reagan 1981-01-20 1989-01-20 Republican 8 1980
8 Bush 1989-01-20 1993-01-20 Republican 4 1988
9 Clinton 1993-01-20 2001-01-20 Democratic 8 1992
10 Bush 2001-01-20 2009-01-20 Republican 8 2000
11 Obama 2009-01-20 2017-01-20 Democratic 8 2008
```
### 4\.1\.3 `arrange()`
The function `sort()` will sort a vector but not a data frame. The function that will sort a data frame is called `arrange()`, and its behavior is illustrated in Figure [4\.4](ch-dataI.html#fig:arrange).
Figure 4\.4: The `arrange()` function. At left, a data frame with an ordinal variable. At right, the resulting data frame after sorting the rows in descending order of that variable.
In order to use `arrange()` on a data frame, you have to specify the data frame, and the column by which you want it to be sorted.
You also have to specify the direction in which you want it to be sorted. Specifying multiple sort conditions will help break ties.
To sort our `presidential` data frame by the length of each president’s term, we specify that we want the column `term_length` in descending order.
```
my_presidents %>%
arrange(desc(term_length))
```
```
# A tibble: 11 × 6
name start end party term_length elected
<chr> <date> <date> <chr> <dbl> <dbl>
1 Eisenhower 1953-01-20 1961-01-20 Republican 8 1952
2 Reagan 1981-01-20 1989-01-20 Republican 8 1980
3 Clinton 1993-01-20 2001-01-20 Democratic 8 1992
4 Bush 2001-01-20 2009-01-20 Republican 8 2000
5 Obama 2009-01-20 2017-01-20 Democratic 8 2008
6 Nixon 1969-01-20 1974-08-09 Republican 5.55 1968
7 Johnson 1963-11-22 1969-01-20 Democratic 5.16 NA
8 Carter 1977-01-20 1981-01-20 Democratic 4 1976
9 Bush 1989-01-20 1993-01-20 Republican 4 1988
10 Kennedy 1961-01-20 1963-11-22 Democratic 2.84 1960
11 Ford 1974-08-09 1977-01-20 Republican 2.45 NA
```
A number of presidents completed either one or two full terms, and thus have the exact same term length (4 or 8 years, respectively).
To break these ties, we can further sort by `party` and `elected`.
```
my_presidents %>%
arrange(desc(term_length), party, elected)
```
```
# A tibble: 11 × 6
name start end party term_length elected
<chr> <date> <date> <chr> <dbl> <dbl>
1 Clinton 1993-01-20 2001-01-20 Democratic 8 1992
2 Obama 2009-01-20 2017-01-20 Democratic 8 2008
3 Eisenhower 1953-01-20 1961-01-20 Republican 8 1952
4 Reagan 1981-01-20 1989-01-20 Republican 8 1980
5 Bush 2001-01-20 2009-01-20 Republican 8 2000
6 Nixon 1969-01-20 1974-08-09 Republican 5.55 1968
7 Johnson 1963-11-22 1969-01-20 Democratic 5.16 NA
8 Carter 1977-01-20 1981-01-20 Democratic 4 1976
9 Bush 1989-01-20 1993-01-20 Republican 4 1988
10 Kennedy 1961-01-20 1963-11-22 Democratic 2.84 1960
11 Ford 1974-08-09 1977-01-20 Republican 2.45 NA
```
Note that the default sort order is [*ascending order*](https://en.wikipedia.org/w/index.php?search=ascending%20order), so we do not need to specify an order if that is what we want.
### 4\.1\.4 `summarize()` with `group_by()`
Our last of the five verbs for single\-table analysis is `summarize()`, which is nearly always used in conjunction with `group_by()`.
The previous four verbs provided us with means to manipulate a data frame in powerful and flexible ways.
But the extent of the analysis we can perform with these four verbs alone is limited.
On the other hand, `summarize()` with `group_by()` enables us to make comparisons.
Figure 4\.5: The `summarize()` function. At left, a data frame. At right, the resulting data frame after aggregating four of the columns.
When used alone, `summarize()` collapses a data frame into a single row[7](#fn7).
This is illustrated in Figure [4\.5](ch-dataI.html#fig:summarize).
Critically, we have to specify *how* we want to reduce an entire column of data into a single value.
The method of aggregation that we specify controls what will appear in the output.
```
my_presidents %>%
summarize(
N = n(),
first_year = min(year(start)),
last_year = max(year(end)),
num_dems = sum(party == "Democratic"),
years = sum(term_length),
avg_term_length = mean(term_length)
)
```
```
# A tibble: 1 × 6
N first_year last_year num_dems years avg_term_length
<int> <dbl> <dbl> <int> <dbl> <dbl>
1 11 1953 2017 5 64 5.82
```
The first argument to `summarize()` is a data frame, followed by a list of variables that will appear in the output.
Note that every variable in the output is defined by operations performed on *vectors*—not on individual values.
This is essential, since if the specification of an output variable is not an operation on a vector, there is no way for **R** to know how to collapse each column.
In this example, the function `n()` simply counts the number of rows.
This is often useful information.
To help ensure that data aggregation is being done correctly, use `n()` every time you use `summarize()`.
The next two variables give the span of years covered by these presidencies: the first year in which one of these presidents assumed office is the smallest year in the `start` column, and the most recent year is the largest year in the `end` column.
The variable `num_dems` simply counts the number of rows in which the value of the `party` variable was `Democratic`.
Finally, the last two variables compute the sum and average of the `term_length` variable.
We see that 5 of the 11 presidents who served from 1953 to 2017 were Democrats, and the average term length over these 64 years was about 5\.8 years.
This raises the question of whether Democratic or Republican presidents served a longer average term during this time period.
To figure this out, we can just execute `summarize()` again, but this time, instead of the first argument being the data frame `my_presidents`, we will specify that the rows of the `my_presidents` data frame should be grouped by the values of the `party` variable.
In this manner, the same computations as above will be carried out for each party separately.
```
my_presidents %>%
group_by(party) %>%
summarize(
N = n(),
first_year = min(year(start)),
last_year = max(year(end)),
num_dems = sum(party == "Democratic"),
years = sum(term_length),
avg_term_length = mean(term_length)
)
```
```
# A tibble: 2 × 7
party N first_year last_year num_dems years avg_term_length
<chr> <int> <dbl> <dbl> <int> <dbl> <dbl>
1 Democratic 5 1961 2017 5 28 5.6
2 Republican 6 1953 2009 0 36 6
```
This provides us with the valuable information that the six Republican presidents served an average of 6 years in office, while the five Democratic presidents served an average of only 5\.6\. As with all of the **dplyr** verbs, the final output is a data frame.
In this chapter, we are using the **dplyr** package. The most common way to extract data from data tables is with SQL (structured query language). We’ll introduce SQL in Chapter [15](ch-sql.html#ch:sql). The **dplyr** package provides an interface that fits more smoothly into an overall data analysis workflow and is, in our opinion, easier to learn. Once you understand data wrangling with **dplyr**, it’s straightforward to learn SQL if needed. **dplyr** can also work as an interface to many systems that use SQL internally.
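As a small illustration of that last point, here is a sketch (assuming the **DBI**, **RSQLite**, and **dbplyr** packages are installed) that copies `my_presidents` into an in-memory database and asks **dplyr** to show the SQL it would generate:

```
library(DBI)
# Copy my_presidents into a temporary in-memory SQLite database
con <- dbConnect(RSQLite::SQLite(), ":memory:")
copy_to(con, my_presidents, name = "presidents")
# The same dplyr verbs now run against the database table;
# show_query() reveals the SQL that dplyr writes on our behalf
tbl(con, "presidents") %>%
  filter(party == "Democratic") %>%
  select(name, term_length) %>%
  show_query()
dbDisconnect(con)
```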
4\.2 Extended example: Ben’s time with the Mets
-----------------------------------------------
In this extended example, we will continue to explore [Sean Lahman](https://en.wikipedia.org/w/index.php?search=Sean%20Lahman)’s historical baseball database, which contains complete seasonal records for all players on all [*Major League Baseball*](https://en.wikipedia.org/w/index.php?search=Major%20League%20Baseball) (MLB) teams going back to 1871\.
These data are made available in **R** via the **Lahman** package (Friendly et al. 2021\).
Here again, while domain knowledge may be helpful, it is not necessary to follow the example.
To flesh out your understanding, try reading the [Wikipedia entry on Major League Baseball](https://en.wikipedia.org/wiki/Major_League_Baseball).
```
library(Lahman)
dim(Teams)
```
```
[1] 2955 48
```
The `Teams` table contains the seasonal results of every major league team in every season since 1871\.
There are 2955 rows and 48 columns in this table, which is far too much to show here, and would make for a quite unwieldy spreadsheet.
Of course, we can take a peek at what this table looks like by printing the first few rows of the table to the screen with the `head()` command, but we won’t print that on the page of this book.
[Ben Baumer](https://en.wikipedia.org/w/index.php?search=Ben%20Baumer) worked for the [New York Mets](https://en.wikipedia.org/w/index.php?search=New%20York%20Mets) from 2004 to 2012\.
How did the team do during those years?
We can use `filter()` and `select()` to quickly identify only those pieces of information that we care about.
```
mets <- Teams %>%
filter(teamID == "NYN")
my_mets <- mets %>%
filter(yearID %in% 2004:2012)
my_mets %>%
select(yearID, teamID, W, L)
```
```
yearID teamID W L
1 2004 NYN 71 91
2 2005 NYN 83 79
3 2006 NYN 97 65
4 2007 NYN 88 74
5 2008 NYN 89 73
6 2009 NYN 70 92
7 2010 NYN 79 83
8 2011 NYN 77 85
9 2012 NYN 74 88
```
Notice that we have broken this down into three steps. First, we filter the rows of the `Teams` data frame into only those teams that correspond to the New York Mets.[8](#fn8)
There are 59 of those, since the Mets joined the [*National League*](https://en.wikipedia.org/w/index.php?search=National%20League) in 1962\.
```
nrow(mets)
```
```
[1] 59
```
Next, we filtered these data so as to include only those seasons in which Ben worked for the team—those with `yearID` between 2004 and 2012\.
Finally, we printed to the screen only those columns that were relevant to our question: the year, the team’s ID, and the number of wins and losses that the team had.
While this process is logical, the code can get unruly, since two ancillary data frames (`mets` and `my_mets`) were created during the process.
It may be the case that we’d like to use these data frames later in the analysis.
But if not, they are just cluttering our workspace, and eating up memory.
A more streamlined way to achieve the same result would be to *nest* these commands together.
```
select(filter(Teams, teamID == "NYN" & yearID %in% 2004:2012),
yearID, teamID, W, L)
```
```
yearID teamID W L
1 2004 NYN 71 91
2 2005 NYN 83 79
3 2006 NYN 97 65
4 2007 NYN 88 74
5 2008 NYN 89 73
6 2009 NYN 70 92
7 2010 NYN 79 83
8 2011 NYN 77 85
9 2012 NYN 74 88
```
This way, no additional data frames were created.
However, it is easy to see that as we nest more and more of these operations together, this code could become difficult to read.
To maintain readability, we instead chain these operations, rather than nest them (and get the same exact results).
```
Teams %>%
filter(teamID == "NYN" & yearID %in% 2004:2012) %>%
select(yearID, teamID, W, L)
```
This [*piping*](https://en.wikipedia.org/w/index.php?search=piping) syntax
(introduced in Section [4\.1\.1](ch-dataI.html#sec:pipe)) is
provided by the **dplyr** package.
It retains the step\-by\-step logic of our original code, while being easily readable, and efficient with respect to memory and the creation of temporary data frames.
In fact, there are also performance enhancements under the hood that make this the most efficient way to do these kinds of computations.
For these reasons we will use this syntax whenever possible throughout the book.
Note that we only have to type `Teams` once—it is implied by the pipe operator (`%>%`) that the subsequent command takes the previous data frame as its first argument. Thus, `df %>% f(y)` is equivalent to `f(df, y)`.
We’ve answered the simple question of how the Mets performed during the time that Ben was there, but since we are data scientists, we are interested in deeper questions.
For example, some of these seasons were subpar—the Mets had more losses than wins. Did the team just get unlucky in those seasons?
Or did they actually play as badly as their record indicates?
In order to answer this question, we need a model for expected winning percentage.
It turns out that one of the most widely used contributions to the field of baseball analytics (courtesy of [Bill James](https://en.wikipedia.org/w/index.php?search=Bill%20James)) is exactly that.
This model translates the number of runs[9](#fn9) that a team scores and allows over the course of an entire season into an expectation for how many games they should have won.
The simplest version of this model is this:
\\\[
\\widehat{WPct} \= \\frac{1}{1 \+ \\left( \\frac{RA}{RS} \\right)^2} \\,,
\\]
where \\(RA\\) is the number of runs the team allows to be scored, \\(RS\\) is the number of runs that the team scores, and \\(\\widehat{WPct}\\) is the team’s expected winning percentage.
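As a quick sanity check, consider the 2004 Mets, who (as we will see below) scored 684 runs and allowed 731; the formula predicts a winning percentage of about 0.467:

```
# Pythagorean expectation for the 2004 Mets: RS = 684, RA = 731
round(1 / (1 + (731 / 684)^2), 3)
```

```
[1] 0.467
```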
Luckily for us, the runs scored and allowed are present in the `Teams` table, so let’s grab them and save them in a new data frame.
```
mets_ben <- Teams %>%
select(yearID, teamID, W, L, R, RA) %>%
filter(teamID == "NYN" & yearID %in% 2004:2012)
mets_ben
```
```
yearID teamID W L R RA
1 2004 NYN 71 91 684 731
2 2005 NYN 83 79 722 648
3 2006 NYN 97 65 834 731
4 2007 NYN 88 74 804 750
5 2008 NYN 89 73 799 715
6 2009 NYN 70 92 671 757
7 2010 NYN 79 83 656 652
8 2011 NYN 77 85 718 742
9 2012 NYN 74 88 650 709
```
First, note that the runs\-scored variable is called `R` in the `Teams` table, but to stick with our notation we want to rename it `RS`.
```
mets_ben <- mets_ben %>%
rename(RS = R) # new name = old name
mets_ben
```
```
yearID teamID W L RS RA
1 2004 NYN 71 91 684 731
2 2005 NYN 83 79 722 648
3 2006 NYN 97 65 834 731
4 2007 NYN 88 74 804 750
5 2008 NYN 89 73 799 715
6 2009 NYN 70 92 671 757
7 2010 NYN 79 83 656 652
8 2011 NYN 77 85 718 742
9 2012 NYN 74 88 650 709
```
Next, we need to compute the team’s actual winning percentage in each of these seasons.
Thus, we need to add a new column to our data frame, and we do this with the `mutate()` command.
```
mets_ben <- mets_ben %>%
mutate(WPct = W / (W + L))
mets_ben
```
```
yearID teamID W L RS RA WPct
1 2004 NYN 71 91 684 731 0.438
2 2005 NYN 83 79 722 648 0.512
3 2006 NYN 97 65 834 731 0.599
4 2007 NYN 88 74 804 750 0.543
5 2008 NYN 89 73 799 715 0.549
6 2009 NYN 70 92 671 757 0.432
7 2010 NYN 79 83 656 652 0.488
8 2011 NYN 77 85 718 742 0.475
9 2012 NYN 74 88 650 709 0.457
```
We also need to compute the model estimates for winning percentage.
```
mets_ben <- mets_ben %>%
mutate(WPct_hat = 1 / (1 + (RA/RS)^2))
mets_ben
```
```
yearID teamID W L RS RA WPct WPct_hat
1 2004 NYN 71 91 684 731 0.438 0.467
2 2005 NYN 83 79 722 648 0.512 0.554
3 2006 NYN 97 65 834 731 0.599 0.566
4 2007 NYN 88 74 804 750 0.543 0.535
5 2008 NYN 89 73 799 715 0.549 0.555
6 2009 NYN 70 92 671 757 0.432 0.440
7 2010 NYN 79 83 656 652 0.488 0.503
8 2011 NYN 77 85 718 742 0.475 0.484
9 2012 NYN 74 88 650 709 0.457 0.457
```
The expected number of wins is then equal to the product of the expected winning percentage times the number of games.
```
mets_ben <- mets_ben %>%
mutate(W_hat = WPct_hat * (W + L))
mets_ben
```
```
yearID teamID W L RS RA WPct WPct_hat W_hat
1 2004 NYN 71 91 684 731 0.438 0.467 75.6
2 2005 NYN 83 79 722 648 0.512 0.554 89.7
3 2006 NYN 97 65 834 731 0.599 0.566 91.6
4 2007 NYN 88 74 804 750 0.543 0.535 86.6
5 2008 NYN 89 73 799 715 0.549 0.555 90.0
6 2009 NYN 70 92 671 757 0.432 0.440 71.3
7 2010 NYN 79 83 656 652 0.488 0.503 81.5
8 2011 NYN 77 85 718 742 0.475 0.484 78.3
9 2012 NYN 74 88 650 709 0.457 0.457 74.0
```
In this case, the Mets’ fortunes were better than expected in three of these seasons, and worse than expected in the other six.
```
filter(mets_ben, W >= W_hat)
```
```
yearID teamID W L RS RA WPct WPct_hat W_hat
1 2006 NYN 97 65 834 731 0.599 0.566 91.6
2 2007 NYN 88 74 804 750 0.543 0.535 86.6
3 2012 NYN 74 88 650 709 0.457 0.457 74.0
```
```
filter(mets_ben, W < W_hat)
```
```
yearID teamID W L RS RA WPct WPct_hat W_hat
1 2004 NYN 71 91 684 731 0.438 0.467 75.6
2 2005 NYN 83 79 722 648 0.512 0.554 89.7
3 2008 NYN 89 73 799 715 0.549 0.555 90.0
4 2009 NYN 70 92 671 757 0.432 0.440 71.3
5 2010 NYN 79 83 656 652 0.488 0.503 81.5
6 2011 NYN 77 85 718 742 0.475 0.484 78.3
```
Naturally, the Mets experienced ups and downs during Ben’s time with the team.
Which seasons were best?
To figure this out, we can simply sort the rows of the data frame.
```
arrange(mets_ben, desc(WPct))
```
```
yearID teamID W L RS RA WPct WPct_hat W_hat
1 2006 NYN 97 65 834 731 0.599 0.566 91.6
2 2008 NYN 89 73 799 715 0.549 0.555 90.0
3 2007 NYN 88 74 804 750 0.543 0.535 86.6
4 2005 NYN 83 79 722 648 0.512 0.554 89.7
5 2010 NYN 79 83 656 652 0.488 0.503 81.5
6 2011 NYN 77 85 718 742 0.475 0.484 78.3
7 2012 NYN 74 88 650 709 0.457 0.457 74.0
8 2004 NYN 71 91 684 731 0.438 0.467 75.6
9 2009 NYN 70 92 671 757 0.432 0.440 71.3
```
In 2006, the Mets had the best record in baseball during the regular season and nearly made the [*World Series*](https://en.wikipedia.org/w/index.php?search=World%20Series).
How do these seasons rank in terms of the team’s performance relative to our model?
```
mets_ben %>%
mutate(Diff = W - W_hat) %>%
arrange(desc(Diff))
```
```
yearID teamID W L RS RA WPct WPct_hat W_hat Diff
1 2006 NYN 97 65 834 731 0.599 0.566 91.6 5.3840
2 2007 NYN 88 74 804 750 0.543 0.535 86.6 1.3774
3 2012 NYN 74 88 650 709 0.457 0.457 74.0 0.0199
4 2008 NYN 89 73 799 715 0.549 0.555 90.0 -0.9605
5 2009 NYN 70 92 671 757 0.432 0.440 71.3 -1.2790
6 2011 NYN 77 85 718 742 0.475 0.484 78.3 -1.3377
7 2010 NYN 79 83 656 652 0.488 0.503 81.5 -2.4954
8 2004 NYN 71 91 684 731 0.438 0.467 75.6 -4.6250
9 2005 NYN 83 79 722 648 0.512 0.554 89.7 -6.7249
```
It appears that 2006 was the Mets’ most fortunate year—since they won five more games than our model predicts—but 2005 was the least fortunate—since they won almost seven games fewer than our model predicts.
This type of analysis helps us understand how the Mets performed in individual seasons, but we know that any randomness that occurs in individual years is likely to average out over time.
So while it is clear that the Mets performed well in some seasons and poorly in others, what can we say about their overall performance?
We can easily summarize a single variable with the `skim()` command from the **mdsr** package.
```
mets_ben %>%
skim(W)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 W 9 0 80.9 9.10 70 74 79 88 97
```
This tells us that the Mets won nearly 81 games on average during Ben’s tenure, which corresponds almost exactly to a 0\.500 winning percentage, since there are 162 games in a regular season.
But we may be interested in aggregating more than one variable at a time.
To do this, we use `summarize()`.
```
mets_ben %>%
summarize(
num_years = n(),
total_W = sum(W),
total_L = sum(L),
total_WPct = sum(W) / sum(W + L),
sum_resid = sum(W - W_hat)
)
```
```
num_years total_W total_L total_WPct sum_resid
1 9 728 730 0.499 -10.6
```
In these nine years, the Mets had a combined record of 728 wins and 730 losses, for an overall winning percentage of .499\. Just one extra win would have made them exactly 0\.500!
(If we could pick which game, we would definitely pick [the final game of the 2007 season](https://www.baseball-reference.com/boxes/NYN/NYN200709300.shtml).
A win there would have resulted in a playoff berth.)
However, we’ve also learned that the team under\-performed relative to our model by a total of 10\.6 games over those nine seasons.
Usually, when we are summarizing a data frame like we did above, it is interesting to consider different groups.
In this case, we can discretize these years into three chunks: one for each of the three general managers under whom Ben worked. [Jim Duquette](https://en.wikipedia.org/w/index.php?search=Jim%20Duquette) was the Mets’ [*general manager*](https://en.wikipedia.org/w/index.php?search=general%20manager) in 2004, [Omar Minaya](https://en.wikipedia.org/w/index.php?search=Omar%20Minaya) from 2005 to 2010, and [Sandy Alderson](https://en.wikipedia.org/w/index.php?search=Sandy%20Alderson) from 2011 to 2012\.
We can define these eras using two nested `ifelse()` functions.
```
mets_ben <- mets_ben %>%
mutate(
gm = ifelse(
yearID == 2004,
"Duquette",
ifelse(
yearID >= 2011,
"Alderson",
"Minaya")
)
)
```
Another, more scalable approach to accomplishing this same task is to use a `case_when()` expression.
```
mets_ben <- mets_ben %>%
mutate(
gm = case_when(
yearID == 2004 ~ "Duquette",
yearID >= 2011 ~ "Alderson",
TRUE ~ "Minaya"
)
)
```
Don’t use nested `ifelse()` statements: `case_when()` is far simpler.
Next, we use the `gm` variable to define these groups with the `group_by()` operator.
The combination of grouping and summarizing data can be very powerful.
Note that while the Mets were far more successful during Minaya’s regime (i.e., many more wins than losses), they did not meet expectations in any of the three periods.
```
mets_ben %>%
group_by(gm) %>%
summarize(
num_years = n(),
total_W = sum(W),
total_L = sum(L),
total_WPct = sum(W) / sum(W + L),
sum_resid = sum(W - W_hat)
) %>%
arrange(desc(sum_resid))
```
```
# A tibble: 3 × 6
gm num_years total_W total_L total_WPct sum_resid
<chr> <int> <int> <int> <dbl> <dbl>
1 Alderson 2 151 173 0.466 -1.32
2 Duquette 1 71 91 0.438 -4.63
3 Minaya 6 506 466 0.521 -4.70
```
The full power of the chaining operator is revealed below, where we do all the analysis at once, but retain the step\-by\-step logic.
```
Teams %>%
select(yearID, teamID, W, L, R, RA) %>%
filter(teamID == "NYN" & yearID %in% 2004:2012) %>%
rename(RS = R) %>%
mutate(
WPct = W / (W + L),
WPct_hat = 1 / (1 + (RA/RS)^2),
W_hat = WPct_hat * (W + L),
gm = case_when(
yearID == 2004 ~ "Duquette",
yearID >= 2011 ~ "Alderson",
TRUE ~ "Minaya"
)
) %>%
group_by(gm) %>%
summarize(
num_years = n(),
total_W = sum(W),
total_L = sum(L),
total_WPct = sum(W) / sum(W + L),
sum_resid = sum(W - W_hat)
) %>%
arrange(desc(sum_resid))
```
```
# A tibble: 3 × 6
gm num_years total_W total_L total_WPct sum_resid
<chr> <int> <int> <int> <dbl> <dbl>
1 Alderson 2 151 173 0.466 -1.32
2 Duquette 1 71 91 0.438 -4.63
3 Minaya 6 506 466 0.521 -4.70
```
Even more generally, we might be more interested in how the Mets performed relative to our model, in the context of all teams during that 9\-year period.
All we need to do is remove the `teamID` filter and group by franchise (`franchID`) instead.
```
Teams %>%
select(yearID, teamID, franchID, W, L, R, RA) %>%
filter(yearID %in% 2004:2012) %>%
rename(RS = R) %>%
mutate(
WPct = W / (W + L),
WPct_hat = 1 / (1 + (RA/RS)^2),
W_hat = WPct_hat * (W + L)
) %>%
group_by(franchID) %>%
summarize(
num_years = n(),
total_W = sum(W),
total_L = sum(L),
total_WPct = sum(W) / sum(W + L),
sum_resid = sum(W - W_hat)
) %>%
arrange(sum_resid) %>%
head(6)
```
```
# A tibble: 6 × 6
franchID num_years total_W total_L total_WPct sum_resid
<fct> <int> <int> <int> <dbl> <dbl>
1 TOR 9 717 740 0.492 -29.2
2 ATL 9 781 677 0.536 -24.0
3 COL 9 687 772 0.471 -22.7
4 CHC 9 706 750 0.485 -14.5
5 CLE 9 710 748 0.487 -13.9
6 NYM 9 728 730 0.499 -10.6
```
We can see now that only five other teams fared worse than the Mets,[10](#fn10) relative to our model, during this time period.
Perhaps they are cursed!
4\.3 Further resources
----------------------
[Hadley Wickham](https://en.wikipedia.org/w/index.php?search=Hadley%20Wickham) is an influential innovator in the field of statistical computing.
Along with his colleagues at **RStudio** and other organizations, he has made significant contributions to improve data wrangling in **R**.
These packages are called the [*tidyverse*](https://en.wikipedia.org/w/index.php?search=tidyverse), and are now manageable through a single **tidyverse** (Hadley Wickham 2021g) package.
His papers and vignettes describing widely\-used packages such as **dplyr** (H. Wickham and Francois 2020\) and **tidyr** (Hadley Wickham 2020c) are highly recommended reading.
Finzer (2013\) writes of a “data habit of mind” that needs to be inculcated among data scientists.
The **RStudio** data wrangling [cheat sheet](https://rstudio.com/resources/cheatsheets) is a useful reference.
4\.4 Exercises
--------------
**Problem 1 (Easy)**: Here is a random subset of the `babynames` data frame in the `babynames` package:
```
Random_subset
```
```
# A tibble: 10 × 5
year sex name n prop
<dbl> <chr> <chr> <int> <dbl>
1 2003 M Bilal 146 0.0000695
2 1999 F Terria 23 0.0000118
3 2010 F Naziyah 45 0.0000230
4 1989 F Shawana 41 0.0000206
5 1989 F Jessi 210 0.000105
6 1928 M Tillman 43 0.0000377
7 1981 F Leslee 83 0.0000464
8 1981 F Sherise 27 0.0000151
9 1920 F Marquerite 26 0.0000209
10 1941 M Lorraine 24 0.0000191
```
For each of the following tables wrangled from `Random_subset`, figure out what `dplyr` wrangling statement will produce the result.
1. Hint: Both rows and variables are missing from the original
```
# A tibble: 4 × 4
year sex name n
<dbl> <chr> <chr> <int>
1 2010 F Naziyah 45
2 1989 F Shawana 41
3 1928 M Tillman 43
4 1981 F Leslee 83
```
2. Hint: the `nchar()` function is used in the statement.
```
# A tibble: 2 × 5
year sex name n prop
<dbl> <chr> <chr> <int> <dbl>
1 1999 F Terria 23 0.0000118
2 1981 F Leslee 83 0.0000464
```
3. Hint: Note the new column, which is constructed from `n` and `prop`.
```
# A tibble: 2 × 6
year sex name n prop total
<dbl> <chr> <chr> <int> <dbl> <dbl>
1 1989 F Shawana 41 0.0000206 1992225.
2 1989 F Jessi 210 0.000105 1991843.
```
4. Hint: All the years are still there, but there are only 8 rows as opposed to the original 10 rows.
```
# A tibble: 8 × 2
year total
<dbl> <int>
1 1920 26
2 1928 43
3 1941 24
4 1981 110
5 1989 251
6 1999 23
7 2003 146
8 2010 45
```
**Problem 2 (Easy)**: We’ll be working with the `babynames` data frame in the `babynames` package. To remind you what `babynames` looks like, here are a few rows.
```
# A tibble: 3 × 5
year sex name n prop
<dbl> <chr> <chr> <int> <dbl>
1 2004 M Arjun 250 0.000118
2 1894 F Fedora 5 0.0000212
3 1952 F Donalda 10 0.00000526
```
Say what’s wrong (if anything) with each of the following wrangling commands.
1. `babynames %>% select(n > 100)`
2. `babynames %>% select(- year)`
3. `babynames %>% mutate(name_length == nchar(name))`
4. `babynames %>% sex == M %>% select(-prop)`
5. `babynames %>% select(year, year, sex)`
6. `babynames %>% group_by(n) %>% summarize(ave = mean(n))`
7. `babynames %>% group_by(n > 100) %>% summarize(total = sum(n))`
**Problem 3 (Easy)**: Consider the following pipeline:
```
library(tidyverse)
mtcars %>%
group_by(cyl) %>%
summarize(avg_mpg = mean(mpg)) %>%
filter(am == 1)
```
What is the problem with this pipeline?
**Problem 4 (Easy)**: Define two new variables in the `Teams` data frame in the `Lahman` package.
1. batting average (\\(BA\\)). Batting average is the ratio of hits (`H`) to at\-bats (`AB`).
2. slugging percentage (\\(SLG\\)). Slugging percentage is total bases divided by at\-bats. To compute total bases, you get 1 for a single, 2 for a double, 3 for a triple, and 4 for a home run.
3. Plot out the \\(SLG\\) versus `yearID`, showing the individual teams and a smooth curve.
4. Same as (3), but plot \\(BA\\) versus year.
**Problem 5 (Easy)**: Consider the following pipeline:
```
mtcars %>%
group_by(cyl) %>%
summarize(
N = n(),
avg_mpg = mean(mpg)
)
```
```
# A tibble: 3 × 3
cyl N avg_mpg
<dbl> <int> <dbl>
1 4 11 26.7
2 6 7 19.7
3 8 14 15.1
```
What is the real\-world meaning of the variable `N` in the result set?
**Problem 6 (Easy)**: Each of these tasks can be performed using a single data verb. For each task, say which verb it is:
1. Find the average of one of the variables.
2. Add a new column that is the ratio between two variables.
3. Sort the cases in descending order of a variable.
4. Create a new data table that includes only those cases that meet a criterion.
5. From a data table with three categorical variables A, B, and C, and a quantitative variable X, produce a data frame that has the same cases but only the variables A and X.
**Problem 7 (Medium)**: Using the `Teams` data frame in the `Lahman` package, display the top\-5 teams ranked in terms of slugging percentage (\\(SLG\\)) in Major League Baseball history. Repeat this using teams since 1969\. Slugging percentage is total bases divided by at\-bats. To compute total bases, you get 1 for a single, 2 for a double, 3 for a triple, and 4 for a home run.
**Problem 8 (Medium)**: Using the `Teams` data frame in the `Lahman` package:
1. Plot `SLG` versus `yearID` since 1954 conditioned by league (American vs. National, see `lgID`). Slugging percentage is total bases divided by at\-bats. To compute total bases, you get 1 for a single, 2 for a double, 3 for a triple, and 4 for a home run.
2. Is slugging percentage typically higher in the American League (AL) or the National League (NL)? Can you think of why this might be the case?
**Problem 9 (Medium)**: Use the `nycflights13` package and the `flights` data frame to answer the following questions: What month had the highest proportion of cancelled flights? What month had the lowest? Interpret any seasonal patterns.
**Problem 10 (Medium)**: Using the `Teams` data frame in the `Lahman` package:
1. Create a factor called `election` that divides the `yearID` into 4\-year blocks that correspond to U.S. presidential terms. The first presidential term started in 1788\. They each last 4 years and are still on the schedule set in 1788\.
2. During which term have the most home runs been hit?
**Problem 11 (Medium)**: The `Violations` data set in the `mdsr` package contains information regarding the outcome of health inspections of restaurants in New York City. Use these data to calculate the median violation score by zip code for zip codes in Manhattan with 50 or more inspections. What pattern do you see between the number of inspections and the median score?
**Problem 12 (Medium)**: The `nycflights13` package includes a table (`weather`) that describes the weather during 2013\. Use that table to answer the following questions:
1. What is the distribution of temperature in July, 2013? Identify any important outliers in terms of the `wind_speed` variable.
2. What is the relationship between `dewp` and `humid`?
3. What is the relationship between `precip` and `visib`?
**Problem 13 (Medium)**: The Major League Baseball Angels have at times been called the California Angels (*CAL*), the Anaheim Angels (*ANA*), and the Los Angeles Angels of Anaheim (*LAA*). Using the `Teams` data frame in the `Lahman` package:
1. Find the 10 most successful seasons in Angels history, defining “successful” as the fraction of regular\-season games won in the year. In the table you create, include the `yearID`, `teamID`, `lgID`, `W`, `L`, and `WSWin`. See the documentation for `Teams` for the definition of these variables.
2. Have the Angels ever won the World Series?
**Problem 14 (Medium)**: Use the `nycflights13` package and the `flights` data frame to answer the following question: What plane (specified by the `tailnum` variable) traveled the most times from New York City airports in 2013? Plot the number of trips per week over the year.
**Problem 15 (Hard)**: Replicate the wrangling to create the `house_elections` table in the `fec` package from the original Excel source file.
4\.5 Supplementary exercises
----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-dataI.html\#dataI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-dataI.html#dataI-online-exercises)
**Problem 1 (Easy)**: Which `dplyr` operation is depicted below?
**Problem 2 (Easy)**: Which `dplyr` operation is depicted below?
**Problem 3 (Easy)**: Which `dplyr` operation is depicted below?
**Problem 4 (Easy)**: Which `dplyr` operation is depicted below?
**Problem 5 (Easy)**: Which `dplyr` operation is depicted below?
**Problem 6 (Easy)**: Which `dplyr` operation is depicted below?
---
Chapter 5 Data wrangling on multiple tables
===========================================
In the previous chapter, we illustrated how the five data wrangling verbs can be chained to perform operations on a single table.
A single table is reminiscent of a single well\-organized spreadsheet.
But in the same way that a workbook can contain multiple spreadsheets, we will often work with multiple tables.
In Chapter [15](ch-sql.html#ch:sql), we will describe how multiple tables related by unique identifiers called [*keys*](https://en.wikipedia.org/w/index.php?search=keys) can be organized into a [*relational database management system*](https://en.wikipedia.org/w/index.php?search=relational%20database%20management%20system).
It is more efficient for the computer to store and search tables in which “like is stored with like.”
Thus, a database maintained by the [*Bureau of Transportation Statistics*](https://en.wikipedia.org/w/index.php?search=Bureau%20of%20Transportation%20Statistics) on the arrival times of U.S. commercial flights will consist of multiple tables, each of which contains data about different things.
For example, the **nycflights13** package contains one table about `flights`—each row in this table is a single flight.
As there are many flights, you can imagine that this table will get very long—hundreds of thousands of rows per year.
There are other related kinds of information that we will want to know about these flights.
We would certainly be interested in the particular airline to which each flight belonged.
It would be inefficient to store the complete name of the airline (e.g., `American Airlines Inc.`) in every row of the flights table. A simple code (e.g., `AA`) would take up less space on disk.
For small tables, the savings of storing two characters instead of 25 is insignificant, but for large tables, it can add up to noticeable savings both in terms of the size of data on disk, and the speed with which we can search it.
However, we still want to have the full names of the airlines available if we need them.
The solution is to store the data *about airlines* in a separate table called `airlines`, and to provide a [*key*](https://en.wikipedia.org/w/index.php?search=key) that links the data in the two tables together.
5\.1 `inner_join()`
-------------------
If we examine the first few rows of the `flights` table, we observe that the `carrier` column contains a two\-character string corresponding to the airline.
```
library(tidyverse)
library(mdsr)
library(nycflights13)
glimpse(flights)
```
```
Rows: 336,776
Columns: 19
$ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 201…
$ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
$ day <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
$ dep_time <int> 517, 533, 542, 544, 554, 554, 555, 557, 557, 558, 5…
$ sched_dep_time <int> 515, 529, 540, 545, 600, 558, 600, 600, 600, 600, 6…
$ dep_delay <dbl> 2, 4, 2, -1, -6, -4, -5, -3, -3, -2, -2, -2, -2, -2…
$ arr_time <int> 830, 850, 923, 1004, 812, 740, 913, 709, 838, 753, …
$ sched_arr_time <int> 819, 830, 850, 1022, 837, 728, 854, 723, 846, 745, …
$ arr_delay <dbl> 11, 20, 33, -18, -25, 12, 19, -14, -8, 8, -2, -3, 7…
$ carrier <chr> "UA", "UA", "AA", "B6", "DL", "UA", "B6", "EV", "B6…
$ flight <int> 1545, 1714, 1141, 725, 461, 1696, 507, 5708, 79, 30…
$ tailnum <chr> "N14228", "N24211", "N619AA", "N804JB", "N668DN", "…
$ origin <chr> "EWR", "LGA", "JFK", "JFK", "LGA", "EWR", "EWR", "L…
$ dest <chr> "IAH", "IAH", "MIA", "BQN", "ATL", "ORD", "FLL", "I…
$ air_time <dbl> 227, 227, 160, 183, 116, 150, 158, 53, 140, 138, 14…
$ distance <dbl> 1400, 1416, 1089, 1576, 762, 719, 1065, 229, 944, 7…
$ hour <dbl> 5, 5, 5, 5, 6, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5, 6, …
$ minute <dbl> 15, 29, 40, 45, 0, 58, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5…
$ time_hour <dttm> 2013-01-01 05:00:00, 2013-01-01 05:00:00, 2013-01-…
```
In the `airlines` table, we have those same two\-character strings, but also the full names of the airline.
```
head(airlines, 3)
```
```
# A tibble: 3 × 2
carrier name
<chr> <chr>
1 9E Endeavor Air Inc.
2 AA American Airlines Inc.
3 AS Alaska Airlines Inc.
```
In order to retrieve a list of flights and the full names of the airlines that managed each flight, we need to match up the rows in the `flights` table with those rows in the `airlines` table that have the corresponding values for the `carrier` column in *both* tables. This is achieved with the function `inner_join()`.
```
flights_joined <- flights %>%
inner_join(airlines, by = c("carrier" = "carrier"))
glimpse(flights_joined)
```
```
Rows: 336,776
Columns: 20
$ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 201…
$ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
$ day <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
$ dep_time <int> 517, 533, 542, 544, 554, 554, 555, 557, 557, 558, 5…
$ sched_dep_time <int> 515, 529, 540, 545, 600, 558, 600, 600, 600, 600, 6…
$ dep_delay <dbl> 2, 4, 2, -1, -6, -4, -5, -3, -3, -2, -2, -2, -2, -2…
$ arr_time <int> 830, 850, 923, 1004, 812, 740, 913, 709, 838, 753, …
$ sched_arr_time <int> 819, 830, 850, 1022, 837, 728, 854, 723, 846, 745, …
$ arr_delay <dbl> 11, 20, 33, -18, -25, 12, 19, -14, -8, 8, -2, -3, 7…
$ carrier <chr> "UA", "UA", "AA", "B6", "DL", "UA", "B6", "EV", "B6…
$ flight <int> 1545, 1714, 1141, 725, 461, 1696, 507, 5708, 79, 30…
$ tailnum <chr> "N14228", "N24211", "N619AA", "N804JB", "N668DN", "…
$ origin <chr> "EWR", "LGA", "JFK", "JFK", "LGA", "EWR", "EWR", "L…
$ dest <chr> "IAH", "IAH", "MIA", "BQN", "ATL", "ORD", "FLL", "I…
$ air_time <dbl> 227, 227, 160, 183, 116, 150, 158, 53, 140, 138, 14…
$ distance <dbl> 1400, 1416, 1089, 1576, 762, 719, 1065, 229, 944, 7…
$ hour <dbl> 5, 5, 5, 5, 6, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5, 6, …
$ minute <dbl> 15, 29, 40, 45, 0, 58, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5…
$ time_hour <dttm> 2013-01-01 05:00:00, 2013-01-01 05:00:00, 2013-01-…
$ name <chr> "United Air Lines Inc.", "United Air Lines Inc.", "…
```
Notice that the `flights_joined` data frame now has an additional variable called `name`.
This is the column from `airlines` that is now included in the combined data frame.
We can view the full names of the airlines instead of the cryptic two\-character codes.
```
flights_joined %>%
select(carrier, name, flight, origin, dest) %>%
head(3)
```
```
# A tibble: 3 × 5
carrier name flight origin dest
<chr> <chr> <int> <chr> <chr>
1 UA United Air Lines Inc. 1545 EWR IAH
2 UA United Air Lines Inc. 1714 LGA IAH
3 AA American Airlines Inc. 1141 JFK MIA
```
In an `inner_join()`, the result set contains only those rows that have matches in both tables. In this case, all of the rows in `flights` have exactly one corresponding entry in `airlines`, so the number of rows in `flights_joined` is the same as the number of rows in `flights` (this will not always be the case).
```
nrow(flights)
```
```
[1] 336776
```
```
nrow(flights_joined)
```
```
[1] 336776
```
It is always a good idea to carefully check that the number of rows returned by a join operation is what you expected. In particular, you should carefully check for rows in one table that matched to more than one row in the other table.
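One way to perform that check (a sketch, using the tables we just joined) is to count how many times each key appears in the lookup table; any carrier code with a count greater than one would duplicate rows in the joined result.

```
# Carrier codes that appear more than once in airlines (there should be none)
airlines %>%
  count(carrier) %>%
  filter(n > 1)
```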
5\.2 `left_join()`
------------------
Another commonly\-used type of join is a `left_join()`. Here the rows of the first table are *always* returned, regardless of whether there is a match in the second table.
Suppose that we are only interested in flights from the [*New York City*](https://en.wikipedia.org/w/index.php?search=New%20York%20City) airports on the [*West Coast*](https://en.wikipedia.org/w/index.php?search=West%20Coast).
To restrict ourselves to airports in the [*Pacific Time Zone*](https://en.wikipedia.org/w/index.php?search=Pacific%20Time%20Zone) (UTC \-8\) we can filter the `airports` data frame to only include those airports.
```
airports_pt <- airports %>%
filter(tz == -8)
nrow(airports_pt)
```
```
[1] 178
```
Now, if we perform an `inner_join()` on `flights` and `airports_pt`, matching the destinations in `flights` to the [*FAA*](https://en.wikipedia.org/w/index.php?search=FAA) codes in `airports`, we retrieve only those flights that flew to our airports in the Pacific Time Zone.
```
nyc_dests_pt <- flights %>%
inner_join(airports_pt, by = c("dest" = "faa"))
nrow(nyc_dests_pt)
```
```
[1] 46324
```
However, if we use a `left_join()` with the same conditions, we retrieve all of the rows of `flights`. `NA`’s are inserted into the columns where no matched data was found.
```
nyc_dests <- flights %>%
left_join(airports_pt, by = c("dest" = "faa"))
nyc_dests %>%
summarize(
num_flights = n(),
num_flights_pt = sum(!is.na(name)),
num_flights_not_pt = sum(is.na(name))
)
```
```
# A tibble: 1 × 3
num_flights num_flights_pt num_flights_not_pt
<int> <int> <int>
1 336776 46324 290452
```
Left joins are particularly useful in databases in which [*referential integrity*](https://en.wikipedia.org/w/index.php?search=referential%20integrity) is broken (not all of the [*keys*](https://en.wikipedia.org/w/index.php?search=keys) are present—see Chapter [15](ch-sql.html#ch:sql)).
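A tiny contrived example (hypothetical tables, not part of **nycflights13**) illustrates the point: the code `XX` has no match in the lookup table, so `left_join()` keeps that row and fills in an `NA`, whereas `inner_join()` would silently drop it.

```
codes <- tibble(code = c("AA", "UA", "XX"))
lookup <- tibble(code = c("AA", "UA"), name = c("American", "United"))
codes %>%
  left_join(lookup, by = "code") # the "XX" row survives, with name = NA
```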
5\.3 Extended example: Manny Ramirez
------------------------------------
In the context of baseball and the **Lahman** package, multiple tables are used to store information. The batting statistics of players are stored in one table (`Batting`), while information about people (most of whom are players) is in a different table (`Master`).
Every row in the `Batting` table contains the statistics accumulated by a single player during a single stint for a single team in a single year. Thus, a player like [Manny Ramirez](https://en.wikipedia.org/w/index.php?search=Manny%20Ramirez) has many rows in the `Batting` table ([21, in fact](http://www.baseball-reference.com/players/r/ramirma02.shtml)).
```
library(Lahman)
manny <- Batting %>%
filter(playerID == "ramirma02")
nrow(manny)
```
```
[1] 21
```
Using what we’ve learned, we can quickly tabulate Ramirez’s most common career offensive statistics.
For those new to baseball, some additional background may be helpful.
A hit (`H`) occurs when a batter reaches base safely. A [*home run*](https://en.wikipedia.org/w/index.php?search=home%20run) (`HR`) occurs when the ball is hit out of the park or the runner advances through all of the bases during that play.
[Barry Bonds](https://en.wikipedia.org/w/index.php?search=Barry%20Bonds) has the record for most home runs (762\) hit in a career. A player’s batting average (`BA`) is the ratio of the number of hits to the number of eligible at\-bats.
The highest career batting average in [*Major League Baseball*](https://en.wikipedia.org/w/index.php?search=Major%20League%20Baseball) history of 0\.366 was achieved by [Ty Cobb](https://en.wikipedia.org/w/index.php?search=Ty%20Cobb)—season averages above 0\.300 are impressive.
Finally, runs batted in (`RBI`) is the number of runners (including the batter in the case of a home run) that score during that batter’s at\-bat.
[Hank Aaron](https://en.wikipedia.org/w/index.php?search=Hank%20Aaron) has the record for most career RBIs with 2,297\.
```
manny %>%
summarize(
span = paste(min(yearID), max(yearID), sep = "-"),
num_years = n_distinct(yearID),
num_teams = n_distinct(teamID),
BA = sum(H)/sum(AB),
tH = sum(H),
tHR = sum(HR),
tRBI = sum(RBI)
)
```
```
span num_years num_teams BA tH tHR tRBI
1 1993-2011 19 5 0.312 2574 555 1831
```
Notice how we have used the `paste()` function to combine results from multiple variables into a new variable, and how we have used the `n_distinct()` function to count the number of distinct rows.
In his 19\-year career, Ramirez hit 555 home runs, which puts him in the top 20 among all Major League players.
However, we also see that Ramirez played for five teams during his career.
Did he perform equally well for each of them?
Breaking his statistics down by team, or by league, is as easy as adding an appropriate `group_by()` command.
```
manny %>%
group_by(teamID) %>%
summarize(
span = paste(min(yearID), max(yearID), sep = "-"),
num_years = n_distinct(yearID),
num_teams = n_distinct(teamID),
BA = sum(H)/sum(AB),
tH = sum(H),
tHR = sum(HR),
tRBI = sum(RBI)
) %>%
arrange(span)
```
```
# A tibble: 5 × 8
teamID span num_years num_teams BA tH tHR tRBI
<fct> <chr> <int> <int> <dbl> <int> <int> <int>
1 CLE 1993-2000 8 1 0.313 1086 236 804
2 BOS 2001-2008 8 1 0.312 1232 274 868
3 LAN 2008-2010 3 1 0.322 237 44 156
4 CHA 2010-2010 1 1 0.261 18 1 2
5 TBA 2011-2011 1 1 0.0588 1 0 1
```
While Ramirez was very productive for Cleveland, Boston, and the [*Los Angeles Dodgers*](https://en.wikipedia.org/w/index.php?search=Los%20Angeles%20Dodgers), his brief tours with the [*Chicago White Sox*](https://en.wikipedia.org/w/index.php?search=Chicago%20White%20Sox) and [*Tampa Bay Rays*](https://en.wikipedia.org/w/index.php?search=Tampa%20Bay%20Rays) were less than stellar.
In the pipeline below, we can see that Ramirez spent the bulk of his career in the [*American League*](https://en.wikipedia.org/w/index.php?search=American%20League).
```
manny %>%
group_by(lgID) %>%
summarize(
span = paste(min(yearID), max(yearID), sep = "-"),
num_years = n_distinct(yearID),
num_teams = n_distinct(teamID),
BA = sum(H)/sum(AB),
tH = sum(H),
tHR = sum(HR),
tRBI = sum(RBI)
) %>%
arrange(span)
```
```
# A tibble: 2 × 8
lgID span num_years num_teams BA tH tHR tRBI
<fct> <chr> <int> <int> <dbl> <int> <int> <int>
1 AL 1993-2011 18 4 0.311 2337 511 1675
2 NL 2008-2010 3 1 0.322 237 44 156
```
If Ramirez played in only 19 different seasons, why were there 21 rows attributed to him? Notice that in 2008, he was traded from the [*Boston Red Sox*](https://en.wikipedia.org/w/index.php?search=Boston%20Red%20Sox) to the [*Los Angeles Dodgers*](https://en.wikipedia.org/w/index.php?search=Los%20Angeles%20Dodgers), and thus played for both teams. Similarly, in 2010 he played for both the Dodgers and the [*Chicago White Sox*](https://en.wikipedia.org/w/index.php?search=Chicago%20White%20Sox).
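We can confirm this by inspecting the individual stints in those two seasons (the `Batting` table records each stint as its own row, indexed by the `stint` variable; output omitted):

```
manny %>%
  filter(yearID %in% c(2008, 2010)) %>%
  select(yearID, stint, teamID, HR)
```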
When summarizing data, it is critically important to understand exactly how the rows of your data frame are organized.
To see what can go wrong here, suppose we were interested in tabulating the number of seasons in which Ramirez hit at least 30 home runs. The simplest solution is:
```
manny %>%
filter(HR >= 30) %>%
nrow()
```
```
[1] 11
```
But this answer is wrong, because in 2008, Ramirez hit 20 home runs for Boston before being traded and then 17 more for the Dodgers afterwards. Neither of those rows was counted, since they were *both* filtered out. Thus, the year 2008 does not appear among the 11 that we counted in the previous pipeline. Recall that each row in the `manny` data frame corresponds to one stint with one team in one year. On the other hand, the question asks us to consider each year, *regardless of team*. In order to get the right answer, we have to aggregate the rows by year. Thus, the correct solution is:
```
manny %>%
group_by(yearID) %>%
summarize(tHR = sum(HR)) %>%
filter(tHR >= 30) %>%
nrow()
```
```
[1] 12
```
Note that the `filter()` operation is applied to `tHR`, the total number of home runs in a season, and not `HR`, the number of home runs in a single stint for a single team in a single season. (This distinction between filtering the rows of the original data versus the rows of the aggregated results will appear again in Chapter [15](ch-sql.html#ch:sql).)
We began this example by filtering the `Batting` table for the player with `playerID` equal to `ramirma02`.
How did we know to use this identifier?
This player ID is known as a [*key*](https://en.wikipedia.org/w/index.php?search=key), and in fact, `playerID` is the [*primary key*](https://en.wikipedia.org/w/index.php?search=primary%20key) defined in the `Master` table.
That is, every row in the `Master` table is uniquely identified by the value of `playerID`.
There is exactly one row in that table for which `playerID` is equal to `ramirma02`.
But how did we know that this ID corresponds to [Manny Ramirez](https://en.wikipedia.org/w/index.php?search=Manny%20Ramirez)? We can search the `Master` table.
The data in this table include characteristics about [Manny Ramirez](https://en.wikipedia.org/w/index.php?search=Manny%20Ramirez) that do not change across multiple seasons (with the possible exception of his weight).
```
Master %>%
filter(nameLast == "Ramirez" & nameFirst == "Manny")
```
```
playerID birthYear birthMonth birthDay birthCountry birthState
1 ramirma02 1972 5 30 D.R. Distrito Nacional
birthCity deathYear deathMonth deathDay deathCountry deathState
1 Santo Domingo NA NA NA <NA> <NA>
deathCity nameFirst nameLast nameGiven weight height bats throws
1 <NA> Manny Ramirez Manuel Aristides 225 72 R R
debut finalGame retroID bbrefID deathDate birthDate
1 1993-09-02 2011-04-06 ramim002 ramirma02 <NA> 1972-05-30
```
The `playerID` column forms a primary key in the `Master` table, but it does not in the `Batting` table, since as we saw previously, there were 21 rows with that `playerID`. In the `Batting` table, the `playerID` column is known as a [*foreign key*](https://en.wikipedia.org/w/index.php?search=foreign%20key), in that it references a primary key in another table. For our purposes, the presence of this column in both tables allows us to link them together. This way, we can combine data from the `Batting` table with data in the `Master` table. We do this with `inner_join()` by specifying the two tables that we want to join, and the corresponding columns in each table that provide the link. Thus, if we want to display Ramirez’s name in our previous result, as well as his age, we must join the `Batting` and `Master` tables together.
Always specify the `by` argument that defines the join condition. Don’t rely on the defaults.
```
Batting %>%
filter(playerID == "ramirma02") %>%
inner_join(Master, by = c("playerID" = "playerID")) %>%
group_by(yearID) %>%
summarize(
Age = max(yearID - birthYear),
num_teams = n_distinct(teamID),
BA = sum(H)/sum(AB),
tH = sum(H),
tHR = sum(HR),
tRBI = sum(RBI)
) %>%
arrange(yearID)
```
```
# A tibble: 19 × 7
yearID Age num_teams BA tH tHR tRBI
<int> <int> <int> <dbl> <int> <int> <int>
1 1993 21 1 0.170 9 2 5
2 1994 22 1 0.269 78 17 60
3 1995 23 1 0.308 149 31 107
4 1996 24 1 0.309 170 33 112
5 1997 25 1 0.328 184 26 88
6 1998 26 1 0.294 168 45 145
7 1999 27 1 0.333 174 44 165
8 2000 28 1 0.351 154 38 122
9 2001 29 1 0.306 162 41 125
10 2002 30 1 0.349 152 33 107
11 2003 31 1 0.325 185 37 104
12 2004 32 1 0.308 175 43 130
13 2005 33 1 0.292 162 45 144
14 2006 34 1 0.321 144 35 102
15 2007 35 1 0.296 143 20 88
16 2008 36 2 0.332 183 37 121
17 2009 37 1 0.290 102 19 63
18 2010 38 2 0.298 79 9 42
19 2011 39 1 0.0588 1 0 1
```
Notice that even though Ramirez’s age is a constant for each season, we have to use a vector operation (i.e., `max()` or `first()`) in order to reduce any potential vector to a single number.
Which season was Ramirez’s best as a hitter? One relatively simple measurement of batting prowess is OPS, or [*On\-Base Plus Slugging Percentage*](https://en.wikipedia.org/w/index.php?search=On-Base%20Plus%20Slugging%20Percentage), which is the simple sum of two other statistics: [*On\-Base Percentage*](https://en.wikipedia.org/w/index.php?search=On-Base%20Percentage) (OBP) and [*Slugging Percentage*](https://en.wikipedia.org/w/index.php?search=Slugging%20Percentage) (SLG). The former basically measures the proportion of time that a batter reaches base safely, whether it comes via a hit (`H`), a base on balls (`BB`), or from being hit by the pitch (`HBP`). The latter measures the average number of bases advanced per at\-bat (`AB`), where a single is worth one base, a double (`X2B`) is worth two, a triple (`X3B`) is worth three, and a home run (`HR`) is worth four. (Note that every hit is exactly one of a single, double, triple, or home run.)
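Written in the notation of the `Batting` table columns (mirroring the computation in the code below, where `X2B` and `X3B` denote doubles and triples and `SF` denotes sacrifice flies), the two components are:
\\\[
OBP \= \\frac{H \+ BB \+ HBP}{AB \+ BB \+ SF \+ HBP} \\,, \\quad SLG \= \\frac{H \+ X2B \+ 2 \\cdot X3B \+ 3 \\cdot HR}{AB} \\,.
\\]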
Let’s add these statistics to our results and use it to rank the seasons.
```
manny_by_season <- Batting %>%
filter(playerID == "ramirma02") %>%
inner_join(Master, by = c("playerID" = "playerID")) %>%
group_by(yearID) %>%
summarize(
Age = max(yearID - birthYear),
num_teams = n_distinct(teamID),
BA = sum(H)/sum(AB),
tH = sum(H),
tHR = sum(HR),
tRBI = sum(RBI),
OBP = sum(H + BB + HBP) / sum(AB + BB + SF + HBP),
SLG = sum(H + X2B + 2 * X3B + 3 * HR) / sum(AB)
) %>%
mutate(OPS = OBP + SLG) %>%
arrange(desc(OPS))
manny_by_season
```
```
# A tibble: 19 × 10
yearID Age num_teams BA tH tHR tRBI OBP SLG OPS
<int> <int> <int> <dbl> <int> <int> <int> <dbl> <dbl> <dbl>
1 2000 28 1 0.351 154 38 122 0.457 0.697 1.15
2 1999 27 1 0.333 174 44 165 0.442 0.663 1.11
3 2002 30 1 0.349 152 33 107 0.450 0.647 1.10
4 2006 34 1 0.321 144 35 102 0.439 0.619 1.06
5 2008 36 2 0.332 183 37 121 0.430 0.601 1.03
6 2003 31 1 0.325 185 37 104 0.427 0.587 1.01
7 2001 29 1 0.306 162 41 125 0.405 0.609 1.01
8 2004 32 1 0.308 175 43 130 0.397 0.613 1.01
9 2005 33 1 0.292 162 45 144 0.388 0.594 0.982
10 1996 24 1 0.309 170 33 112 0.399 0.582 0.981
11 1998 26 1 0.294 168 45 145 0.377 0.599 0.976
12 1995 23 1 0.308 149 31 107 0.402 0.558 0.960
13 1997 25 1 0.328 184 26 88 0.415 0.538 0.953
14 2009 37 1 0.290 102 19 63 0.418 0.531 0.949
15 2007 35 1 0.296 143 20 88 0.388 0.493 0.881
16 1994 22 1 0.269 78 17 60 0.357 0.521 0.878
17 2010 38 2 0.298 79 9 42 0.409 0.460 0.870
18 1993 21 1 0.170 9 2 5 0.2 0.302 0.502
19 2011 39 1 0.0588 1 0 1 0.0588 0.0588 0.118
```
We see that Ramirez’s OPS was highest in 2000\. But 2000 was the height of the [*steroid era*](https://en.wikipedia.org/w/index.php?search=steroid%20era), when many sluggers were putting up tremendous offensive numbers. As data scientists, we know that it would be more instructive to put Ramirez’s OPS in context by comparing it to the league average OPS in each season—the resulting ratio is often called [*OPS\+*](https://en.wikipedia.org/w/index.php?search=OPS+). To do this, we will need to compute those averages. Because there is missing data in some of these columns in some of these years, we need to invoke the `na.rm` argument to ignore that data.
```
mlb <- Batting %>%
filter(yearID %in% 1993:2011) %>%
group_by(yearID) %>%
summarize(
lg_OBP = sum(H + BB + HBP, na.rm = TRUE) /
sum(AB + BB + SF + HBP, na.rm = TRUE),
lg_SLG = sum(H + X2B + 2*X3B + 3*HR, na.rm = TRUE) /
sum(AB, na.rm = TRUE)
) %>%
mutate(lg_OPS = lg_OBP + lg_SLG)
```
Next, we need to match these league average OPS values to the corresponding entries for Ramirez. We can do this by joining these tables together, and computing the ratio of Ramirez’s OPS to that of the league average.
```
manny_ratio <- manny_by_season %>%
inner_join(mlb, by = c("yearID" = "yearID")) %>%
mutate(OPS_plus = OPS / lg_OPS) %>%
select(yearID, Age, OPS, lg_OPS, OPS_plus) %>%
arrange(desc(OPS_plus))
manny_ratio
```
```
# A tibble: 19 × 5
yearID Age OPS lg_OPS OPS_plus
<int> <int> <dbl> <dbl> <dbl>
1 2000 28 1.15 0.782 1.48
2 2002 30 1.10 0.748 1.47
3 1999 27 1.11 0.778 1.42
4 2006 34 1.06 0.768 1.38
5 2008 36 1.03 0.749 1.38
6 2003 31 1.01 0.755 1.34
7 2001 29 1.01 0.759 1.34
8 2004 32 1.01 0.763 1.32
9 2005 33 0.982 0.749 1.31
10 1998 26 0.976 0.755 1.29
11 1996 24 0.981 0.767 1.28
12 1995 23 0.960 0.755 1.27
13 2009 37 0.949 0.751 1.26
14 1997 25 0.953 0.756 1.26
15 2010 38 0.870 0.728 1.19
16 2007 35 0.881 0.758 1.16
17 1994 22 0.878 0.763 1.15
18 1993 21 0.502 0.736 0.682
19 2011 39 0.118 0.720 0.163
```
In this case, 2000 still ranks as Ramirez’s best season relative to his peers, but notice that his 1999 season has fallen from 2nd to 3rd. Since by definition a league average batter has an OPS\+ of 1,
Ramirez posted 17 consecutive seasons with an OPS that was at least 15% better than the average across the major leagues—a truly impressive feat.
Finally, not all joins are the same.
An `inner_join()` requires corresponding entries in *both* tables.
Conversely, a `left_join()` returns at least as many rows as there are in the first table, regardless of whether there are matches in the second table.
An `inner_join()` is bidirectional, whereas in a `left_join()`, the order in which you specify the tables matters.
Consider the career of [Cal Ripken](https://en.wikipedia.org/w/index.php?search=Cal%20Ripken), who played in 21 seasons from 1981 to 2001\. His career overlapped with Ramirez’s in the nine seasons from 1993 to 2001, so for those, the league averages we computed before are useful.
```
ripken <- Batting %>%
filter(playerID == "ripkeca01")
ripken %>%
inner_join(mlb, by = c("yearID" = "yearID")) %>%
nrow()
```
```
[1] 9
```
```
# same
mlb %>%
inner_join(ripken, by = c("yearID" = "yearID")) %>%
nrow()
```
```
[1] 9
```
For Ripken’s seasons before 1993 (years in which Ramirez did not play, and for which we therefore did not compute league averages), `NA`’s will be returned.
```
ripken %>%
left_join(mlb, by = c("yearID" = "yearID")) %>%
select(yearID, playerID, lg_OPS) %>%
head(3)
```
```
yearID playerID lg_OPS
1 1981 ripkeca01 NA
2 1982 ripkeca01 NA
3 1983 ripkeca01 NA
```
5\.1 `inner_join()`
-------------------
If we examine the first few rows of the `flights` table, we observe that the `carrier` column contains a two\-character string corresponding to the airline.
```
library(tidyverse)
library(mdsr)
library(nycflights13)
glimpse(flights)
```
```
Rows: 336,776
Columns: 19
$ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 201…
$ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
$ day <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
$ dep_time <int> 517, 533, 542, 544, 554, 554, 555, 557, 557, 558, 5…
$ sched_dep_time <int> 515, 529, 540, 545, 600, 558, 600, 600, 600, 600, 6…
$ dep_delay <dbl> 2, 4, 2, -1, -6, -4, -5, -3, -3, -2, -2, -2, -2, -2…
$ arr_time <int> 830, 850, 923, 1004, 812, 740, 913, 709, 838, 753, …
$ sched_arr_time <int> 819, 830, 850, 1022, 837, 728, 854, 723, 846, 745, …
$ arr_delay <dbl> 11, 20, 33, -18, -25, 12, 19, -14, -8, 8, -2, -3, 7…
$ carrier <chr> "UA", "UA", "AA", "B6", "DL", "UA", "B6", "EV", "B6…
$ flight <int> 1545, 1714, 1141, 725, 461, 1696, 507, 5708, 79, 30…
$ tailnum <chr> "N14228", "N24211", "N619AA", "N804JB", "N668DN", "…
$ origin <chr> "EWR", "LGA", "JFK", "JFK", "LGA", "EWR", "EWR", "L…
$ dest <chr> "IAH", "IAH", "MIA", "BQN", "ATL", "ORD", "FLL", "I…
$ air_time <dbl> 227, 227, 160, 183, 116, 150, 158, 53, 140, 138, 14…
$ distance <dbl> 1400, 1416, 1089, 1576, 762, 719, 1065, 229, 944, 7…
$ hour <dbl> 5, 5, 5, 5, 6, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5, 6, …
$ minute <dbl> 15, 29, 40, 45, 0, 58, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5…
$ time_hour <dttm> 2013-01-01 05:00:00, 2013-01-01 05:00:00, 2013-01-…
```
In the `airlines` table, we have those same two\-character strings, but also the full names of the airline.
```
head(airlines, 3)
```
```
# A tibble: 3 × 2
carrier name
<chr> <chr>
1 9E Endeavor Air Inc.
2 AA American Airlines Inc.
3 AS Alaska Airlines Inc.
```
In order to retrieve a list of flights and the full names of the airlines that managed each flight, we need to match up the rows in the `flights` table with those rows in the `airlines` table that have the corresponding values for the `carrier` column in *both* tables. This is achieved with the function `inner_join()`.
```
flights_joined <- flights %>%
inner_join(airlines, by = c("carrier" = "carrier"))
glimpse(flights_joined)
```
```
Rows: 336,776
Columns: 20
$ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 201…
$ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
$ day <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
$ dep_time <int> 517, 533, 542, 544, 554, 554, 555, 557, 557, 558, 5…
$ sched_dep_time <int> 515, 529, 540, 545, 600, 558, 600, 600, 600, 600, 6…
$ dep_delay <dbl> 2, 4, 2, -1, -6, -4, -5, -3, -3, -2, -2, -2, -2, -2…
$ arr_time <int> 830, 850, 923, 1004, 812, 740, 913, 709, 838, 753, …
$ sched_arr_time <int> 819, 830, 850, 1022, 837, 728, 854, 723, 846, 745, …
$ arr_delay <dbl> 11, 20, 33, -18, -25, 12, 19, -14, -8, 8, -2, -3, 7…
$ carrier <chr> "UA", "UA", "AA", "B6", "DL", "UA", "B6", "EV", "B6…
$ flight <int> 1545, 1714, 1141, 725, 461, 1696, 507, 5708, 79, 30…
$ tailnum <chr> "N14228", "N24211", "N619AA", "N804JB", "N668DN", "…
$ origin <chr> "EWR", "LGA", "JFK", "JFK", "LGA", "EWR", "EWR", "L…
$ dest <chr> "IAH", "IAH", "MIA", "BQN", "ATL", "ORD", "FLL", "I…
$ air_time <dbl> 227, 227, 160, 183, 116, 150, 158, 53, 140, 138, 14…
$ distance <dbl> 1400, 1416, 1089, 1576, 762, 719, 1065, 229, 944, 7…
$ hour <dbl> 5, 5, 5, 5, 6, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5, 6, …
$ minute <dbl> 15, 29, 40, 45, 0, 58, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5…
$ time_hour <dttm> 2013-01-01 05:00:00, 2013-01-01 05:00:00, 2013-01-…
$ name <chr> "United Air Lines Inc.", "United Air Lines Inc.", "…
```
Notice that the `flights_joined` data frame now has an additional variable called `name`.
This is the column from `airlines` that is now included in the combined data frame.
We can view the full names of the airlines instead of the cryptic two\-character codes.
```
flights_joined %>%
select(carrier, name, flight, origin, dest) %>%
head(3)
```
```
# A tibble: 3 × 5
carrier name flight origin dest
<chr> <chr> <int> <chr> <chr>
1 UA United Air Lines Inc. 1545 EWR IAH
2 UA United Air Lines Inc. 1714 LGA IAH
3 AA American Airlines Inc. 1141 JFK MIA
```
In an `inner_join()`, the result set contains only those rows that have matches in both tables. In this case, all of the rows in `flights` have exactly one corresponding entry in `airlines`, so the number of rows in `flights_joined` is the same as the number of rows in `flights` (this will not always be the case).
```
nrow(flights)
```
```
[1] 336776
```
```
nrow(flights_joined)
```
```
[1] 336776
```
It is always a good idea to check that the number of rows returned by a join operation is what you expected. In particular, you should check whether any rows in one table matched more than one row in the other table, since such matches multiply rows in the result.
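For example, here is a minimal sketch of such a check with the tables used above: if every key value appears at most once in the lookup table, the join cannot inflate the row count.

```
# A carrier code appearing more than once in `airlines` would duplicate
# rows in the join; this should return zero rows.
airlines %>%
  count(carrier) %>%
  filter(n > 1)

# A direct comparison of row counts before and after the join.
nrow(flights) == nrow(flights_joined)
```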
5\.2 `left_join()`
------------------
Another commonly\-used type of join is a `left_join()`. Here the rows of the first table are *always* returned, regardless of whether there is a match in the second table.
Suppose that we are only interested in flights from the [*New York City*](https://en.wikipedia.org/w/index.php?search=New%20York%20City) airports on the [*West Coast*](https://en.wikipedia.org/w/index.php?search=West%20Coast).
To restrict ourselves to airports in the [*Pacific Time Zone*](https://en.wikipedia.org/w/index.php?search=Pacific%20Time%20Zone) (UTC \-8\) we can filter the `airports` data frame to only include those airports.
```
airports_pt <- airports %>%
filter(tz == -8)
nrow(airports_pt)
```
```
[1] 178
```
Now, if we perform an `inner_join()` on `flights` and `airports_pt`, matching the destinations in `flights` to the [*FAA*](https://en.wikipedia.org/w/index.php?search=FAA) codes in `airports`, we retrieve only those flights that flew to our airports in the Pacific Time Zone.
```
nyc_dests_pt <- flights %>%
inner_join(airports_pt, by = c("dest" = "faa"))
nrow(nyc_dests_pt)
```
```
[1] 46324
```
However, if we use a `left_join()` with the same conditions, we retrieve all of the rows of `flights`. `NA`’s are inserted into the columns where no matched data was found.
```
nyc_dests <- flights %>%
left_join(airports_pt, by = c("dest" = "faa"))
nyc_dests %>%
summarize(
num_flights = n(),
num_flights_pt = sum(!is.na(name)),
num_flights_not_pt = sum(is.na(name))
)
```
```
# A tibble: 1 × 3
num_flights num_flights_pt num_flights_not_pt
<int> <int> <int>
1 336776 46324 290452
```
Left joins are particularly useful in databases in which [*referential integrity*](https://en.wikipedia.org/w/index.php?search=referential%20integrity) is broken (not all of the [*keys*](https://en.wikipedia.org/w/index.php?search=keys) are present—see Chapter [15](ch-sql.html#ch:sql)).
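As a quick sketch of how to spot such broken keys with the tables already loaded, `anti_join()` returns the rows of the first table that have *no* match in the second, so any destination codes it reports are exactly the keys for which referential integrity fails.

```
# Destination codes in `flights` with no corresponding `faa` entry in
# `airports`; an empty result would mean referential integrity holds.
flights %>%
  anti_join(airports, by = c("dest" = "faa")) %>%
  distinct(dest)
```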
5\.3 Extended example: Manny Ramirez
------------------------------------
In the context of baseball and the **Lahman** package, multiple tables are used to store information. The batting statistics of players are stored in one table (`Batting`), while information about people (most of whom are players) is in a different table (`Master`).
Every row in the `Batting` table contains the statistics accumulated by a single player during a single stint for a single team in a single year. Thus, a player like [Manny Ramirez](https://en.wikipedia.org/w/index.php?search=Manny%20Ramirez) has many rows in the `Batting` table ([21, in fact](http://www.baseball-reference.com/players/r/ramirma02.shtml)).
```
library(Lahman)
manny <- Batting %>%
filter(playerID == "ramirma02")
nrow(manny)
```
```
[1] 21
```
Using what we’ve learned, we can quickly tabulate Ramirez’s most common career offensive statistics.
For those new to baseball, some additional background may be helpful.
A hit (`H`) occurs when a batter reaches base safely. A [*home run*](https://en.wikipedia.org/w/index.php?search=home%20run) (`HR`) occurs when the ball is hit out of the park or the runner advances through all of the bases during that play.
[Barry Bonds](https://en.wikipedia.org/w/index.php?search=Barry%20Bonds) has the record for most home runs (762\) hit in a career. A player’s batting average (`BA`) is the ratio of the number of hits to the number of eligible at\-bats.
The highest career batting average in [*Major League Baseball*](https://en.wikipedia.org/w/index.php?search=Major%20League%20Baseball) history of 0\.366 was achieved by [Ty Cobb](https://en.wikipedia.org/w/index.php?search=Ty%20Cobb)—season averages above 0\.300 are impressive.
Finally, runs batted in (`RBI`) is the number of runners (including the batter in the case of a home run) that score during that batter’s at\-bat.
[Hank Aaron](https://en.wikipedia.org/w/index.php?search=Hank%20Aaron) has the record for most career RBIs with 2,297\.
```
manny %>%
summarize(
span = paste(min(yearID), max(yearID), sep = "-"),
num_years = n_distinct(yearID),
num_teams = n_distinct(teamID),
BA = sum(H)/sum(AB),
tH = sum(H),
tHR = sum(HR),
tRBI = sum(RBI)
)
```
```
span num_years num_teams BA tH tHR tRBI
1 1993-2011 19 5 0.312 2574 555 1831
```
Notice how we have used the `paste()` function to combine results from multiple variables into a new variable, and how we have used the `n_distinct()` function to count the number of distinct rows.
In his 19\-year career, Ramirez hit 555 home runs, which puts him in the top 20 among all Major League players.
However, we also see that Ramirez played for five teams during his career.
Did he perform equally well for each of them?
Breaking his statistics down by team, or by league, is as easy as adding an appropriate `group_by()` command.
```
manny %>%
group_by(teamID) %>%
summarize(
span = paste(min(yearID), max(yearID), sep = "-"),
num_years = n_distinct(yearID),
num_teams = n_distinct(teamID),
BA = sum(H)/sum(AB),
tH = sum(H),
tHR = sum(HR),
tRBI = sum(RBI)
) %>%
arrange(span)
```
```
# A tibble: 5 × 8
teamID span num_years num_teams BA tH tHR tRBI
<fct> <chr> <int> <int> <dbl> <int> <int> <int>
1 CLE 1993-2000 8 1 0.313 1086 236 804
2 BOS 2001-2008 8 1 0.312 1232 274 868
3 LAN 2008-2010 3 1 0.322 237 44 156
4 CHA 2010-2010 1 1 0.261 18 1 2
5 TBA 2011-2011 1 1 0.0588 1 0 1
```
While Ramirez was very productive for Cleveland, Boston, and the [*Los Angeles Dodgers*](https://en.wikipedia.org/w/index.php?search=Los%20Angeles%20Dodgers), his brief tours with the [*Chicago White Sox*](https://en.wikipedia.org/w/index.php?search=Chicago%20White%20Sox) and [*Tampa Bay Rays*](https://en.wikipedia.org/w/index.php?search=Tampa%20Bay%20Rays) were less than stellar.
In the pipeline below, we can see that Ramirez spent the bulk of his career in the [*American League*](https://en.wikipedia.org/w/index.php?search=American%20League).
```
manny %>%
group_by(lgID) %>%
summarize(
span = paste(min(yearID), max(yearID), sep = "-"),
num_years = n_distinct(yearID),
num_teams = n_distinct(teamID),
BA = sum(H)/sum(AB),
tH = sum(H),
tHR = sum(HR),
tRBI = sum(RBI)
) %>%
arrange(span)
```
```
# A tibble: 2 × 8
lgID span num_years num_teams BA tH tHR tRBI
<fct> <chr> <int> <int> <dbl> <int> <int> <int>
1 AL 1993-2011 18 4 0.311 2337 511 1675
2 NL 2008-2010 3 1 0.322 237 44 156
```
If Ramirez played in only 19 different seasons, why were there 21 rows attributed to him? Notice that in 2008, he was traded from the [*Boston Red Sox*](https://en.wikipedia.org/w/index.php?search=Boston%20Red%20Sox) to the [*Los Angeles Dodgers*](https://en.wikipedia.org/w/index.php?search=Los%20Angeles%20Dodgers), and thus played for both teams. Similarly, in 2010 he played for both the Dodgers and the [*Chicago White Sox*](https://en.wikipedia.org/w/index.php?search=Chicago%20White%20Sox).
When summarizing data, it is critically important to understand exactly how the rows of your data frame are organized.
To see what can go wrong here, suppose we were interested in tabulating the number of seasons in which Ramirez hit at least 30 home runs. The simplest solution is:
```
manny %>%
filter(HR >= 30) %>%
nrow()
```
```
[1] 11
```
But this answer is wrong, because in 2008, Ramirez hit 20 home runs for Boston before being traded and then 17 more for the Dodgers afterwards. Neither of those rows was counted, since they were *both* filtered out. Thus, the year 2008 does not appear among the 11 that we counted in the previous pipeline. Recall that each row in the `manny` data frame corresponds to one stint with one team in one year. On the other hand, the question asks us to consider each year, *regardless of team*. In order to get the right answer, we have to aggregate the rows by year, summing the home runs across stints with different teams. Thus, the correct solution is:
```
manny %>%
group_by(yearID) %>%
summarize(tHR = sum(HR)) %>%
filter(tHR >= 30) %>%
nrow()
```
```
[1] 12
```
Note that the `filter()` operation is applied to `tHR`, the total number of home runs in a season, and not `HR`, the number of home runs in a single stint for a single team in a single season. (This distinction between filtering the rows of the original data versus the rows of the aggregated results will appear again in Chapter [15](ch-sql.html#ch:sql).)
We began this example by filtering the `Batting` table for the player with `playerID` equal to `ramirma02`.
How did we know to use this identifier?
This player ID is known as a [*key*](https://en.wikipedia.org/w/index.php?search=key), and in fact, `playerID` is the [*primary key*](https://en.wikipedia.org/w/index.php?search=primary%20key) defined in the `Master` table.
That is, every row in the `Master` table is uniquely identified by the value of `playerID`.
There is exactly one row in that table for which `playerID` is equal to `ramirma02`.
But how did we know that this ID corresponds to [Manny Ramirez](https://en.wikipedia.org/w/index.php?search=Manny%20Ramirez)? We can search the `Master` table.
The data in this table include characteristics about [Manny Ramirez](https://en.wikipedia.org/w/index.php?search=Manny%20Ramirez) that do not change across multiple seasons (with the possible exception of his weight).
```
Master %>%
filter(nameLast == "Ramirez" & nameFirst == "Manny")
```
```
playerID birthYear birthMonth birthDay birthCountry birthState
1 ramirma02 1972 5 30 D.R. Distrito Nacional
birthCity deathYear deathMonth deathDay deathCountry deathState
1 Santo Domingo NA NA NA <NA> <NA>
deathCity nameFirst nameLast nameGiven weight height bats throws
1 <NA> Manny Ramirez Manuel Aristides 225 72 R R
debut finalGame retroID bbrefID deathDate birthDate
1 1993-09-02 2011-04-06 ramim002 ramirma02 <NA> 1972-05-30
```
The `playerID` column forms a primary key in the `Master` table, but it does not in the `Batting` table, since as we saw previously, there were 21 rows with that `playerID`. In the `Batting` table, the `playerID` column is known as a [*foreign key*](https://en.wikipedia.org/w/index.php?search=foreign%20key), in that it references a primary key in another table. For our purposes, the presence of this column in both tables allows us to link them together. This way, we can combine data from the `Batting` table with data in the `Master` table. We do this with `inner_join()` by specifying the two tables that we want to join, and the corresponding columns in each table that provide the link. Thus, if we want to display Ramirez’s name in our previous result, as well as his age, we must join the `Batting` and `Master` tables together.
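A quick way to convince yourself of this distinction, sketched here with the tables already loaded from **Lahman**, is to count how many times each `playerID` value appears in each table: a primary key value appears in exactly one row, while a foreign key value may repeat.

```
# Number of playerID values that appear more than once in each table.
# For `Master` this should be 0 (primary key); for `Batting` it is large,
# since many players have multiple stints and seasons (foreign key).
Master %>%
  count(playerID) %>%
  filter(n > 1) %>%
  nrow()

Batting %>%
  count(playerID) %>%
  filter(n > 1) %>%
  nrow()
```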
Always specify the `by` argument that defines the join condition. Don’t rely on the defaults.
```
Batting %>%
filter(playerID == "ramirma02") %>%
inner_join(Master, by = c("playerID" = "playerID")) %>%
group_by(yearID) %>%
summarize(
Age = max(yearID - birthYear),
num_teams = n_distinct(teamID),
BA = sum(H)/sum(AB),
tH = sum(H),
tHR = sum(HR),
tRBI = sum(RBI)
) %>%
arrange(yearID)
```
```
# A tibble: 19 × 7
yearID Age num_teams BA tH tHR tRBI
<int> <int> <int> <dbl> <int> <int> <int>
1 1993 21 1 0.170 9 2 5
2 1994 22 1 0.269 78 17 60
3 1995 23 1 0.308 149 31 107
4 1996 24 1 0.309 170 33 112
5 1997 25 1 0.328 184 26 88
6 1998 26 1 0.294 168 45 145
7 1999 27 1 0.333 174 44 165
8 2000 28 1 0.351 154 38 122
9 2001 29 1 0.306 162 41 125
10 2002 30 1 0.349 152 33 107
11 2003 31 1 0.325 185 37 104
12 2004 32 1 0.308 175 43 130
13 2005 33 1 0.292 162 45 144
14 2006 34 1 0.321 144 35 102
15 2007 35 1 0.296 143 20 88
16 2008 36 2 0.332 183 37 121
17 2009 37 1 0.290 102 19 63
18 2010 38 2 0.298 79 9 42
19 2011 39 1 0.0588 1 0 1
```
Notice that even though Ramirez’s age is a constant within each season, we have to use a vector operation (e.g., `max()` or `first()`) in order to reduce any potential vector to a single number.
Which season was Ramirez’s best as a hitter? One relatively simple measurement of batting prowess is OPS, or [*On\-Base Plus Slugging Percentage*](https://en.wikipedia.org/w/index.php?search=On-Base%20Plus%20Slugging%20Percentage), which is the simple sum of two other statistics: [*On\-Base Percentage*](https://en.wikipedia.org/w/index.php?search=On-Base%20Percentage) (OBP) and [*Slugging Percentage*](https://en.wikipedia.org/w/index.php?search=Slugging%20Percentage) (SLG). The former basically measures the proportion of time that a batter reaches base safely, whether it comes via a hit (`H`), a base on balls (`BB`), or from being hit by the pitch (`HBP`). The latter measures the average number of bases advanced per at\-bat (`AB`), where a single is worth one base, a double (`X2B`) is worth two, a triple (`X3B`) is worth three, and a home run (`HR`) is worth four. (Note that every hit is exactly one of a single, double, triple, or home run.)
Let’s add these statistics to our results and use it to rank the seasons.
```
manny_by_season <- Batting %>%
filter(playerID == "ramirma02") %>%
inner_join(Master, by = c("playerID" = "playerID")) %>%
group_by(yearID) %>%
summarize(
Age = max(yearID - birthYear),
num_teams = n_distinct(teamID),
BA = sum(H)/sum(AB),
tH = sum(H),
tHR = sum(HR),
tRBI = sum(RBI),
OBP = sum(H + BB + HBP) / sum(AB + BB + SF + HBP),
SLG = sum(H + X2B + 2 * X3B + 3 * HR) / sum(AB)
) %>%
mutate(OPS = OBP + SLG) %>%
arrange(desc(OPS))
manny_by_season
```
```
# A tibble: 19 × 10
yearID Age num_teams BA tH tHR tRBI OBP SLG OPS
<int> <int> <int> <dbl> <int> <int> <int> <dbl> <dbl> <dbl>
1 2000 28 1 0.351 154 38 122 0.457 0.697 1.15
2 1999 27 1 0.333 174 44 165 0.442 0.663 1.11
3 2002 30 1 0.349 152 33 107 0.450 0.647 1.10
4 2006 34 1 0.321 144 35 102 0.439 0.619 1.06
5 2008 36 2 0.332 183 37 121 0.430 0.601 1.03
6 2003 31 1 0.325 185 37 104 0.427 0.587 1.01
7 2001 29 1 0.306 162 41 125 0.405 0.609 1.01
8 2004 32 1 0.308 175 43 130 0.397 0.613 1.01
9 2005 33 1 0.292 162 45 144 0.388 0.594 0.982
10 1996 24 1 0.309 170 33 112 0.399 0.582 0.981
11 1998 26 1 0.294 168 45 145 0.377 0.599 0.976
12 1995 23 1 0.308 149 31 107 0.402 0.558 0.960
13 1997 25 1 0.328 184 26 88 0.415 0.538 0.953
14 2009 37 1 0.290 102 19 63 0.418 0.531 0.949
15 2007 35 1 0.296 143 20 88 0.388 0.493 0.881
16 1994 22 1 0.269 78 17 60 0.357 0.521 0.878
17 2010 38 2 0.298 79 9 42 0.409 0.460 0.870
18 1993 21 1 0.170 9 2 5 0.2 0.302 0.502
19 2011 39 1 0.0588 1 0 1 0.0588 0.0588 0.118
```
We see that Ramirez’s OPS was highest in 2000\. But 2000 was the height of the [*steroid era*](https://en.wikipedia.org/w/index.php?search=steroid%20era), when many sluggers were putting up tremendous offensive numbers. As data scientists, we know that it would be more instructive to put Ramirez’s OPS in context by comparing it to the league average OPS in each season—the resulting ratio is often called [*OPS\+*](https://en.wikipedia.org/w/index.php?search=OPS+). To do this, we will need to compute those averages. Because there is missing data in some of these columns in some of these years, we need to invoke the `na.rm` argument to ignore that data.
```
mlb <- Batting %>%
filter(yearID %in% 1993:2011) %>%
group_by(yearID) %>%
summarize(
lg_OBP = sum(H + BB + HBP, na.rm = TRUE) /
sum(AB + BB + SF + HBP, na.rm = TRUE),
lg_SLG = sum(H + X2B + 2*X3B + 3*HR, na.rm = TRUE) /
sum(AB, na.rm = TRUE)
) %>%
mutate(lg_OPS = lg_OBP + lg_SLG)
```
Next, we need to match these league average OPS values to the corresponding entries for Ramirez. We can do this by joining these tables together, and computing the ratio of Ramirez’s OPS to that of the league average.
```
manny_ratio <- manny_by_season %>%
inner_join(mlb, by = c("yearID" = "yearID")) %>%
mutate(OPS_plus = OPS / lg_OPS) %>%
select(yearID, Age, OPS, lg_OPS, OPS_plus) %>%
arrange(desc(OPS_plus))
manny_ratio
```
```
# A tibble: 19 × 5
yearID Age OPS lg_OPS OPS_plus
<int> <int> <dbl> <dbl> <dbl>
1 2000 28 1.15 0.782 1.48
2 2002 30 1.10 0.748 1.47
3 1999 27 1.11 0.778 1.42
4 2006 34 1.06 0.768 1.38
5 2008 36 1.03 0.749 1.38
6 2003 31 1.01 0.755 1.34
7 2001 29 1.01 0.759 1.34
8 2004 32 1.01 0.763 1.32
9 2005 33 0.982 0.749 1.31
10 1998 26 0.976 0.755 1.29
11 1996 24 0.981 0.767 1.28
12 1995 23 0.960 0.755 1.27
13 2009 37 0.949 0.751 1.26
14 1997 25 0.953 0.756 1.26
15 2010 38 0.870 0.728 1.19
16 2007 35 0.881 0.758 1.16
17 1994 22 0.878 0.763 1.15
18 1993 21 0.502 0.736 0.682
19 2011 39 0.118 0.720 0.163
```
In this case, 2000 still ranks as Ramirez’s best season relative to his peers, but notice that his 1999 season has fallen from 2nd to 3rd. Since by definition a league average batter has an OPS\+ of 1,
Ramirez posted 17 consecutive seasons with an OPS that was at least 15% better than the average across the major leagues—a truly impressive feat.
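That claim is straightforward to check from `manny_ratio` itself; here is a minimal sketch.

```
# Seasons in which Ramirez's OPS was at least 15% above the league average.
manny_ratio %>%
  filter(OPS_plus >= 1.15) %>%
  summarize(
    num_seasons = n(),
    first_year = min(yearID),
    last_year = max(yearID)
  )
```

If the qualifying seasons run from 1994 through 2010, there are 17 of them and they are consecutive, matching the claim above.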
Finally, not all joins are the same.
An `inner_join()` requires corresponding entries in *both* tables.
Conversely, a `left_join()` returns at least as many rows as there are in the first table, regardless of whether there are matches in the second table.
An `inner_join()` is bidirectional, whereas in a `left_join()`, the order in which you specify the tables matters.
Consider the career of [Cal Ripken](https://en.wikipedia.org/w/index.php?search=Cal%20Ripken), who played in 21 seasons from 1981 to 2001\. His career overlapped with Ramirez’s in the nine seasons from 1993 to 2001, so for those, the league averages we computed before are useful.
```
ripken <- Batting %>%
filter(playerID == "ripkeca01")
ripken %>%
inner_join(mlb, by = c("yearID" = "yearID")) %>%
nrow()
```
```
[1] 9
```
```
# same
mlb %>%
inner_join(ripken, by = c("yearID" = "yearID")) %>%
nrow()
```
```
[1] 9
```
For Ripken’s seasons that fall outside the years covered by `mlb` (i.e., before Ramirez’s career began in 1993), `NA`’s will be returned for the league averages.
```
ripken %>%
left_join(mlb, by = c("yearID" = "yearID")) %>%
select(yearID, playerID, lg_OPS) %>%
head(3)
```
```
yearID playerID lg_OPS
1 1981 ripkeca01 NA
2 1982 ripkeca01 NA
3 1983 ripkeca01 NA
```
Conversely, by reversing the order of the tables in the join, we return the 19 seasons for which we have already computed the league averages, regardless of whether there is a match for Ripken (results not displayed).
```
mlb %>%
left_join(ripken, by = c("yearID" = "yearID")) %>%
select(yearID, playerID, lg_OPS)
```
5\.4 Further resources
----------------------
[Sean Lahman](https://en.wikipedia.org/w/index.php?search=Sean%20Lahman) has long curated his baseball data set, which feeds the popular website [baseball\-reference.com](http://www.baseball-reference.com). [Michael Friendly](https://en.wikipedia.org/w/index.php?search=Michael%20Friendly) maintains the **Lahman** **R** package (Friendly et al. 2021\). For the baseball enthusiast, [*Cleveland Indians*](https://en.wikipedia.org/w/index.php?search=Cleveland%20Indians) analyst [Max Marchi](https://en.wikipedia.org/w/index.php?search=Max%20Marchi) and [Jim Albert](https://en.wikipedia.org/w/index.php?search=Jim%20Albert) have written an excellent book on analyzing baseball data in **R** (Marchi and Albert 2013\). A second edition updates the code for the **tidyverse** (James Albert, Marchi, and Baumer 2018\). Albert has also written a book describing how baseball can be used as a motivating example for teaching statistics (Jim Albert 2003\).
5\.5 Exercises
--------------
**Problem 1 (Easy)**: Consider the following data frames with information about U.S. states from 1977\.
```
statenames <- tibble(names = state.name, twoletter = state.abb)
glimpse(statenames)
```
```
Rows: 50
Columns: 2
$ names <chr> "Alabama", "Alaska", "Arizona", "Arkansas", "California"…
$ twoletter <chr> "AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DE", "FL", "G…
```
```
statedata <- tibble(
names = state.name,
income = state.x77[, 2],
illiteracy = state.x77[, 3]
)
glimpse(statedata)
```
```
Rows: 50
Columns: 3
$ names <chr> "Alabama", "Alaska", "Arizona", "Arkansas", "California…
$ income <dbl> 3624, 6315, 4530, 3378, 5114, 4884, 5348, 4809, 4815, 4…
$ illiteracy <dbl> 2.1, 1.5, 1.8, 1.9, 1.1, 0.7, 1.1, 0.9, 1.3, 2.0, 1.9, …
```
Create a scatterplot of illiteracy (as a percent of population) and per capita income (in U.S. dollars) with points plus labels of the two letter state abbreviations. Add a smoother. Use the `ggrepel` package to offset the names that overlap. What pattern do you observe? Are there any outlying observations?
**Problem 2 (Medium)**: Use the `Batting`, `Pitching`, and `Master` tables in the `Lahman` package to answer the following questions.
1. Name every player in baseball history who has accumulated at least 300 home runs (`HR`) and at least 300 stolen bases (`SB`). You can find the first and last name of the player in the `Master` data frame. Join this to your result along with the total home runs and total bases stolen for each of these elite players.
2. Similarly, name every pitcher in baseball history who has accumulated at least 300 wins (`W`) and at least 3,000 strikeouts (`SO`).
3. Identify the name and year of every player who has hit at least 50 home runs in a single season. Which player had the lowest batting average in that season?
**Problem 3 (Medium)**: Use the `nycflights13` package and the `flights` and `planes` tables to answer the following questions:
1. How many planes have a missing date of manufacture?
2. What are the five most common manufacturers?
3. Has the distribution of manufacturer changed over time as reflected by the airplanes flying from NYC in 2013? (Hint: you may need to use `case_when()` to recode the manufacturer name and collapse rare vendors into a category called `Other`.)
**Problem 4 (Medium)**: Use the `nycflights13` package and the `flights` and `planes` tables to answer the following questions:
1. What is the oldest plane (specified by the `tailnum` variable) that flew from New York City airports in 2013?
2. How many airplanes that flew from New York City are included in the `planes` table?
**Problem 5 (Medium)**: The [Relative Age Effect](https://en.wikipedia.org/wiki/Relative_age_effect) is an attempt to explain anomalies in the distribution of birth month among athletes. Briefly, the idea is that children born just after the age cut\-off to enroll in school will be as much as 11 months older than their fellow athletes, which is enough of a disparity to give them an advantage. That advantage will then be compounded over the years, resulting in notably more professional athletes born in these months.
1. Display the distribution of birth months of baseball players who batted during the decade of the 2000s.
2. How are they distributed over the calendar year? Does this support the notion of a relative age effect? Use the `Births78` data set from the `mosaicData` package as a reference.
**Problem 6 (Hard)**: Use the `fec12` package to download the Federal Election Commission data for 2012\. Re\-create Figures 2\.1 and 2\.2 from the text using `ggplot2`.
5\.6 Supplementary exercises
----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-join.html\#join\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-join.html#join-online-exercises)
**Problem 1 (Easy)**: What type of join operation is depicted below?
**Problem 2 (Easy)**: What type of join operation is depicted below?
**Problem 3 (Easy)**: What type of join operation is depicted below?
**Problem 4 (Hard)**: Use the FEC data to re\-create Figure 2\.8\.
---
| Data Science |
mdsr-book.github.io | https://mdsr-book.github.io/mdsr2e/ch-dataII.html |
Chapter 6 Tidy data
===================
In this chapter, we will continue to develop data wrangling skills. In particular, we will discuss [*tidy data*](https://en.wikipedia.org/w/index.php?search=tidy%20data), common file formats, and techniques for scraping and cleaning data, especially dates. Together with the material from Chapters [4](ch-dataI.html#ch:dataI) and [5](ch-join.html#ch:join), these skills will provide facility with wrangling data that is foundational for data science.
6\.1 Tidy data
--------------
### 6\.1\.1 Motivation
[*Gapminder*](https://en.wikipedia.org/w/index.php?search=Gapminder) (Rosling, Rönnlund, and Rosling 2005\) is the brainchild of the late Swedish physician and public health researcher [Hans Rosling](https://en.wikipedia.org/w/index.php?search=Hans%20Rosling).
Gapminder contains data about countries over time for a variety of different variables such as the prevalence of [*HIV*](https://en.wikipedia.org/w/index.php?search=HIV) (human immunodeficiency virus) among adults aged 15 to 49 and other health and economic indicators. These data are stored in [*Google Sheets*](https://en.wikipedia.org/w/index.php?search=Google%20Sheets), or one can download them as [*Microsoft Excel*](https://en.wikipedia.org/w/index.php?search=Microsoft%20Excel) workbooks. The typical presentation of a small subset of such data is shown below, where we have used the **googlesheets4** package to pull these data directly into **R**. (See Section [6\.2\.4](ch-dataII.html#sec:nest) for a description of the `unnest()` function.)
```
library(tidyverse)
library(mdsr)
library(googlesheets4)
hiv_key <- "1kWH_xdJDM4SMfT_Kzpkk-1yuxWChfurZuWYjfmv51EA"
hiv <- read_sheet(hiv_key) %>%
rename(Country = 1) %>%
filter(
Country %in% c("United States", "France", "South Africa")
) %>%
select(Country, `1979`, `1989`, `1999`, `2009`) %>%
unnest(cols = c(`2009`)) %>%
mutate(across(matches("[0-9]"), as.double))
hiv
```
```
# A tibble: 3 × 5
Country `1979` `1989` `1999` `2009`
<chr> <dbl> <dbl> <dbl> <dbl>
1 France NA NA 0.3 0.4
2 South Africa NA NA 14.8 17.2
3 United States 0.0318 NA 0.5 0.6
```
The data set has the form of a two\-dimensional array where each of the \\(n\=3\\) rows represents a country and each of the \\(p\=4\\) columns is a year.
Each entry represents the percentage of adults aged 15 to 49 living with HIV in the \\(i^{th}\\) country in the \\(j^{th}\\) year.
This presentation of the data has some advantages.
First, it is possible (with a big enough display) to *see* all of the data.
One can quickly follow the trend over time for a particular country, and one can also estimate quite easily the percentage of data that is missing (e.g., `NA`).
If visual inspection is the primary analytical technique, this [*spreadsheet*](https://en.wikipedia.org/w/index.php?search=spreadsheet)\-style presentation can be convenient.
Alternatively, consider this presentation of those same data.
```
hiv %>%
pivot_longer(-Country, names_to = "Year", values_to = "hiv_rate")
```
```
# A tibble: 12 × 3
Country Year hiv_rate
<chr> <chr> <dbl>
1 France 1979 NA
2 France 1989 NA
3 France 1999 0.3
4 France 2009 0.4
5 South Africa 1979 NA
6 South Africa 1989 NA
7 South Africa 1999 14.8
8 South Africa 2009 17.2
9 United States 1979 0.0318
10 United States 1989 NA
11 United States 1999 0.5
12 United States 2009 0.6
```
While our data can still be represented by a two\-dimensional array, it now has \\(np\=12\\) rows and just three columns. Visual inspection of the data is now more difficult, since our data are long and very narrow—the aspect ratio is not similar to that of our screen.
It turns out that there are substantive reasons to prefer the long (or tall), narrow version of these data. With multiple tables (see Chapter [15](ch-sql.html#ch:sql)), it is a more efficient way for the computer to store and retrieve the data. It is more convenient for the purpose of data analysis. And it is more scalable, in that the addition of a second variable simply contributes another column, whereas to add another variable to the spreadsheet presentation would require a confusing three\-dimensional view, multiple tabs in the spreadsheet, or worse, [*merged cells*](https://en.wikipedia.org/w/index.php?search=merged%20cells).
These gains come at a cost: we have relinquished our ability to *see all the data at once*. When data sets are small, being able to see them all at once can be useful, and even comforting. But in this era of big data, a quest to see all the data at once in a spreadsheet layout is a [*fool’s errand*](https://en.wikipedia.org/w/index.php?search=fool's%20errand). Learning to manage data via programming frees us from the [*click\-and\-drag*](https://en.wikipedia.org/w/index.php?search=click-and-drag) paradigm popularized by spreadsheet applications, allows us to work with data of arbitrary size, and reduces errors. Recording our data management operations in code also makes them reproducible (see Appendix [D](ch-reproduce.html#ch:reproduce))—an increasingly necessary trait in this era of collaboration.
It enables us to fully separate the raw data from our analysis, which is difficult to achieve using a spreadsheet.
Always keep your raw data and your analysis in separate files. Store the uncorrected data file (with errors and problems) and make corrections with a script file (see Appendix [D](ch-reproduce.html#ch:reproduce)) that transforms the raw data into the data that will actually be analyzed. This process will maintain the provenance of your data and allow analyses to be updated with new data without having to start data wrangling from scratch.
The long, narrow format for the [*Gapminder*](https://en.wikipedia.org/w/index.php?search=Gapminder) data that we have outlined above is called [*tidy data*](https://en.wikipedia.org/w/index.php?search=tidy%20data) (H. Wickham 2014\). In what follows, we will further expand upon this notion and develop more sophisticated techniques for wrangling data.
### 6\.1\.2 What are tidy data?
Data can be as simple as a column of numbers in a spreadsheet file or as complex as the electronic medical records collected by a hospital. A newcomer to working with data may expect each source of data to be organized in a unique way and to require unique techniques. The expert, however, has learned to operate with a small set of standard tools. As you’ll see, each of the standard tools performs a comparatively simple task. Combining those simple tasks in appropriate ways is the key to dealing with complex data.
One reason the individual tools can be simple is that each tool gets applied to data arranged in a simple but precisely defined pattern called [*tidy data*](https://en.wikipedia.org/w/index.php?search=tidy%20data).
Tidy data exists in systematically defined [*data tables*](https://en.wikipedia.org/w/index.php?search=data%20tables) (e.g., the rectangular arrays of data seen previously).
Note that not all data tables are tidy.
To illustrate, Table [6\.1](ch-dataII.html#tab:names-short1) shows a handful of entries from a large [*United States Social Security Administration*](https://en.wikipedia.org/w/index.php?search=United%20States%20Social%20Security%20Administration) tabulation of names given to babies.
In particular, the table shows how many babies of each sex were given each name in each year.
Table 6\.1: A data table showing how many babies were given each name in each year in the United States, for a few names.
| year | sex | name | n |
| --- | --- | --- | --- |
| 1999 | M | Kavon | 104 |
| 1984 | F | Somaly | 6 |
| 2017 | F | Dnylah | 8 |
| 1918 | F | Eron | 6 |
| 1992 | F | Arleene | 5 |
| 1977 | F | Alissia | 5 |
| 1919 | F | Bular | 10 |
Table [6\.1](ch-dataII.html#tab:names-short1) shows that there were 104 boys named Kavon born in the U.S. in 1999 and 6 girls named Somaly born in 1984\.
As a whole, the `babynames` data table covers the years 1880 through 2017 and includes a total of 348,120,517 individuals, somewhat larger than the current population of the U.S.
The data in Table [6\.1](ch-dataII.html#tab:names-short1) are *tidy* because they are organized according to two simple rules.
1. The rows, called [*cases*](https://en.wikipedia.org/w/index.php?search=cases) or observations, each refer to a specific, unique, and similar sort of thing, e.g., girls named Somaly in 1984\.
2. The columns, called variables, each have the same sort of value recorded for each row. For instance, `n` gives the number of babies for each case; `sex` tells which gender was assigned at birth.
When data are in tidy form, it is relatively straightforward to transform the data into arrangements that are more useful for answering interesting questions. For instance, you might wish to know which were the most popular baby names over all the years. Even though Table [6\.1](ch-dataII.html#tab:names-short1) contains the popularity information implicitly, we need to rearrange these data by adding up the counts for a name across all the years before the popularity becomes obvious, as in Table [6\.2](ch-dataII.html#tab:names-popular1).
```
library(babynames)  # provides the `babynames` table used below
popular_names <- babynames %>%
  group_by(sex, name) %>%
  summarize(total_births = sum(n)) %>%
  arrange(desc(total_births))
```
Table 6\.2: The most popular baby names across all years.
| sex | name | total\_births |
| --- | --- | --- |
| M | James | 5150472 |
| M | John | 5115466 |
| M | Robert | 4814815 |
| M | Michael | 4350824 |
| F | Mary | 4123200 |
| M | William | 4102604 |
| M | David | 3611329 |
| M | Joseph | 2603445 |
| M | Richard | 2563082 |
| M | Charles | 2386048 |
The process of transforming information that is implicit in a data table into another data table that gives the information explicitly is called [*data wrangling*](https://en.wikipedia.org/w/index.php?search=data%20wrangling).
The wrangling itself is accomplished by using [*data verbs*](https://en.wikipedia.org/w/index.php?search=data%20verbs) that take a tidy data table and transform it into another tidy data table in a different form.
In Chapters [4](ch-dataI.html#ch:dataI) and [5](ch-join.html#ch:join), you were introduced to several [*data verbs*](https://en.wikipedia.org/w/index.php?search=data%20verbs).
Figure 6\.1: Ward and precinct votes cast in the 2013 Minneapolis mayoral election.
Figure [6\.1](ch-dataII.html#fig:minn-vote-1) displays results from the [*Minneapolis*](https://en.wikipedia.org/w/index.php?search=Minneapolis) mayoral election.
Unlike `babynames`, it is not in tidy form, though the display is attractive and neatly laid out.
There are helpful labels and summaries that make it easy for a person to read and draw conclusions.
(For instance, Ward 1 had a higher voter turnout than Ward 2, and both wards were lower than the city total.)
However, being neat is not what makes data *tidy*. Figure [6\.1](ch-dataII.html#fig:minn-vote-1) violates the first rule for tidy data.
* **Rule 1**: The rows, called [*cases*](https://en.wikipedia.org/w/index.php?search=cases), each must represent the same underlying attribute, that is, the same kind of thing.
That’s not true in Figure [6\.1](ch-dataII.html#fig:minn-vote-1).
For most of the table, the rows represent a single precinct.
But other rows give ward or city\-wide totals.
The first two rows are captions describing the data, not cases.
* **Rule 2**: Each column is a variable containing the same type of value for each case.
That’s mostly true in Figure [6\.1](ch-dataII.html#fig:minn-vote-1), but the tidy pattern is interrupted by labels that are not variables. For instance, the first two cells in row 15 are the label “Ward 1 Subtotal,” which is different from the ward/precinct identifiers that are the values in most of the first column.
Conforming to the rules for tidy data simplifies summarizing and analyzing data. For instance, in the tidy `babynames` table, it is easy (for a computer) to find the total number of babies: just add up all the numbers in the `n` variable. It is similarly easy to find the number of cases: just count the rows. And if you want to know the total number of Ahmeds or Sherinas across the years, there is an easy way to do that.
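For instance, a minimal sketch of that last computation, assuming the `babynames` table is loaded as above:

```
# Total number of babies given each of these names, across all years and sexes.
babynames %>%
  filter(name %in% c("Ahmed", "Sherina")) %>%
  group_by(name) %>%
  summarize(total = sum(n))
```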
In contrast, it would be more difficult in the [*Minneapolis*](https://en.wikipedia.org/w/index.php?search=Minneapolis) election data to find, say, the total number of ballots cast. If you take the seemingly obvious approach and add up the numbers in column I of Figure [6\.1](ch-dataII.html#fig:minn-vote-1) (labeled “Total Ballots Cast”), the result will be *three times* the true number of ballots, because some of the rows contain summaries, not cases.
Indeed, if you wanted to do calculations based on the Minneapolis election data, you would be far better off to put it in a tidy form.
Table 6\.3: A selection from the Minneapolis election data in tidy form.
| ward | precinct | registered | voters | absentee | total\_turnout |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | 28 | 492 | 27 | 0\.272 |
| 1 | 4 | 29 | 768 | 26 | 0\.366 |
| 1 | 7 | 47 | 291 | 8 | 0\.158 |
| 2 | 1 | 63 | 1011 | 39 | 0\.364 |
| 2 | 4 | 53 | 117 | 3 | 0\.073 |
| 2 | 7 | 39 | 138 | 7 | 0\.138 |
| 2 | 10 | 87 | 196 | 5 | 0\.069 |
| 3 | 3 | 71 | 893 | 101 | 0\.374 |
| 3 | 6 | 102 | 927 | 71 | 0\.353 |
The tidy form in Table [6\.3](ch-dataII.html#tab:vote-summary) is, admittedly, not as attractive as the form published by the [*Minneapolis*](https://en.wikipedia.org/w/index.php?search=Minneapolis) government.
But it is much easier to use for the purpose of generating summaries and analyses.
Once data are in a tidy form, you can present them in ways that can be more effective than a formatted spreadsheet. For example, the data graphic in Figure [6\.2](ch-dataII.html#fig:ward-turnouts) presents the turnout within each precinct for each ward in a way that makes it easy to see how much variation there is within and among wards and precincts.
Figure 6\.2: A graphical depiction of voter turnout by precinct in the different wards.
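A plot along these lines takes only a few lines of **ggplot2** once the data are tidy. The sketch below assumes the precinct\-level data from Table [6\.3](ch-dataII.html#tab:vote-summary) are stored in a data frame called `vote_summary` (a hypothetical name used here only for illustration).

```
# Turnout by precinct, grouped by ward, using the hypothetical `vote_summary`
# data frame with the columns shown in Table 6.3.
ggplot(vote_summary, aes(x = factor(ward), y = total_turnout)) +
  geom_point() +
  labs(x = "Ward", y = "Voter turnout")
```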
The tidy format
also makes it easier to bring together data from different sources. For instance, to explain the variation in voter turnout, you might want to consider variables such as party affiliation, age, income, etc.
Such data might be available on a ward\-by\-ward basis from other records, such as public voter registration logs and census records.
Tidy data can be wrangled into forms that can be connected to one another (i.e., using the `inner_join()` function from Chapter [5](ch-join.html#ch:join)).
This task would be difficult if you had to deal with an idiosyncratic format for each different source of data.
#### 6\.1\.2\.1 Variables
In data science, the word [*variable*](https://en.wikipedia.org/w/index.php?search=variable) has a different meaning than in mathematics.
In [*algebra*](https://en.wikipedia.org/w/index.php?search=algebra), a variable is an unknown quantity.
In data, a variable is known—it has been measured. Rather, the word [*variable*](https://en.wikipedia.org/w/index.php?search=variable) refers to a specific quantity or quality that can vary from case to case.
There are two major types of variables:
* **Categorical variables**: record type or category and often take the form of a word.
* **Quantitative variables**: record a numerical attribute. A quantitative variable is just what it sounds like: a number.
A [*categorical variable*](https://en.wikipedia.org/w/index.php?search=categorical%20variable) tells you into which category or group a case falls.
For instance, in the baby names data table, `sex` is a categorical variable with two levels `F` and `M`, standing for female and male.
Similarly, the `name` variable is categorical. It happens that there are 97,310 different levels for `name`, ranging from `Aaron`, `Ab`, and `Abbie` to `Zyhaire`, `Zylis`, and `Zymya`.
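The figure of 97,310 quoted above is easy to check directly; a quick sketch:

```
# Number of distinct levels of the `name` variable.
babynames %>%
  summarize(num_names = n_distinct(name))
```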
#### 6\.1\.2\.2 Cases and what they represent
As noted previously, a row of a tidy data table refers to a case.
To this point, you may have little reason to prefer the word *case* to *row*.
When working with a data table, it is important to keep in mind what a case stands for in the real world.
Sometimes the meaning is obvious.
For instance, Table [6\.4](ch-dataII.html#tab:indiv-ballots) is a tidy data table showing the ballots in the Minneapolis mayoral election in 2013\.
Each case is an individual voter’s ballot.
(The voters were directed to mark their ballot with their first choice, second choice, and third choice among the candidates.
This is part of [a procedure](http://vote.minneapolismn.gov/rcv) called [*rank choice voting*](https://en.wikipedia.org/w/index.php?search=rank%20choice%20voting).)
Table 6\.4: Individual ballots in the Minneapolis election. Each voter votes in one precinct within one ward. The ballot marks the voter’s first three choices for mayor.
| Precinct | First | Second | Third | Ward |
| --- | --- | --- | --- | --- |
| P\-04 | undervote | undervote | undervote | W\-6 |
| P\-06 | BOB FINE | MARK ANDREW | undervote | W\-10 |
| P\-02D | NEAL BAXTER | BETSY HODGES | DON SAMUELS | W\-7 |
| P\-01 | DON SAMUELS | undervote | undervote | W\-5 |
| P\-03 | CAM WINTON | DON SAMUELS | OLE SAVIOR | W\-1 |
The case in Table [6\.4](ch-dataII.html#tab:indiv-ballots) is a different sort of thing than the case in Table [6\.3](ch-dataII.html#tab:vote-summary). In Table [6\.3](ch-dataII.html#tab:vote-summary), a case is a ward in a precinct. But in Table [6\.4](ch-dataII.html#tab:indiv-ballots), the case is an individual ballot. Similarly, in the baby names data (Table [6\.1](ch-dataII.html#tab:names-short1)), a case is a name and sex and year while in Table [6\.2](ch-dataII.html#tab:names-popular1) the case is a name and sex.
When thinking about cases, ask this question: What description would make every case unique? In the vote summary data, a precinct does not uniquely identify a case. Each individual precinct appears in several rows. But each precinct and ward combination appears once and only once. Similarly, in Table [6\.1](ch-dataII.html#tab:names-short1), `name` and `sex` do not specify a unique case. Rather, you need the combination of `name-sex-year` to identify a unique row.
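One way to confirm that a proposed combination of variables really does identify a unique case is to count the combinations and look for repeats; a minimal sketch for `babynames`:

```
# If name, sex, and year together uniquely identify each row,
# this returns zero rows.
babynames %>%
  count(name, sex, year) %>%
  filter(n > 1)
```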
#### 6\.1\.2\.3 Runners and races
Table [6\.5](ch-dataII.html#tab:race-excerpt) displays some of the results from a 10\-mile running race held each year in Washington, D.C.
Table 6\.5: An excerpt of runners’ performance over time in a 10\-mile race.
| name.yob | sex | age | year | gun |
| --- | --- | --- | --- | --- |
| jane polanek 1974 | F | 32 | 2006 | 114\.5 |
| jane poole 1948 | F | 55 | 2003 | 92\.7 |
| jane poole 1948 | F | 56 | 2004 | 87\.3 |
| jane poole 1948 | F | 57 | 2005 | 85\.0 |
| jane poole 1948 | F | 58 | 2006 | 80\.8 |
| jane poole 1948 | F | 59 | 2007 | 78\.5 |
| jane schultz 1964 | F | 35 | 1999 | 91\.4 |
| jane schultz 1964 | F | 37 | 2001 | 79\.1 |
| jane schultz 1964 | F | 38 | 2002 | 76\.8 |
| jane schultz 1964 | F | 39 | 2003 | 82\.7 |
| jane schultz 1964 | F | 40 | 2004 | 87\.9 |
| jane schultz 1964 | F | 41 | 2005 | 91\.5 |
| jane schultz 1964 | F | 42 | 2006 | 88\.4 |
| jane smith 1952 | F | 47 | 1999 | 90\.6 |
| jane smith 1952 | F | 49 | 2001 | 97\.9 |
What is the meaning of a case here? It is tempting to think that a case is a person. After all, it is people who run road races. But notice that individuals appear more than once: Jane Poole ran each year from 2003 to 2007\. (Her times improved consistently as she got older!) Jane Schultz ran in the races from 1999 to 2006, missing only the year 2000 race. This suggests that the case is a runner in one year’s race.
#### 6\.1\.2\.4 Codebooks
Data tables do not necessarily display all the variables needed to figure out what makes each row unique.
For such information, you sometimes need to look at the documentation of how the data were collected and what the variables mean.
The codebook is a document—separate from the data table—that describes various aspects of how the data were collected, what the variables mean and what the different levels of categorical variables refer to.
The word [*codebook*](https://en.wikipedia.org/w/index.php?search=codebook) comes from the days when data was encoded for the computer in ways that make it hard for a human to read.
A codebook should include information about how the data were collected and what constitutes a case.
Figure [6\.3](ch-dataII.html#fig:babynames-codebook) shows the codebook for the `HELPrct` data in the **mosaicData** package. In **R**, codebooks for data tables in packages are available from the `help()` function.
```
help(HELPrct)
```
Figure 6\.3: Part of the codebook for the `HELPrct` data table from the **mosaicData** package.
For the runners data in Table [6\.5](ch-dataII.html#tab:race-excerpt), a codebook should tell you that the meaning of the `gun` variable is the time from when the start gun went off to when the runner crosses the finish line and that the unit of measurement is *minutes*. It should also state what might be obvious: that `age` is the person’s age in years and `sex` has two levels, male and female, represented by `M` and `F`.
#### 6\.1\.2\.5 Multiple tables
It is often the case that creating a meaningful display of data involves combining data from different sources and about different kinds of things.
For instance, you might want your analysis of the runners’ performance data in Table [6\.5](ch-dataII.html#tab:race-excerpt) to include temperature and precipitation data for each year’s race.
Such weather data is likely contained in a table of daily weather measurements.
In many circumstances, there will be multiple tidy tables, each of which contains information relative to your analysis, but which has a different kind of thing as a case.
We saw in Chapter [5](ch-join.html#ch:join) how the `inner_join()` and `left_join()` functions can be used to combine multiple tables, and in Chapter [15](ch-sql.html#ch:sql) we will further develop skills for working with relational databases.
For now, keep in mind that being tidy is not about shoving everything into one table.
6\.2 Reshaping data
-------------------
Each row of a tidy data table is an individual case. It is often useful to re\-organize the same data in such a way that a case has a different meaning. This can make it easier to perform wrangling tasks such as comparisons, joins, and the inclusion of new data.
Consider the format of `BP_wide` shown in Table [6\.6](ch-dataII.html#tab:wide-example), in which each case is a research study subject and there are separate variables for the measurement of [*systolic blood pressure*](https://en.wikipedia.org/w/index.php?search=systolic%20blood%20pressure) (SBP) before and after exposure to a stressful environment.
Exactly the same data can be presented in the format of the `BP_narrow` data table (Table [6\.7](ch-dataII.html#tab:narrow-example)), where the case is an individual occasion for blood pressure measurement.
Table 6\.6: A blood pressure data table in a wide format.
| subject | before | after |
| --- | --- | --- |
| BHO | 160 | 115 |
| GWB | 120 | 135 |
| WJC | 105 | 145 |
Table 6\.7: A tidy blood pressure data table in a narrow format.
| subject | when | sbp |
| --- | --- | --- |
| BHO | before | 160 |
| GWB | before | 120 |
| WJC | before | 105 |
| BHO | after | 115 |
| GWB | after | 135 |
| WJC | after | 145 |
Each of the formats `BP_wide` and `BP_narrow` has its advantages and its disadvantages.
For example, it is easy to find the before\-and\-after change in blood pressure using `BP_wide`.
```
BP_wide %>%
mutate(change = after - before)
```
```
# A tibble: 3 × 4
subject before after change
<chr> <dbl> <dbl> <dbl>
1 BHO 160 115 -45
2 GWB 120 135 15
3 WJC 105 145 40
```
On the other hand, a narrow format is more flexible for including additional variables, for example the date of the measurement or the diastolic blood pressure as in Table [6\.8](ch-dataII.html#tab:narrow-augmented). The narrow format also makes it feasible to add in additional measurement occasions. For instance, Table [6\.8](ch-dataII.html#tab:narrow-augmented) shows several “after” measurements for subject “WJC.” (Such [*repeated measures*](https://en.wikipedia.org/w/index.php?search=repeated%20measures) are a common feature of scientific studies.)
A simple strategy allows you to get the benefits of either format: convert from wide to narrow or from narrow to wide as suits your purpose.
Table 6\.8: A data table extending the information in the previous two to include additional variables and repeated measurements. The narrow format facilitates including new cases or variables.
| subject | when | sbp | dbp | date |
| --- | --- | --- | --- | --- |
| BHO | before | 160 | 69 | 2007\-06\-19 |
| GWB | before | 120 | 54 | 1998\-04\-21 |
| BHO | before | 155 | 65 | 2005\-11\-08 |
| WJC | after | 145 | 75 | 2002\-11\-15 |
| WJC | after | NA | 65 | 2010\-03\-26 |
| WJC | after | 130 | 60 | 2013\-09\-15 |
| GWB | after | 135 | NA | 2009\-05\-08 |
| WJC | before | 105 | 60 | 1990\-08\-17 |
| BHO | after | 115 | 78 | 2017\-06\-04 |
### 6\.2\.1 Data verbs for converting wide to narrow and *vice versa*
Transforming a data table from wide to narrow is the action of the `pivot_longer()` data verb: A wide data table is the input and a narrow data table is the output. The reverse task, transforming from narrow to wide, involves the data verb `pivot_wider()`. Both functions are implemented in the **tidyr** package.
### 6\.2\.2 Pivoting wider
The `pivot_wider()` function converts a data table from narrow to wide. Carrying out this operation involves specifying some information in the arguments to the function. The `values_from` argument is the name of the variable in the narrow format that is to be divided up into multiple variables in the resulting wide format. The `names_from` argument is the name of the variable in the narrow format that identifies for each case individually which column in the wide format will receive the value.
For instance, in the narrow form of `BP_narrow` (Table [6\.7](ch-dataII.html#tab:narrow-example)) the `values_from` variable is `sbp`. In the corresponding wide form, `BP_wide` (Table [6\.6](ch-dataII.html#tab:wide-example)), the information in `sbp` will be spread between two variables: `before` and `after`. The `names_from` variable in `BP_narrow` is `when`. Note that the different categorical levels in `when` specify which variable in `BP_wide` will be the destination for the `sbp` value of each case.
Only the `names_from` and `values_from` variables are involved in the transformation from narrow to wide. Other variables in the narrow table, such as `subject` in `BP_narrow`, are used to define the cases. Thus, to translate from `BP_narrow` to `BP_wide` we would write this code:
```
BP_narrow %>%
pivot_wider(names_from = when, values_from = sbp)
```
```
# A tibble: 3 × 3
subject before after
<chr> <dbl> <dbl>
1 BHO 160 115
2 GWB 120 135
3 WJC 105 145
```
### 6\.2\.3 Pivoting longer
Now consider how to transform `BP_wide` into `BP_narrow`.
The names of the variables to be gathered together, `before` and `after`, will become the categorical levels in the narrow form.
That is, they will make up the `names_to` variable in the narrow form.
The data analyst has to invent a name for this variable. There are all sorts of sensible possibilities, for instance `before_or_after`.
In gathering `BP_wide` into `BP_narrow`, we chose the concise variable name `when`.
Similarly, a name must be specified for the variable that is to hold the values in the variables being gathered.
There are many reasonable possibilities.
It is sensible to choose a name that reflects the kind of thing those values are, in this case systolic blood pressure.
So, `sbp` is a good choice.
Finally, we need to specify which variables are to be gathered.
For instance, it hardly makes sense to gather `subject` with the other variables; it will remain as a separate variable in the narrow result.
Values in `subject` will be repeated as necessary to give each case in the narrow format its own correct value of `subject`.
In summary, to convert `BP_wide`
into `BP_narrow`, we make the following call to `pivot_longer()`.
```
BP_wide %>%
pivot_longer(-subject, names_to = "when", values_to = "sbp")
```
```
# A tibble: 6 × 3
subject when sbp
<chr> <chr> <dbl>
1 BHO before 160
2 BHO after 115
3 GWB before 120
4 GWB after 135
5 WJC before 105
6 WJC after 145
```
### 6\.2\.4 List\-columns
Consider the following simple summarization of the blood pressure data. Using the techniques developed in Section [4\.1\.4](ch-dataI.html#sec:summarize), we can compute the mean systolic blood pressure for each subject both before and after exposure.
```
BP_full %>%
group_by(subject, when) %>%
summarize(mean_sbp = mean(sbp, na.rm = TRUE))
```
```
# A tibble: 6 × 3
# Groups: subject [3]
subject when mean_sbp
<chr> <chr> <dbl>
1 BHO after 115
2 BHO before 158.
3 GWB after 135
4 GWB before 120
5 WJC after 138.
6 WJC before 105
```
But what if we want to do additional analysis on the blood pressure data? The individual observations are not retained in the summarized output. Can we create a summary of the data that still contains *all* of the observations?
One simplistic approach would be to use `paste()` with the `collapse` argument to condense the individual observations into a single character string.
```
BP_summary <- BP_full %>%
group_by(subject, when) %>%
summarize(
sbps = paste(sbp, collapse = ", "),
dbps = paste(dbp, collapse = ", ")
)
```
This can be useful for seeing the data, but you can’t do much computing on it, because the variables `sbps` and `dbps` are `character` vectors. As a result, trying to compute, say, the mean of the systolic blood pressures won’t work as you hope it might. Note that the means computed below are wrong.
```
BP_summary %>%
mutate(mean_sbp = mean(parse_number(sbps)))
```
```
# A tibble: 6 × 5
# Groups: subject [3]
subject when sbps dbps mean_sbp
<chr> <chr> <chr> <chr> <dbl>
1 BHO after 115 78 138.
2 BHO before 160, 155 69, 65 138.
3 GWB after 135 NA 128.
4 GWB before 120 54 128.
5 WJC after 145, NA, 130 75, 65, 60 125
6 WJC before 105 60 125
```
Additionally, you would have to write the code to do the summarization for every variable in your data set, which could get cumbersome.
Instead, the `nest()` function will collapse *all* of the ungrouped variables in a data frame into a `tibble` (a simple data frame).
This creates a new variable of type `list`, which by default has the name `data`. Each element of that list has the type `tibble`. Although you can’t see all of the data in the output printed here, it’s all in there. Variables in data frames that have type `list` are called [*list\-columns*](https://en.wikipedia.org/w/index.php?search=list-columns).
```
BP_nested <- BP_full %>%
group_by(subject, when) %>%
nest()
BP_nested
```
```
# A tibble: 6 × 3
# Groups: subject, when [6]
subject when data
<chr> <chr> <list>
1 BHO before <tibble [2 × 3]>
2 GWB before <tibble [1 × 3]>
3 WJC after <tibble [3 × 3]>
4 GWB after <tibble [1 × 3]>
5 WJC before <tibble [1 × 3]>
6 BHO after <tibble [1 × 3]>
```
This construction works because a data frame is just a list of vectors of the same length, and the type of those vectors is arbitrary. Thus, the `data` variable is a vector of type `list` that consists of `tibble`s. Note also that the dimensions of each tibble (items in the `data` list) can be different.
The ability to collapse a long data frame into its nested form is particularly useful in the context of model fitting, which we illustrate in Chapter [11](ch-learningI.html#ch:learningI).
While every list\-column has the type `list`, the type of the data contained within that list can be anything. Thus, while the `data` variable contains a list of tibbles, we can extract only the systolic blood pressures, and put them in their own list\-column. It’s tempting to try to `pull()` the `sbp` variable out like this:
```
BP_nested %>%
mutate(sbp_list = pull(data, sbp))
```
```
Error: Problem with `mutate()` column `sbp_list`.
ℹ `sbp_list = pull(data, sbp)`.
x no applicable method for 'pull' applied to an object of class "list"
ℹ The error occurred in group 1: subject = "BHO", when = "after".
```
The problem is that `data` is not a `tibble`.
Rather, it’s a `list` of `tibble`s. To get around this, we need to use the `map()` function, which is described in Chapter [7](ch-iteration.html#ch:iteration).
For now, it’s enough to understand that we need to apply the `pull()` function to each item in the `data` list.
The `map()` function allows us to do just that, and further, it always returns a `list`, and thus creates a new list\-column.
```
BP_nested <- BP_nested %>%
mutate(sbp_list = map(data, pull, sbp))
BP_nested
```
```
# A tibble: 6 × 4
# Groups: subject, when [6]
subject when data sbp_list
<chr> <chr> <list> <list>
1 BHO before <tibble [2 × 3]> <dbl [2]>
2 GWB before <tibble [1 × 3]> <dbl [1]>
3 WJC after <tibble [3 × 3]> <dbl [3]>
4 GWB after <tibble [1 × 3]> <dbl [1]>
5 WJC before <tibble [1 × 3]> <dbl [1]>
6 BHO after <tibble [1 × 3]> <dbl [1]>
```
Again, note that `sbp_list` is a `list`, with each item in the list being a vector of type `double`.
These vectors need *not* have the same length!
We can verify this by isolating the `sbp_list` variable with the `pluck()` function.
```
BP_nested %>%
pluck("sbp_list")
```
```
[[1]]
[1] 160 155
[[2]]
[1] 120
[[3]]
[1] 145 NA 130
[[4]]
[1] 135
[[5]]
[1] 105
[[6]]
[1] 115
```
Because all of the systolic blood pressure readings are contained within this `list`, a further application of `map()` will allow us to compute the mean.
```
BP_nested <- BP_nested %>%
mutate(sbp_mean = map(sbp_list, mean, na.rm = TRUE))
BP_nested
```
```
# A tibble: 6 × 5
# Groups: subject, when [6]
subject when data sbp_list sbp_mean
<chr> <chr> <list> <list> <list>
1 BHO before <tibble [2 × 3]> <dbl [2]> <dbl [1]>
2 GWB before <tibble [1 × 3]> <dbl [1]> <dbl [1]>
3 WJC after <tibble [3 × 3]> <dbl [3]> <dbl [1]>
4 GWB after <tibble [1 × 3]> <dbl [1]> <dbl [1]>
5 WJC before <tibble [1 × 3]> <dbl [1]> <dbl [1]>
6 BHO after <tibble [1 × 3]> <dbl [1]> <dbl [1]>
```
`BP_nested` still has a nested structure. However, the column `sbp_mean` is a `list` of `double` vectors, each of which has a single element.
We can use `unnest()` to undo the nesting structure of that column. In this case, we retain the same 6 rows, each corresponding to one subject either before or after intervention.
```
BP_nested %>%
unnest(cols = c(sbp_mean))
```
```
# A tibble: 6 × 5
# Groups: subject, when [6]
subject when data sbp_list sbp_mean
<chr> <chr> <list> <list> <dbl>
1 BHO before <tibble [2 × 3]> <dbl [2]> 158.
2 GWB before <tibble [1 × 3]> <dbl [1]> 120
3 WJC after <tibble [3 × 3]> <dbl [3]> 138.
4 GWB after <tibble [1 × 3]> <dbl [1]> 135
5 WJC before <tibble [1 × 3]> <dbl [1]> 105
6 BHO after <tibble [1 × 3]> <dbl [1]> 115
```
This computation gives the correct mean blood pressure for each subject at each time point.
On the other hand, an application of `unnest()` to the `sbp_list` variable, which has more than one observation for each row, results in a data frame with one row for each observed subject on a specific date. This transforms the data back into the same unit of observation as `BP_full`.
```
BP_nested %>%
unnest(cols = c(sbp_list))
```
```
# A tibble: 9 × 5
# Groups: subject, when [6]
subject when data sbp_list sbp_mean
<chr> <chr> <list> <dbl> <list>
1 BHO before <tibble [2 × 3]> 160 <dbl [1]>
2 BHO before <tibble [2 × 3]> 155 <dbl [1]>
3 GWB before <tibble [1 × 3]> 120 <dbl [1]>
4 WJC after <tibble [3 × 3]> 145 <dbl [1]>
5 WJC after <tibble [3 × 3]> NA <dbl [1]>
6 WJC after <tibble [3 × 3]> 130 <dbl [1]>
7 GWB after <tibble [1 × 3]> 135 <dbl [1]>
8 WJC before <tibble [1 × 3]> 105 <dbl [1]>
9 BHO after <tibble [1 × 3]> 115 <dbl [1]>
```
We use `nest()` or `unnest()` in Chapters [11](ch-learningI.html#ch:learningI), [14](ch-vizIII.html#ch:vizIII), and [20](ch-netsci.html#ch:netsci).
### 6\.2\.5 Example: Gender\-neutral names
In “[A Boy Named Sue](https://en.wikipedia.org/wiki/A_Boy_Named_Sue)” country singer [Johnny Cash](https://en.wikipedia.org/w/index.php?search=Johnny%20Cash)
famously told the story of a boy toughened in life—eventually reaching gratitude—by being given a traditional girl’s name.
The conceit is of course the rarity of being a boy with the name `Sue`, and indeed, `Sue` is given to about 300 times as many girls as boys (at least being recorded in this manner: data entry errors may account for some of these names).
```
babynames %>%
filter(name == "Sue") %>%
group_by(name, sex) %>%
summarize(total = sum(n))
```
```
# A tibble: 2 × 3
# Groups: name [1]
name sex total
<chr> <chr> <int>
1 Sue F 144465
2 Sue M 519
```
On the other hand, some names that are predominantly given to girls are also commonly given to boys.
Although only 15% of people named `Robin` are male, it is easy to think of a few famous men with that name: the actor [Robin Williams](https://en.wikipedia.org/w/index.php?search=Robin%20Williams), the singer [Robin Gibb](https://en.wikipedia.org/w/index.php?search=Robin%20Gibb), and the basketball player [Robin Lopez](https://en.wikipedia.org/w/index.php?search=Robin%20Lopez) (not to mention [*Batman*](https://en.wikipedia.org/w/index.php?search=Batman)’s sidekick).
```
babynames %>%
filter(name == "Robin") %>%
group_by(name, sex) %>%
summarize(total = sum(n))
```
```
# A tibble: 2 × 3
# Groups: name [1]
name sex total
<chr> <chr> <int>
1 Robin F 289395
2 Robin M 44616
```
This computational paradigm (e.g., filtering) works well if you want to look at gender balance in one name at a time, but suppose you want to find the most gender\-neutral names from all 97,310 names in `babynames`?
For this, it would be useful to have the results in a wide format, like the one shown below.
```
babynames %>%
filter(name %in% c("Sue", "Robin", "Leslie")) %>%
group_by(name, sex) %>%
summarize(total = sum(n)) %>%
pivot_wider(
names_from = sex,
values_from = total
)
```
```
# A tibble: 3 × 3
# Groups: name [3]
name F M
<chr> <int> <int>
1 Leslie 266474 112689
2 Robin 289395 44616
3 Sue 144465 519
```
The `pivot_wider()` function can help us generate the wide format. Note that the `sex` variable is the `names_from` used in the conversion.
A fill of zero is appropriate here: For a name like `Aaban` or `Aadam`, where there are no females, the entry for `F` should be zero.
```
baby_wide <- babynames %>%
group_by(sex, name) %>%
summarize(total = sum(n)) %>%
pivot_wider(
names_from = sex,
values_from = total,
values_fill = 0
)
head(baby_wide, 3)
```
```
# A tibble: 3 × 3
name F M
<chr> <int> <int>
1 Aabha 35 0
2 Aabriella 32 0
3 Aada 5 0
```
One way to define “approximately the same” is to take the smaller of the ratios M/F and F/M. If females greatly outnumber males, then F/M will be large, but M/F will be small. If the sexes are about equal, then both ratios will be near one. The smaller will never be greater than one, so the most balanced names are those with the smaller of the ratios near one.
The code to identify
the most balanced gender\-neutral names out of the names with more than 50,000 babies of each sex is shown below.
Remember, a ratio of 1 means exactly balanced; a ratio of 0\.5 means two to one in favor of one sex; 0\.33 means three to one.
(The `pmin()` transformation function returns the smaller of the two arguments for each individual case.)
```
baby_wide %>%
filter(M > 50000, F > 50000) %>%
mutate(ratio = pmin(M / F, F / M) ) %>%
arrange(desc(ratio)) %>%
head(3)
```
```
# A tibble: 3 × 4
name F M ratio
<chr> <int> <int> <dbl>
1 Riley 100881 92789 0.920
2 Jackie 90604 78405 0.865
3 Casey 76020 110165 0.690
```
Riley has been the most gender\-balanced name, followed by Jackie. Where does your name fall on this list?
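To check where any particular name falls, one could filter `baby_wide` for that name and compute the same ratio; a minimal sketch, using "Casey" as a stand-in for your own name:
```
baby_wide %>%
  filter(name == "Casey") %>%
  mutate(ratio = pmin(M / F, F / M))
```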
6\.3 Naming conventions
-----------------------
Like any language, **R** has some rules that you cannot break, but also many conventions that you can—but should not—break. There are a few simple rules that apply when creating a *name* for an object:
* The name cannot start with a digit. So you cannot assign the name `100NCHS` to a data frame, but `NCHS100` is fine. This rule is to make it easy for **R** to distinguish between object names and numbers. It also helps you avoid mistakes such as writing `2pi` when you mean `2*pi`.
* The name cannot contain any punctuation symbols other than `.` and `_`. So `?NCHS` or `N*Hanes` are not legitimate names. However, you can use `.` and `_` in a name.
For reasons that will be explained later, the use of `.` in function names carries a specific meaning, so it should otherwise be avoided; the use of `_` is preferred.
* The case of the letters in the name matters. So `NCHS`, `nchs`, `Nchs`, and `nChs`, etc., are all different names that only look similar to a human reader, not to **R**.
Do not use `.` in function names, to avoid conflicting with internal functions.
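As a quick illustration of these rules, consider the following minimal sketch (the object names here are invented for the example):
```
NCHS100 <- 3   # legal: starts with a letter, no punctuation
nchs100 <- 4   # a different object: names are case-sensitive
# 100NCHS <- 3 # illegal: a name cannot start with a digit
# N*Hanes <- 3 # illegal: * is not an allowed character
c(NCHS100, nchs100)
```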
One of **R**’s strengths is its modularity—many people have contributed many packages that do many different things. However, this decentralized paradigm has resulted in many *different* people writing code using many *different* conventions. The resulting lack of uniformity can make code harder to read. We suggest adopting a style guide and sticking to it—we have attempted to do that in this book. However, the inescapable use of other people’s code results in inevitable deviations from that style.
In this book and in our teaching, we follow the [tidyverse style guide](https://style.tidyverse.org)—which is public, widely adopted, and influential—as closely as possible.
It provides guidance about how and why to adopt a particular style.
Other groups (e.g., Google) have adopted variants of this guide.
This means:
* We use underscores (`_`) in variable and function names. The use of periods (`.`) in function names is restricted to S3 methods.
* We use spaces liberally and prefer multiline, narrow blocks of code to single lines of wide code (although we occasionally relax this to save space on the printed page).
* We use [*snake\_case*](https://en.wikipedia.org/w/index.php?search=snake_case) for the names of things. This means that each “word” is lowercase, and there are no spaces, only underscores. (The **janitor** package provides a function called `clean_names()` that by default turns variable names into snake case; other styles are also supported.)
The **styler** package can be used to reformat code into a format that implements the tidyverse style guide.
Faithfully adopting a consistent style for code can help to improve readability and reduce errors.
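For example, a minimal sketch (the unstyled code string below is invented for illustration):
```
library(styler)
style_text("my_fun<-function( x){x+1}")
```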
6\.4 Data intake
----------------
> “Every easy data format is alike. Every difficult data format is difficult in its own way.”
>
>
> —inspired by [Leo Tolstoy](https://en.wikipedia.org/w/index.php?search=Leo%20Tolstoy) and [Hadley Wickham](https://en.wikipedia.org/w/index.php?search=Hadley%20Wickham)
The tools that we develop in this book allow one to work with data in **R**. However, most data sets are not available in **R** to begin with—they are often stored in a different file format.
While **R** has sophisticated abilities for reading data in a variety of formats, it is not without limits.
For data that are not in a file, one common form of data intake is [*Web scraping*](https://en.wikipedia.org/w/index.php?search=Web%20scraping), in which data from the internet are processed as (structured) text and converted into data.
Such data often have errors that stem from blunders in data entry or from deficiencies in the way data are stored or coded.
Correcting such errors is called [*data cleaning*](https://en.wikipedia.org/w/index.php?search=data%20cleaning).
**R** has its own binary formats for saving objects: the `save()` function writes files usually given the suffix `.rda` (or sometimes, `.RData`) that can contain one or more named objects, while `saveRDS()` writes a single object to a file conventionally given the suffix `.rds`.
Any single object in your **R** environment can be written out using the `saveRDS()` command.
Using the `compress` argument will make these files smaller.
```
saveRDS(mtcars, file = "mtcars.rds", compress = TRUE)
```
This file format is usually an efficient means for storing data, but it is not the most portable.
To load a stored object into your **R** environment, use the `readRDS()` command.
```
mtcars <- readRDS("mtcars.rds")
```
Maintaining the provenance of data from beginning to the end of an analysis is an important part of a reproducible workflow. This can be facilitated by creating one Markdown file or notebook that undertakes the data wrangling and generates an analytic data set (using `saveRDS()`) that can be read (using `readRDS()`) into a second Markdown file.
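A minimal sketch of that two-file workflow, with hypothetical file and object names:
```
# In the data wrangling notebook (e.g., wrangle.Rmd):
analytic <- mtcars %>%
  filter(cyl == 4) %>%
  select(mpg, cyl, wt)
saveRDS(analytic, file = "analytic.rds")

# In the analysis notebook (e.g., analysis.Rmd):
analytic <- readRDS("analytic.rds")
```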
### 6\.4\.1 Data\-table friendly formats
Many formats for data are essentially equivalent to data tables.
When you come across data in a format that you don’t recognize, it is worth checking whether it is one of the data\-table–friendly formats.
Sometimes the [*filename extension*](https://en.wikipedia.org/w/index.php?search=filename%20extension) provides an indication.
Here are several, each with a brief description:
* **CSV**: a non\-proprietary comma\-separated text format that is widely used for data exchange between different software packages. [*CSV*](https://en.wikipedia.org/w/index.php?search=CSV)s are easy to understand, but are not compressed, and therefore can take up more space on disk than other formats.
* **Software\-package specific format**: some common examples include:
+ [*Octave*](https://en.wikipedia.org/w/index.php?search=Octave) (and through that, [*MATLAB*](https://en.wikipedia.org/w/index.php?search=MATLAB)): widely used in engineering and physics
+ [*Stata*](https://en.wikipedia.org/w/index.php?search=Stata): commonly used for economic research
+ [*SPSS*](https://en.wikipedia.org/w/index.php?search=SPSS): commonly used for social science research
+ [*Minitab*](https://en.wikipedia.org/w/index.php?search=Minitab): often used in business applications
+ [*SAS*](https://en.wikipedia.org/w/index.php?search=SAS): often used for large data sets
+ [*Epi*](https://en.wikipedia.org/w/index.php?search=Epi): used by the [*Centers for Disease Control*](https://en.wikipedia.org/w/index.php?search=Centers%20for%20Disease%20Control) (CDC) for health and epidemiology data
* **Relational databases**: the form that much of institutional, actively\-updated data are stored in. This includes business transaction records, government records, Web logs, and so on. (See Chapter [15](ch-sql.html#ch:sql) for a discussion of relational database management systems.)
* **Excel**: a set of proprietary spreadsheet formats heavily used in business. Watch out, though. Just because something is stored in an [*Excel format*](https://en.wikipedia.org/w/index.php?search=Excel%20format) doesn’t mean it is a data table. Excel is sometimes used as a kind of tablecloth for writing down data with no particular scheme in mind.
* **Web\-related**: For example:
+ [*HTML*](https://en.wikipedia.org/w/index.php?search=HTML) (hypertext markup language): `<table>` format
+ [*XML*](https://en.wikipedia.org/w/index.php?search=XML) (extensible markup language) format, a tree\-based document structure
+ [*JSON*](https://en.wikipedia.org/w/index.php?search=JSON) (JavaScript Object Notation) is a common data format that breaks the “rows\-and\-columns” paradigm (see Section [21\.2\.4\.2](ch-big.html#sec:nosql))
+ [*Google Sheets*](https://en.wikipedia.org/w/index.php?search=Google%20Sheets): published as HTML
+ [*application programming interface*](https://en.wikipedia.org/w/index.php?search=application%20programming%20interface) (API)
The procedure for reading data in one of these formats varies depending on the format.
For Excel or Google Sheets data, it is sometimes easiest to use the application software to export the data as a CSV file.
There are also **R** packages for reading directly from either (**readxl** and **googlesheets4**, respectively), which are useful if the spreadsheet is being updated frequently.
For the technical software package formats, the **haven** package provides useful reading and writing functions.
For relational databases, even if they are on a remote server, there are several useful **R** packages that allow you to connect to these databases directly, most notably **dbplyr** and **DBI**.
CSV and HTML `<table>` formats are frequently encountered sources for data scraping, and can be read by the **readr** and **rvest** packages, respectively.
The next subsections give a bit more detail about how to read them into **R**.
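As a brief sketch of what these look like in practice (the file names below are hypothetical):
```
library(readxl)
library(haven)
survey <- read_excel("survey_results.xlsx", sheet = 1)  # Excel workbook
stata_panel <- read_dta("household_panel.dta")          # Stata file
spss_poll <- read_sav("opinion_poll.sav")               # SPSS file
```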
#### 6\.4\.1\.1 CSV (comma separated value) files
This text format can be read with a huge variety of software. It has a data table format, with the values of variables in each case separated by commas. Here is an example of the first several lines of a CSV file:
```
"year","sex","name","n","prop"
1880,"F","Mary",7065,0.07238359
1880,"F","Anna",2604,0.02667896
1880,"F","Emma",2003,0.02052149
1880,"F","Elizabeth",1939,0.01986579
1880,"F","Minnie",1746,0.01788843
1880,"F","Margaret",1578,0.0161672
```
The top row usually (but not always) contains the variable names. Quotation marks are often used at the start and end of character strings—these quotation marks are not part of the content of the string, but are useful if, say, you want to include a comma in the text of a field. CSV files are often named with the `.csv` suffix; it is also common for them to be named with `.txt`, `.dat`, or other things.
You will also see characters other than commas being used to delimit the fields: tabs and vertical bars (or pipes, i.e., `|`) are particularly common.
Be careful with date and time variables in CSV format: these can sometimes be formatted in inconsistent ways that make it more challenging to ingest.
Since reading from a CSV file is so common, several implementations are available.
The `read.csv()` function in base **R** is perhaps the most widely used, but the more recent `read_csv()` function in the **readr** package is noticeably faster for large CSVs.
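For files that use a delimiter other than a comma, the `read_delim()` function in **readr** lets you specify the separator; a minimal sketch with a hypothetical pipe-delimited file:
```
library(readr)
visits <- read_delim("clinic_visits.txt", delim = "|")
# read_tsv() is the analogous convenience function for tab-separated files
```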
CSV files need not exist on your local hard drive.
For example, here is a way to access a `.csv` file over the internet using a URL ([*universal resource locator*](https://en.wikipedia.org/w/index.php?search=universal%20resource%20locator)).
```
mdsr_url <- "https://raw.githubusercontent.com/mdsr-book/mdsr/master/data-raw/"
houses <- mdsr_url %>%
paste0("houses-for-sale.csv") %>%
read_csv()
head(houses, 3)
```
```
# A tibble: 3 × 16
price lot_size waterfront age land_value construction air_cond fuel
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 132500 0.09 0 42 50000 0 0 3
2 181115 0.92 0 0 22300 0 0 2
3 109000 0.19 0 133 7300 0 0 2
# … with 8 more variables: heat <dbl>, sewer <dbl>, living_area <dbl>,
# pct_college <dbl>, bedrooms <dbl>, fireplaces <dbl>, bathrooms <dbl>,
# rooms <dbl>
```
Just as reading a data file from the internet uses a URL, reading a file on your computer uses a complete name, called a [*path*](https://en.wikipedia.org/w/index.php?search=path) to the file.
Although many people are used to using a mouse\-based selector to access their files, being specific about the path to your files is important to ensure the reproducibility of your code (see Appendix [D](ch-reproduce.html#ch:reproduce)).
#### 6\.4\.1\.2 HTML tables
Web pages are HTML documents, which are then translated by a browser to the formatted content that users see. HTML includes facilities for presenting tabular content. The HTML `<table>` markup is often the way human\-readable data is arranged.
Figure 6\.4: Part of a page on mile\-run world records from Wikipedia. Two separate data tables are visible. You can’t tell from this small part of the page, but there are many tables on the page. These two tables are the third and fourth in the page.
When you have the URL of a page containing one or more tables, it is sometimes easy to read them into **R** as data tables.
Since they are not CSVs, we can’t use `read_csv()`. Instead, we use functionality in the **rvest** package to ingest the HTML as a data structure in **R**.
Once you have the content of the Web page, you can translate any tables in the page from HTML to data table format.
In this brief example, we will investigate the progression of the world record time in the mile run, [as detailed on Wikipedia](http://en.wikipedia.org/wiki/Mile_run_world_record_progression).
This page (see Figure [6\.4](ch-dataII.html#fig:wiki-running)) contains several tables, each of which contains a list of new world records for a different class of athlete (e.g., men, women, amateur, professional, etc.).
```
library(rvest)
url <- "http://en.wikipedia.org/wiki/Mile_run_world_record_progression"
tables <- url %>%
read_html() %>%
html_nodes("table")
```
The result, `tables`, is not a data table. Instead, it is a `list` (see Appendix [B](ch-R.html#ch:R)) of the tables found in the Web page. Use `length()` to find how many items there are in the list of tables.
```
length(tables)
```
```
[1] 12
```
You can access any of those tables using the `pluck()` function from the **purrr** package, which extracts items from a `list`.
Unfortunately, as of this writing the `rvest::pluck()` function masks the more useful `purrr::pluck()` function, so we will be specific by using the double\-colon operator.
The first table is `pluck(tables, 1)`, the second table is `pluck(tables, 2)`, and so on.
The third table—which corresponds to amateur men up until 1862—is shown in Table [6\.9](ch-dataII.html#tab:wikipedia-table-three).
```
amateur <- tables %>%
purrr::pluck(3) %>%
html_table()
```
Table 6\.9: The third table embedded in the Wikipedia page on running records.
| Time | Athlete | Nationality | Date | Venue |
| --- | --- | --- | --- | --- |
| 4:52 | Cadet Marshall | United Kingdom | 2 September 1852 | Addiscome |
| 4:45 | Thomas Finch | United Kingdom | 3 November 1858 | Oxford |
| 4:45 | St. Vincent Hammick | United Kingdom | 15 November 1858 | Oxford |
| 4:40 | Gerald Surman | United Kingdom | 24 November 1859 | Oxford |
| 4:33 | George Farran | United Kingdom | 23 May 1862 | Dublin |
Likely of greater interest is the information in the fourth table, which corresponds to the current era of [*International Amateur Athletics Federation*](https://en.wikipedia.org/w/index.php?search=International%20Amateur%20Athletics%20Federation) world records. The first few rows of that table are shown in Table [6\.10](ch-dataII.html#tab:wikipedia-table-four). The last row of that table (not shown) contains the current world record of 3:43\.13, which was set by [Hicham El Guerrouj](https://en.wikipedia.org/w/index.php?search=Hicham%20El%20Guerrouj) of [*Morocco*](https://en.wikipedia.org/w/index.php?search=Morocco) in [*Rome*](https://en.wikipedia.org/w/index.php?search=Rome) on July 7th, 1999\.
```
records <- tables %>%
purrr::pluck(4) %>%
html_table() %>%
select(-Auto) # remove unwanted column
```
Table 6\.10: The fourth table embedded in the Wikipedia page on running records.
| Time | Athlete | Nationality | Date | Venue |
| --- | --- | --- | --- | --- |
| 4:14\.4 | John Paul Jones | United States | 31 May 1913\[6] | Allston, Mass. |
| 4:12\.6 | Norman Taber | United States | 16 July 1915\[6] | Allston, Mass. |
| 4:10\.4 | Paavo Nurmi | Finland | 23 August 1923\[6] | Stockholm |
| 4:09\.2 | Jules Ladoumègue | France | 4 October 1931\[6] | Paris |
| 4:07\.6 | Jack Lovelock | New Zealand | 15 July 1933\[6] | Princeton, N.J. |
| 4:06\.8 | Glenn Cunningham | United States | 16 June 1934\[6] | Princeton, N.J. |
### 6\.4\.2 APIs
An [*application programming interface*](https://en.wikipedia.org/w/index.php?search=application%20programming%20interface) (API) is a protocol for interacting with a computer program that you can’t control.
It is a set of agreed\-upon instructions for using a “[*black\-box*](https://en.wikipedia.org/w/index.php?search=black-box)”—not unlike the manual for a television’s remote control.
APIs provide access to massive troves of public data on the Web, from a vast array of different sources.
Not all APIs are the same, but by learning how to use them, you can dramatically increase your ability to pull data into **R** without having to manually “scrape” it.
If you want to obtain data from a public source, it is a good idea to check whether: a) the organization has a public API; and b) someone has already written an **R** package that interfaces with it.
These packages don’t provide the actual data—they simply provide a series of **R** functions that allow you to access the actual data.
The documentation for each package should explain how to use it to collect data from the original source.
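If no package exists for a particular API, the **httr** and **jsonlite** packages (not used elsewhere in this chapter) provide the building blocks for querying one directly. The sketch below targets a made-up JSON endpoint:
```
library(httr)
library(jsonlite)
resp <- GET("https://api.example.com/v1/records", query = list(year = 2020))
stop_for_status(resp)  # fail loudly if the request was not successful
records_api <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))  # parse the JSON payload
```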
### 6\.4\.3 Cleaning data
A person somewhat knowledgeable about running would have little trouble interpreting Tables [6\.9](ch-dataII.html#tab:wikipedia-table-three) and [6\.10](ch-dataII.html#tab:wikipedia-table-four) correctly.
The `Time` is in minutes and seconds. The `Date` gives the day on which the record was set. When the data table is read into **R**, both `Time` and `Date` are stored as character strings. Before they can be used, they have to be converted into a format that the computer can process as a date and a time. Among other things, this requires dealing with the footnote markers (e.g., `[6]`) at the end of the date information.
[*Data cleaning*](https://en.wikipedia.org/w/index.php?search=Data%20cleaning)
refers to taking the information contained in a variable and transforming it to a form in which that information can be used.
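As a preview of what such cleaning might involve for the `records` table, one possible approach (a sketch, not the code used in the book) is to strip the footnote markers with **stringr** and then convert the columns with **lubridate**:
```
library(stringr)
library(lubridate)
records_clean <- records %>%
  mutate(
    Date = dmy(str_remove(Date, "\\[.*\\]$")),  # "31 May 1913[6]" -> 1913-05-31
    seconds = period_to_seconds(ms(Time))       # "4:14.4" -> 254.4 seconds
  )
```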
#### 6\.4\.3\.1 Recoding
Table [6\.11](ch-dataII.html#tab:house-systems) displays a few variables from the `houses` data table we downloaded earlier.
It describes 1,728 houses for sale in [*Saratoga, NY*](https://en.wikipedia.org/w/index.php?search=Saratoga,%20NY).[11](#fn11)
The full table includes additional variables such as `living_area`, `price`, `bedrooms`, and `bathrooms`.
The data on house systems such as `sewer_type` and `heat_type` have been stored as numbers, even though they are really categorical.
Table 6\.11: Four of the variables from the tables giving features of the Saratoga houses stored as integer codes. Each case is a different house.
| fuel | heat | sewer | construction |
| --- | --- | --- | --- |
| 3 | 4 | 2 | 0 |
| 2 | 3 | 2 | 0 |
| 2 | 3 | 3 | 0 |
| 2 | 2 | 2 | 0 |
| 2 | 2 | 3 | 1 |
There is nothing fundamentally wrong with using integers to encode, say, fuel type, though it may be confusing to interpret results. What is worse is that the numbers imply a meaningful order to the categories when there is none.
To translate the integers to a more informative coding, you first have to find out what the various codes mean. Often, this information comes from the codebook, but sometimes you will need to contact the person who collected the data.
Once you know the translation, you can use spreadsheet software (or the `tribble()` function) to enter them into a data table, like this one for the houses:
```
translations <- mdsr_url %>%
paste0("house_codes.csv") %>%
read_csv()
translations %>% head(5)
```
```
# A tibble: 5 × 3
code system_type meaning
<dbl> <chr> <chr>
1 0 new_const no
2 1 new_const yes
3 1 sewer_type none
4 2 sewer_type private
5 3 sewer_type public
```
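Alternatively, the same lookup table could be typed in directly with `tribble()`; a sketch reproducing the first few rows shown above:
```
translations_manual <- tribble(
  ~code, ~system_type, ~meaning,
  0, "new_const", "no",
  1, "new_const", "yes",
  1, "sewer_type", "none",
  2, "sewer_type", "private",
  3, "sewer_type", "public"
)
```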
The `translations` table describes the codes in a format that makes it easy to add new code values as the need arises. The same information can also be presented in a wide format, as in Table [6\.12](ch-dataII.html#tab:code-vals).
```
codes <- translations %>%
pivot_wider(
names_from = system_type,
values_from = meaning,
values_fill = "invalid"
)
```
Table 6\.12: The Translations data table rendered in a wide format.
| code | new\_const | sewer\_type | central\_air | fuel\_type | heat\_type |
| --- | --- | --- | --- | --- | --- |
| 0 | no | invalid | no | invalid | invalid |
| 1 | yes | none | yes | invalid | invalid |
| 2 | invalid | private | invalid | gas | hot air |
| 3 | invalid | public | invalid | electric | hot water |
| 4 | invalid | invalid | invalid | oil | electric |
In `codes`, there is a column for each system type that translates the integer code to a meaningful term. In cases where the integer has no corresponding term, `invalid` has been entered. This provides a quick way to distinguish between incorrect entries and missing entries.
To carry out the translation, we join each variable, one at a time, to the data table of interest. Note how the `by` value changes for each variable:
```
houses <- houses %>%
left_join(
codes %>% select(code, fuel_type),
by = c(fuel = "code")
) %>%
left_join(
codes %>% select(code, heat_type),
by = c(heat = "code")
) %>%
left_join(
codes %>% select(code, sewer_type),
by = c(sewer = "code")
)
```
Table [6\.13](ch-dataII.html#tab:recode-houses) shows the re\-coded data. We can compare this to the previous display in Table [6\.11](ch-dataII.html#tab:house-systems).
Table 6\.13: The Saratoga houses data with re\-coded categorical variables.
| fuel\_type | heat\_type | sewer\_type |
| --- | --- | --- |
| electric | electric | private |
| gas | hot water | private |
| gas | hot water | public |
| gas | hot air | private |
| gas | hot air | public |
| gas | hot air | private |
#### 6\.4\.3\.2 From strings to numbers
You have seen two major types of variables: quantitative and categorical. You are used to using quoted character strings as the levels of categorical variables, and numbers for quantitative variables.
Often, you will encounter data tables that have variables whose meaning is numeric but whose representation is a character string. This can occur when one or more cases is given a non\-numeric value, e.g., *not available*.
The `parse_number()` function will translate character strings with numerical content into numbers.
The `parse_character()` function goes the other way.
For example, in the `ordway_birds` data, the `Month`, `Day`, and `Year` variables are all being stored as character vectors, even though their evident meaning is numeric.
```
ordway_birds %>%
select(Timestamp, Year, Month, Day) %>%
glimpse()
```
```
Rows: 15,829
Columns: 4
$ Timestamp <chr> "4/14/2010 13:20:56", "", "5/13/2010 16:00:30", "5/13/20…
$ Year <chr> "1972", "", "1972", "1972", "1972", "1972", "1972", "197…
$ Month <chr> "7", "", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7…
$ Day <chr> "16", "", "16", "16", "16", "16", "16", "16", "16", "16"…
```
We can convert the strings to numbers using `mutate()` and `parse_number()`. Note how the empty strings (i.e., `""`) in those fields are automatically converted into `NA`’s, since they cannot be converted into valid numbers.
```
library(readr)
ordway_birds <- ordway_birds %>%
mutate(
Month = parse_number(Month),
Year = parse_number(Year),
Day = parse_number(Day)
)
ordway_birds %>%
select(Timestamp, Year, Month, Day) %>%
glimpse()
```
```
Rows: 15,829
Columns: 4
$ Timestamp <chr> "4/14/2010 13:20:56", "", "5/13/2010 16:00:30", "5/13/20…
$ Year <dbl> 1972, NA, 1972, 1972, 1972, 1972, 1972, 1972, 1972, 1972…
$ Month <dbl> 7, NA, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7…
$ Day <dbl> 16, NA, 16, 16, 16, 16, 16, 16, 16, 16, 17, 18, 18, 18, …
```
#### 6\.4\.3\.3 Dates
Dates are often recorded as character strings (e.g., `29 October 2014`). Among other important properties, dates have a natural order.
When you plot values such as `16 December 2015` and `29 October 2016`, you expect the December date to come after the October date, even though this is not true alphabetically of the string itself.
When plotting a value that is numeric, you expect the axis to be marked with a few round numbers.
A plot from 0 to 100 might have ticks at 0, 20, 40, 60, 80, and 100\.
It is similar for dates.
When you are plotting dates within one month, you expect the day of the month to be shown on the axis.
If you are plotting a range of several years, it would be appropriate to show only the years on the axis.
When you are given dates stored as a character vector, it is usually necessary to convert them to a data type designed specifically for dates.
For instance, in the `ordway_birds` data, the `Timestamp` variable refers to the time the data were transcribed from the original lab notebook to the computer file.
This variable is currently stored as a `character` string, but we can translate it into a more usable date format using functions from the **lubridate** package.
These dates are written in a format showing `month/day/year hour:minute:second`. The `mdy_hms()` function from the **lubridate** package converts strings in this format to a date. Note that the data type of the `When` variable is now `dttm`.
```
library(lubridate)
birds <- ordway_birds %>%
mutate(When = mdy_hms(Timestamp)) %>%
select(Timestamp, Year, Month, Day, When, DataEntryPerson)
birds %>%
glimpse()
```
```
Rows: 15,829
Columns: 6
$ Timestamp <chr> "4/14/2010 13:20:56", "", "5/13/2010 16:00:30", "5…
$ Year <dbl> 1972, NA, 1972, 1972, 1972, 1972, 1972, 1972, 1972…
$ Month <dbl> 7, NA, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7…
$ Day <dbl> 16, NA, 16, 16, 16, 16, 16, 16, 16, 16, 17, 18, 18…
$ When <dttm> 2010-04-14 13:20:56, NA, 2010-05-13 16:00:30, 201…
$ DataEntryPerson <chr> "Jerald Dosch", "Caitlin Baker", "Caitlin Baker", …
```
With the `When` variable now recorded as a timestamp, we can create a sensible plot showing when each of the transcribers completed their work, as in Figure [6\.5](ch-dataII.html#fig:when-and-who2).
```
birds %>%
ggplot(aes(x = When, y = DataEntryPerson)) +
geom_point(alpha = 0.1, position = "jitter")
```
Figure 6\.5: The transcribers of the Ordway Birds from lab notebooks worked during different time intervals.
Many of the same operations that apply to numbers can be used on dates. For example, the range of dates that each transcriber worked can be calculated as a difference in times (i.e., an `interval()`), and shown in Table [6\.14](ch-dataII.html#tab:transcriber-dates). This makes it clear that Jolani worked on the project for nearly a year (329 days), while Abby’s first transcription was also her last.
```
bird_summary <- birds %>%
group_by(DataEntryPerson) %>%
summarize(
start = first(When),
finish = last(When)
) %>%
mutate(duration = interval(start, finish) / ddays(1))
```
Table 6\.14: Starting and ending dates for each transcriber involved in the Ordway Birds project.
| DataEntryPerson | start | finish | duration |
| --- | --- | --- | --- |
| Abby Colehour | 2011\-04\-23 15:50:24 | 2011\-04\-23 15:50:24 | 0\.000 |
| Brennan Panzarella | 2010\-09\-13 10:48:12 | 2011\-04\-10 21:58:56 | 209\.466 |
| Emily Merrill | 2010\-06\-08 09:10:01 | 2010\-06\-08 14:47:21 | 0\.234 |
| Jerald Dosch | 2010\-04\-14 13:20:56 | 2010\-04\-14 13:20:56 | 0\.000 |
| Jolani Daney | 2010\-06\-08 09:03:00 | 2011\-05\-03 10:12:59 | 329\.049 |
| Keith Bradley\-Hewitt | 2010\-09\-21 11:31:02 | 2011\-05\-06 17:36:38 | 227\.254 |
| Mary Catherine Muñiz | 2012\-02\-02 08:57:37 | 2012\-04\-30 14:06:27 | 88\.214 |
There are many similar **lubridate** functions for converting strings in different formats into dates, e.g., `ymd()`, `dmy()`, and so on. There are also functions like `hour()`, `yday()`,
etc. for extracting certain pieces of variables encoded as dates.
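As a brief illustration (a sketch using the `birds` data created above):
```
birds %>%
  mutate(
    hour_entered = hour(When),  # hour of the day the record was transcribed
    day_of_year = yday(When)    # day of the year, from 1 to 366
  ) %>%
  select(When, hour_entered, day_of_year) %>%
  head(3)
```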
Internally, **R** uses several different classes to represent dates and times. For timestamps (also referred to as [*datetime*](https://en.wikipedia.org/w/index.php?search=datetime)s), these classes are `POSIXct` and `POSIXlt`.
For most purposes, you can treat these as being the same, but internally, they are stored differently.
A `POSIXct` object is stored as the number of seconds since the [*UNIX epoch*](https://en.wikipedia.org/w/index.php?search=UNIX%20epoch) (1970\-01\-01\), whereas a `POSIXlt` object is stored as a named list of components (seconds, minutes, hours, day of the month, month, year, and so on).
```
now()
```
```
[1] "2021-07-28 14:13:07 EDT"
```
```
class(now())
```
```
[1] "POSIXct" "POSIXt"
```
```
class(as.POSIXlt(now()))
```
```
[1] "POSIXlt" "POSIXt"
```
For dates that do not include times, the `Date` class is most commonly used.
```
as.Date(now())
```
```
[1] "2021-07-28"
```
#### 6\.4\.3\.4 Factors or strings?
A [*factor*](https://en.wikipedia.org/w/index.php?search=factor) is a special data type used to represent categorical data.
Factors store categorical data efficiently and provide a means to put the categorical levels in whatever order is desired.
Unfortunately, factors also make cleaning data more confusing.
The problem is that it is easy to mistake a factor for a character string, and they have different properties when it comes to converting to a numeric or date form.
This is especially problematic when using the character processing techniques in Chapter [19](ch-text.html#ch:text).
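A minimal illustration of the pitfall: converting a factor directly to a number yields the internal level codes rather than the values you see printed.
```
x <- factor(c("10", "20", "5"))
as.numeric(x)                  # 1 2 3 -- the level codes, not the values
as.numeric(as.character(x))    # 10 20 5 -- convert to character first
parse_number(as.character(x))  # 10 20 5 -- the readr equivalent
```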
By default, `readr::read_csv()` will interpret character strings as strings and not as factors.
Other functions, such as `read.csv()` prior to version 4\.0 of **R**, convert character strings into factors by default.
Cleaning such data often requires converting them back to a character format using `parse_character()`.
Failing to do this when needed can result in completely erroneous results without any warning.
The **forcats** package was written to improve support for wrangling factor variables.
For this reason, the data tables used in this book have been stored with categorical or text data in character format. Be aware that data provided by other packages do not necessarily follow this convention. If you get mysterious results when working with such data, consider the possibility that you are working with factors rather than character vectors. Recall that `summary()`, `glimpse()`, and `str()` will all reveal the data types of each variable in a data frame.
It’s always a good idea to carefully check all variables and data wrangling operations to ensure
that correct values are generated.
Such data auditing and the use of automated data consistency checking can decrease the likelihood of data integrity errors.
### 6\.4\.4 Example: Japanese nuclear reactors
Dates and times are an important aspect of many analyses.
In the example below, the vector `example` contains human\-readable datetimes stored as `character` by **R**.
The `ymd_hms()` function from **lubridate** will convert this into `POSIXct`—a datetime format.
This makes it possible for **R** to do date arithmetic.
```
library(lubridate)
example <- c("2021-04-29 06:00:00", "2021-12-31 12:00:00")
str(example)
```
```
chr [1:2] "2021-04-29 06:00:00" "2021-12-31 12:00:00"
```
```
converted <- ymd_hms(example)
str(converted)
```
```
POSIXct[1:2], format: "2021-04-29 06:00:00" "2021-12-31 12:00:00"
```
```
converted
```
```
[1] "2021-04-29 06:00:00 UTC" "2021-12-31 12:00:00 UTC"
```
```
converted[2] - converted[1]
```
```
Time difference of 246 days
```
We will use this functionality to analyze data on nuclear reactors in Japan.
Figure [6\.6](ch-dataII.html#fig:wikijapan) displays the first part of this table as of the summer of 2016\.
Figure 6\.6: Screenshot of Wikipedia’s list of Japanese nuclear reactors.
```
tables <- "http://en.wikipedia.org/wiki/List_of_nuclear_reactors" %>%
read_html() %>%
html_nodes(css = "table")
idx <- tables %>%
html_text() %>%
str_detect("Fukushima Daiichi") %>%
which()
reactors <- tables %>%
purrr::pluck(idx) %>%
html_table(fill = TRUE) %>%
janitor::clean_names() %>%
rename(
reactor_type = reactor,
reactor_model = reactor_2,
capacity_net = capacity_in_mw,
capacity_gross = capacity_in_mw_2
) %>%
tail(-1)
glimpse(reactors)
```
```
Rows: 68
Columns: 10
$ name <chr> "Fugen", "Fukushima Daiichi", "Fukushima Daii…
$ unit_no <chr> "1", "1", "2", "3", "4", "5", "6", "1", "2", …
$ reactor_type <chr> "HWLWR", "BWR", "BWR", "BWR", "BWR", "BWR", "…
$ reactor_model <chr> "ATR", "BWR-3", "BWR-4", "BWR-4", "BWR-4", "B…
$ status <chr> "Shut down", "Inoperable", "Inoperable", "Ino…
$ capacity_net <chr> "148", "439", "760", "760", "760", "760", "10…
$ capacity_gross <chr> "165", "460", "784", "784", "784", "784", "11…
$ construction_start <chr> "10 May 1972", "25 July 1967", "9 June 1969",…
$ commercial_operation <chr> "20 March 1979", "26 March 1971", "18 July 19…
$ closure <chr> "29 March 2003", "19 May 2011", "19 May 2011"…
```
We see that among the first entries are the ill\-fated [*Fukushima Daiichi*](https://en.wikipedia.org/w/index.php?search=Fukushima%20Daiichi) reactors. The
`mutate()` function can be used in conjunction with the `dmy()` function from the **lubridate** package to wrangle these data into a better form.
```
reactors <- reactors %>%
mutate(
plant_status = ifelse(
str_detect(status, "Shut down"),
"Shut down", "Not formally shut down"
),
capacity_net = parse_number(capacity_net),
construct_date = dmy(construction_start),
operation_date = dmy(commercial_operation),
closure_date = dmy(closure)
)
glimpse(reactors)
```
```
Rows: 68
Columns: 14
$ name <chr> "Fugen", "Fukushima Daiichi", "Fukushima Daii…
$ unit_no <chr> "1", "1", "2", "3", "4", "5", "6", "1", "2", …
$ reactor_type <chr> "HWLWR", "BWR", "BWR", "BWR", "BWR", "BWR", "…
$ reactor_model <chr> "ATR", "BWR-3", "BWR-4", "BWR-4", "BWR-4", "B…
$ status <chr> "Shut down", "Inoperable", "Inoperable", "Ino…
$ capacity_net <dbl> 148, 439, 760, 760, 760, 760, 1067, NA, 1067,…
$ capacity_gross <chr> "165", "460", "784", "784", "784", "784", "11…
$ construction_start <chr> "10 May 1972", "25 July 1967", "9 June 1969",…
$ commercial_operation <chr> "20 March 1979", "26 March 1971", "18 July 19…
$ closure <chr> "29 March 2003", "19 May 2011", "19 May 2011"…
$ plant_status <chr> "Shut down", "Not formally shut down", "Not f…
$ construct_date <date> 1972-05-10, 1967-07-25, 1969-06-09, 1970-12-…
$ operation_date <date> 1979-03-20, 1971-03-26, 1974-07-18, 1976-03-…
$ closure_date <date> 2003-03-29, 2011-05-19, 2011-05-19, 2011-05-…
```
How have these plants evolved over time? It seems likely that as nuclear technology has progressed, plants should see an increase in capacity. A number of these reactors have been shut down in recent years. Are there changes in capacity related to the age of the plant? Figure [6\.7](ch-dataII.html#fig:japannukes) displays the data.
```
ggplot(
data = reactors,
aes(x = construct_date, y = capacity_net, color = plant_status
)
) +
geom_point() +
geom_smooth() +
xlab("Date of Plant Construction") +
ylab("Net Plant Capacity (MW)")
```
Figure 6\.7: Distribution of capacity of Japanese nuclear power plants over time.
Indeed, reactor capacity has tended to increase over time, while the older reactors were more likely
to have been formally shut down. While it would have been straightforward
to code these data by hand, automating data ingestion for larger and more
complex tables is more efficient and less error\-prone.
6\.5 Further resources
----------------------
The tidyverse style guide (<https://style.tidyverse.org>) merits a close read by all **R** users.
Broman and Woo (2018\) describe helpful tips for data organization in spreadsheets.
The **tidyr** package, and in particular Hadley Wickham (2020c), provides principles for tidy data.
The corresponding paper on tidy data (H. Wickham 2014\) builds upon notions of normal forms—common to database designers from computer science—to describe a process of thinking about how data should be stored and formatted.
There are many **R** packages that do nothing other than provide access to a public API from within **R**.
There are far too many API packages to list here, but a fair number of them are maintained by the [rOpenSci group](https://ropensci.org/packages/).
In fact, several of the packages referenced in this book, including the **twitteR** and **aRxiv** packages in Chapter [19](ch-text.html#ch:text), and the **plotly** package in Chapter [14](ch-vizIII.html#ch:vizIII), provide interfaces to APIs.
The [CRAN task view on Web Technologies](https://cran.r-project.org/web/views/WebTechnologies.html) lists hundreds more packages, including **Rfacebook**, **instaR**, **FlickrAPI**, **tumblR**, and **Rlinkedin**.
The **RSocrata** package facilitates the use of [*Socrata*](https://en.wikipedia.org/w/index.php?search=Socrata), which is itself an API for querying—among other things—the [NYC Open Data](https://nycopendata.socrata.com/) platform.
6\.6 Exercises
--------------
**Problem 1 (Easy)**: In the `Marriage` data set included in `mosaic`, the `appdate`, `ceremonydate`, and `dob` variables are encoded as factors, even though they are dates. Use `lubridate` to convert those three columns into a date format.
```
library(mosaic)
Marriage %>%
select(appdate, ceremonydate, dob) %>%
glimpse(width = 50)
```
```
Rows: 98
Columns: 3
$ appdate <date> 1996-10-29, 1996-11-12, 19…
$ ceremonydate <date> 1996-11-09, 1996-11-12, 19…
$ dob <date> 2064-04-11, 2064-08-06, 20…
```
**Problem 2 (Easy)**: Consider the following pipeline:
```
library(tidyverse)
mtcars %>%
filter(cyl == 4) %>%
select(mpg, cyl)
```
```
mpg cyl
Datsun 710 22.8 4
Merc 240D 24.4 4
Merc 230 22.8 4
Fiat 128 32.4 4
Honda Civic 30.4 4
Toyota Corolla 33.9 4
Toyota Corona 21.5 4
Fiat X1-9 27.3 4
Porsche 914-2 26.0 4
Lotus Europa 30.4 4
Volvo 142E 21.4 4
```
Rewrite this in nested form on a single line. Which set of commands do you prefer and why?
**Problem 3 (Easy)**: Consider the values returned by the `as.numeric()` and `parse_number()` functions when applied to the following vectors. Describe the results and their implication.
```
x1 <- c("1900.45", "$1900.45", "1,900.45", "nearly $2000")
x2 <- as.factor(x1)
```
**Problem 4 (Medium)**: Find an interesting Wikipedia page with a table, scrape the data from it, and generate a figure that tells an interesting story. Include an interpretation of the figure.
**Problem 5 (Medium)**: Generate the code to convert the following data frame to wide format.
```
# A tibble: 4 × 6
grp sex meanL sdL meanR sdR
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 A F 0.225 0.106 0.34 0.0849
2 A M 0.47 0.325 0.57 0.325
3 B F 0.325 0.106 0.4 0.0707
4 B M 0.547 0.308 0.647 0.274
```
The result should look like the following display.
```
# A tibble: 2 × 9
grp F.meanL F.meanR F.sdL F.sdR M.meanL M.meanR M.sdL M.sdR
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 A 0.225 0.34 0.106 0.0849 0.47 0.57 0.325 0.325
2 B 0.325 0.4 0.106 0.0707 0.547 0.647 0.308 0.274
```
Hint: use `pivot_longer()` in conjunction with `pivot_wider()`.
**Problem 6 (Medium)**: The `HELPfull` data within the `mosaicData` package contains information about the Health Evaluation and Linkage to Primary Care (HELP) randomized trial in *tall* format.
1. Generate a table of the data for subjects (`ID`) 1, 2, and 3 that includes the `ID` variable, the `TIME` variable, and the `DRUGRISK` and `SEXRISK` variables (measures of drug and sex risk\-taking behaviors, respectively).
2. The HELP trial was designed to collect information at 0, 6, 12, 18, and 24 month intervals. At which timepoints were measurements available on the `*RISK` variables for subject 3?
3. Let’s restrict our attention to the data from the baseline (`TIME = 0`) and 6\-month data. Use the `pivot_wider()` function from the `tidyr` package to create a table that looks like the following:
```
# A tibble: 3 × 5
ID DRUGRISK_0 DRUGRISK_6 SEXRISK_0 SEXRISK_6
<int> <int> <int> <int> <int>
1 1 0 0 4 1
2 2 0 0 7 0
3 3 20 13 2 4
```
4. Repeat this process using all subjects. What is the Pearson correlation between the baseline (`TIME = 0`) and 6\-month `DRUGRISK` scores? Repeat this for the `SEXRISK` scores. (Hint: use the `use = "complete.obs"` option from the `cor()` function.)
**Problem 7 (Medium)**: An analyst wants to calculate the pairwise differences between the Treatment and Control values for a small data set from a crossover trial (all subjects received both treatments) that
consists of the following observations.
```
ds1
```
```
# A tibble: 6 × 3
id group vals
<int> <chr> <dbl>
1 1 T 4
2 2 T 6
3 3 T 8
4 1 C 5
5 2 C 6
6 3 C 10
```
Then use the following code to create the new `diff` variable.
```
Treat <- filter(ds1, group == "T")
Control <- filter(ds1, group == "C")
all <- mutate(Treat, diff = Treat$vals - Control$vals)
all
```
Verify that this code works for this example and generates the correct values of \\(\-1\\), 0, and \\(\-2\\). Describe two problems that might arise if the data set is not sorted in a particular
order or if one of the observations is missing for one of the subjects. Provide an alternative approach to generate this
variable that is more robust (hint: use `pivot_wider`).
**Problem 8 (Medium)**: Write a function called `count_seasons` that, when given a teamID, will count the number of seasons the team played in the `Teams` data frame from the `Lahman` package.
**Problem 9 (Medium)**: Replicate the functionality of `make_babynames_dist()` from the `mdsr` package to wrangle the original tables from the `babynames` package.
**Problem 10 (Medium)**: Consider the number of home runs hit (`HR`) and home runs allowed (`HRA`) for the Chicago Cubs (\\(CHN\\)) baseball team. Reshape the `Teams` data from the `Lahman` package into “long” format and plot a time series conditioned on whether the HRs that involved the Cubs were hit by them or allowed by them.
**Problem 11 (Medium)**: Using the approach described in Section 6\.4\.1\.2 of the text, find another table in Wikipedia that can be scraped and visualized. Be sure to interpret your graphical display.
6\.7 Supplementary exercises
----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-dataII.html\#dataII\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-dataII.html#dataII-online-exercises)
**Problem 1 (Easy)**: What type of join operation is depicted below?
**Problem 2 (Easy)**: What type of `tidyr` operation is depicted below?
**Problem 3 (Easy)**: What type of `tidyr` operation is depicted below?
---
6\.1 Tidy data
--------------
### 6\.1\.1 Motivation
[*Gapminder*](https://en.wikipedia.org/w/index.php?search=Gapminder) (Rosling, Rönnlund, and Rosling 2005\) is the brainchild of the late Swedish physician and public health researcher [Hans Rosling](https://en.wikipedia.org/w/index.php?search=Hans%20Rosling).
Gapminder contains data about countries over time for a variety of different variables such as the prevalence of [*HIV*](https://en.wikipedia.org/w/index.php?search=HIV) (human immunodeficiency virus) among adults aged 15 to 49 and other health and economic indicators. These data are stored in [*Google Sheets*](https://en.wikipedia.org/w/index.php?search=Google%20Sheets), or one can download them as [*Microsoft Excel*](https://en.wikipedia.org/w/index.php?search=Microsoft%20Excel) workbooks. The typical presentation of a small subset of such data is shown below, where we have used the **googlesheets4** package to pull these data directly into **R**. (See Section [6\.2\.4](ch-dataII.html#sec:nest) for a description of the `unnest()` function.)
```
library(tidyverse)
library(mdsr)
library(googlesheets4)
hiv_key <- "1kWH_xdJDM4SMfT_Kzpkk-1yuxWChfurZuWYjfmv51EA"
hiv <- read_sheet(hiv_key) %>%
rename(Country = 1) %>%
filter(
Country %in% c("United States", "France", "South Africa")
) %>%
select(Country, `1979`, `1989`, `1999`, `2009`) %>%
unnest(cols = c(`2009`)) %>%
mutate(across(matches("[0-9]"), as.double))
hiv
```
```
# A tibble: 3 × 5
Country `1979` `1989` `1999` `2009`
<chr> <dbl> <dbl> <dbl> <dbl>
1 France NA NA 0.3 0.4
2 South Africa NA NA 14.8 17.2
3 United States 0.0318 NA 0.5 0.6
```
The data set has the form of a two\-dimensional array where each of the \\(n\=3\\) rows represents a country and each of the \\(p\=4\\) columns is a year.
Each entry represents the percentage of adults aged 15 to 49 living with HIV in the \\(i^{th}\\) country in the \\(j^{th}\\) year.
This presentation of the data has some advantages.
First, it is possible (with a big enough display) to *see* all of the data.
One can quickly follow the trend over time for a particular country, and one can also estimate quite easily the percentage of data that is missing (e.g., `NA`).
If visual inspection is the primary analytical technique, this [*spreadsheet*](https://en.wikipedia.org/w/index.php?search=spreadsheet)\-style presentation can be convenient.
Alternatively, consider this presentation of those same data.
```
hiv %>%
pivot_longer(-Country, names_to = "Year", values_to = "hiv_rate")
```
```
# A tibble: 12 × 3
Country Year hiv_rate
<chr> <chr> <dbl>
1 France 1979 NA
2 France 1989 NA
3 France 1999 0.3
4 France 2009 0.4
5 South Africa 1979 NA
6 South Africa 1989 NA
7 South Africa 1999 14.8
8 South Africa 2009 17.2
9 United States 1979 0.0318
10 United States 1989 NA
11 United States 1999 0.5
12 United States 2009 0.6
```
While our data can still be represented by a two\-dimensional array, it now has \\(np\=12\\) rows and just three columns. Visual inspection of the data is now more difficult, since our data are long and very narrow—the aspect ratio is not similar to that of our screen.
It turns out that there are substantive reasons to prefer the long (or tall), narrow version of these data. With multiple tables (see Chapter [15](ch-sql.html#ch:sql)), it is a more efficient way for the computer to store and retrieve the data. It is more convenient for the purpose of data analysis. And it is more scalable, in that the addition of a second variable simply contributes another column, whereas to add another variable to the spreadsheet presentation would require a confusing three\-dimensional view, multiple tabs in the spreadsheet, or worse, [*merged cells*](https://en.wikipedia.org/w/index.php?search=merged%20cells).
These gains come at a cost: we have relinquished our ability to *see all the data at once*. When data sets are small, being able to see them all at once can be useful, and even comforting. But in this era of big data, a quest to see all the data at once in a spreadsheet layout is a [*fool’s errand*](https://en.wikipedia.org/w/index.php?search=fool's%20errand). Learning to manage data via programming frees us from the [*click\-and\-drag*](https://en.wikipedia.org/w/index.php?search=click-and-drag) paradigm popularized by spreadsheet applications, allows us to work with data of arbitrary size, and reduces errors. Recording our data management operations in code also makes them reproducible (see Appendix [D](ch-reproduce.html#ch:reproduce))—an increasingly necessary trait in this era of collaboration.
It enables us to fully separate the raw data from our analysis, which is difficult to achieve using a spreadsheet.
Always keep your raw data and your analysis in separate files. Store the uncorrected data file (with errors and problems) and make corrections with a script file (see Appendix [D](ch-reproduce.html#ch:reproduce)) that transforms the raw data into the data that will actually be analyzed. This process will maintain the provenance of your data and allow analyses to be updated with new data without having to start data wrangling from scratch.
The long, narrow format for the [*Gapminder*](https://en.wikipedia.org/w/index.php?search=Gapminder) data that we have outlined above is called [*tidy data*](https://en.wikipedia.org/w/index.php?search=tidy%20data) (H. Wickham 2014\). In what follows, we will further expand upon this notion and develop more sophisticated techniques for wrangling data.
### 6\.1\.2 What are tidy data?
Data can be as simple as a column of numbers in a spreadsheet file or as complex as the electronic medical records collected by a hospital. A newcomer to working with data may expect each source of data to be organized in a unique way and to require unique techniques. The expert, however, has learned to operate with a small set of standard tools. As you’ll see, each of the standard tools performs a comparatively simple task. Combining those simple tasks in appropriate ways is the key to dealing with complex data.
One reason the individual tools can be simple is that each tool gets applied to data arranged in a simple but precisely defined pattern called [*tidy data*](https://en.wikipedia.org/w/index.php?search=tidy%20data).
Tidy data exists in systematically defined [*data tables*](https://en.wikipedia.org/w/index.php?search=data%20tables) (e.g., the rectangular arrays of data seen previously).
Note that not all data tables are tidy.
To illustrate, Table [6\.1](ch-dataII.html#tab:names-short1) shows a handful of entries from a large [*United States Social Security Administration*](https://en.wikipedia.org/w/index.php?search=United%20States%20Social%20Security%20Administration) tabulation of names given to babies.
In particular, the table shows how many babies of each sex were given each name in each year.
Table 6\.1: A data table showing how many babies were given each name in each year in the United States, for a few names.
| year | sex | name | n |
| --- | --- | --- | --- |
| 1999 | M | Kavon | 104 |
| 1984 | F | Somaly | 6 |
| 2017 | F | Dnylah | 8 |
| 1918 | F | Eron | 6 |
| 1992 | F | Arleene | 5 |
| 1977 | F | Alissia | 5 |
| 1919 | F | Bular | 10 |
Table [6\.1](ch-dataII.html#tab:names-short1) shows that there were 104 boys named Kavon born in the U.S. in 1999 and 6 girls named Somaly born in 1984\.
As a whole, the `babynames` data table covers the years 1880 through 2017 and includes a total of 348,120,517 individuals, somewhat larger than the current population of the U.S.
The data in Table [6\.1](ch-dataII.html#tab:names-short1) are *tidy* because they are organized according to two simple rules.
1. The rows, called [*cases*](https://en.wikipedia.org/w/index.php?search=cases) or observations, each refer to a specific, unique, and similar sort of thing, e.g., girls named Somaly in 1984\.
2. The columns, called variables, each have the same sort of value recorded for each row. For instance, `n` gives the number of babies for each case; `sex` tells which gender was assigned at birth.
When data are in tidy form, it is relatively straightforward to transform the data into arrangements that are more useful for answering interesting questions. For instance, you might wish to know which were the most popular baby names over all the years. Even though Table [6\.1](ch-dataII.html#tab:names-short1) contains the popularity information implicitly, we need to rearrange these data by adding up the counts for a name across all the years before the popularity becomes obvious, as in Table [6\.2](ch-dataII.html#tab:names-popular1).
```
popular_names <- babynames %>%
group_by(sex, name) %>%
summarize(total_births = sum(n)) %>%
arrange(desc(total_births))
```
Table 6\.2: The most popular baby names across all years.
| sex | name | total\_births |
| --- | --- | --- |
| M | James | 5150472 |
| M | John | 5115466 |
| M | Robert | 4814815 |
| M | Michael | 4350824 |
| F | Mary | 4123200 |
| M | William | 4102604 |
| M | David | 3611329 |
| M | Joseph | 2603445 |
| M | Richard | 2563082 |
| M | Charles | 2386048 |
The process of transforming information that is implicit in a data table into another data table that gives the information explicitly is called [*data wrangling*](https://en.wikipedia.org/w/index.php?search=data%20wrangling).
The wrangling itself is accomplished by using [*data verbs*](https://en.wikipedia.org/w/index.php?search=data%20verbs) that take a tidy data table and transform it into another tidy data table in a different form.
In Chapters [4](ch-dataI.html#ch:dataI) and [5](ch-join.html#ch:join), you were introduced to several [*data verbs*](https://en.wikipedia.org/w/index.php?search=data%20verbs).
Figure 6\.1: Ward and precinct votes cast in the 2013 Minneapolis mayoral election.
Figure [6\.1](ch-dataII.html#fig:minn-vote-1) displays results from the [*Minneapolis*](https://en.wikipedia.org/w/index.php?search=Minneapolis) mayoral election.
Unlike `babynames`, it is not in tidy form, though the display is attractive and neatly laid out.
There are helpful labels and summaries that make it easy for a person to read and draw conclusions.
(For instance, Ward 1 had a higher voter turnout than Ward 2, and both wards were lower than the city total.)
However, being neat is not what makes data *tidy*. Figure [6\.1](ch-dataII.html#fig:minn-vote-1) violates the first rule for tidy data.
* **Rule 1**: The rows, called [*cases*](https://en.wikipedia.org/w/index.php?search=cases), each must represent the same underlying attribute, that is, the same kind of thing.
That’s not true in Figure [6\.1](ch-dataII.html#fig:minn-vote-1).
For most of the table, the rows represent a single precinct.
But other rows give ward or city\-wide totals.
The first two rows are captions describing the data, not cases.
* **Rule 2**: Each column is a variable containing the same type of value for each case.
That’s mostly true in Figure [6\.1](ch-dataII.html#fig:minn-vote-1), but the tidy pattern is interrupted by labels that are not variables. For instance, the first two cells in row 15 are the label “Ward 1 Subtotal,” which is different from the ward/precinct identifiers that are the values in most of the first column.
Conforming to the rules for tidy data simplifies summarizing and analyzing data. For instance, in the tidy `babynames` table, it is easy (for a computer) to find the total number of babies: just add up all the numbers in the `n` variable. It is similarly easy to find the number of cases: just count the rows. And if you want to know the total number of Ahmeds or Sherinas across the years, there is an easy way to do that.
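As a minimal sketch of those computations (assuming the **babynames** package is loaded alongside the **tidyverse**; the exact totals will depend on the package version installed):
```
library(babynames)
# Total number of babies: add up all of the values in the n variable
babynames %>%
  summarize(total_babies = sum(n))
# Number of cases: count the rows
nrow(babynames)
# Total number of Ahmeds and Sherinas across all of the years
babynames %>%
  filter(name %in% c("Ahmed", "Sherina")) %>%
  group_by(name) %>%
  summarize(total = sum(n))
```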
In contrast, it would be more difficult in the [*Minneapolis*](https://en.wikipedia.org/w/index.php?search=Minneapolis) election data to find, say, the total number of ballots cast. If you take the seemingly obvious approach and add up the numbers in column I of Figure [6\.1](ch-dataII.html#fig:minn-vote-1) (labeled “Total Ballots Cast”), the result will be *three times* the true number of ballots, because some of the rows contain summaries, not cases.
Indeed, if you wanted to do calculations based on the Minneapolis election data, you would be far better off to put it in a tidy form.
Table 6\.3: A selection from the Minneapolis election data in tidy form.
| ward | precinct | registered | voters | absentee | total\_turnout |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | 28 | 492 | 27 | 0\.272 |
| 1 | 4 | 29 | 768 | 26 | 0\.366 |
| 1 | 7 | 47 | 291 | 8 | 0\.158 |
| 2 | 1 | 63 | 1011 | 39 | 0\.364 |
| 2 | 4 | 53 | 117 | 3 | 0\.073 |
| 2 | 7 | 39 | 138 | 7 | 0\.138 |
| 2 | 10 | 87 | 196 | 5 | 0\.069 |
| 3 | 3 | 71 | 893 | 101 | 0\.374 |
| 3 | 6 | 102 | 927 | 71 | 0\.353 |
The tidy form in Table [6\.3](ch-dataII.html#tab:vote-summary) is, admittedly, not as attractive as the form published by the [*Minneapolis*](https://en.wikipedia.org/w/index.php?search=Minneapolis) government.
But it is much easier to use for the purpose of generating summaries and analyses.
Once data are in a tidy form, you can present them in ways that can be more effective than a formatted spreadsheet. For example, the data graphic in Figure [6\.2](ch-dataII.html#fig:ward-turnouts) presents the turnout within each precinct for each ward in a way that makes it easy to see how much variation there is within and among wards and precincts.
Figure 6\.2: A graphical depiction of voter turnout by precinct in the different wards.
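A plot along these lines can be sketched directly from the tidy table. The code below is only a sketch: `vote_tidy` is a placeholder name for a data frame with the columns shown in Table [6\.3](ch-dataII.html#tab:vote-summary), and the result will not exactly reproduce Figure [6\.2](ch-dataII.html#fig:ward-turnouts).
```
# Sketch only: vote_tidy is a placeholder for a data frame with the
# columns of Table 6.3 (ward, precinct, total_turnout, ...)
vote_tidy %>%
  ggplot(aes(x = factor(ward), y = total_turnout)) +
  geom_point(alpha = 0.5) +
  labs(x = "Ward", y = "Turnout by precinct")
```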
The tidy format also makes it easier to bring together data from different sources. For instance, to explain the variation in voter turnout, you might want to consider variables such as party affiliation, age, income, etc.
Such data might be available on a ward\-by\-ward basis from other records, such as public voter registration logs and census records.
Tidy data can be wrangled into forms that can be connected to one another (i.e., using the `inner_join()` function from Chapter [5](ch-join.html#ch:join)).
This task would be difficult if you had to deal with an idiosyncratic format for each different source of data.
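For example, a ward\-level table of demographic variables could be attached to the precinct\-level turnout data with a single join. Here `ward_demographics` is a hypothetical table with one row per ward, used only for illustration.
```
# Hypothetical: ward_demographics has one row per ward, keyed by ward
vote_tidy %>%
  inner_join(ward_demographics, by = "ward")
```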
#### 6\.1\.2\.1 Variables
In data science, the word [*variable*](https://en.wikipedia.org/w/index.php?search=variable) has a different meaning than in mathematics.
In [*algebra*](https://en.wikipedia.org/w/index.php?search=algebra), a variable is an unknown quantity.
In data, a variable is known—it has been measured. Rather, the word [*variable*](https://en.wikipedia.org/w/index.php?search=variable) refers to a specific quantity or quality that can vary from case to case.
There are two major types of variables:
* **Categorical variables**: record type or category and often take the form of a word.
* **Quantitative variables**: record a numerical attribute. A quantitative variable is just what it sounds like: a number.
A [*categorical variable*](https://en.wikipedia.org/w/index.php?search=categorical%20variable) tells you into which category or group a case falls.
For instance, in the baby names data table, `sex` is a categorical variable with two levels `F` and `M`, standing for female and male.
Similarly, the `name` variable is categorical. It happens that there are 97,310 different levels for `name`, ranging from `Aaron`, `Ab`, and `Abbie` to `Zyhaire`, `Zylis`, and `Zymya`.
#### 6\.1\.2\.2 Cases and what they represent
As noted previously, a row of a tidy data table refers to a case.
To this point, you may have little reason to prefer the word *case* to *row*.
When working with a data table, it is important to keep in mind what a case stands for in the real world.
Sometimes the meaning is obvious.
For instance, Table [6\.4](ch-dataII.html#tab:indiv-ballots) is a tidy data table showing the ballots in the Minneapolis mayoral election in 2013\.
Each case is an individual voter’s ballot.
(The voters were directed to mark their ballot with their first choice, second choice, and third choice among the candidates.
This is part of [a procedure](http://vote.minneapolismn.gov/rcv) called [*rank choice voting*](https://en.wikipedia.org/w/index.php?search=rank%20choice%20voting).)
Table 6\.4: Individual ballots in the Minneapolis election. Each voter votes in one precinct within one ward. The ballot marks the voter’s first three choices for mayor.
| Precinct | First | Second | Third | Ward |
| --- | --- | --- | --- | --- |
| P\-04 | undervote | undervote | undervote | W\-6 |
| P\-06 | BOB FINE | MARK ANDREW | undervote | W\-10 |
| P\-02D | NEAL BAXTER | BETSY HODGES | DON SAMUELS | W\-7 |
| P\-01 | DON SAMUELS | undervote | undervote | W\-5 |
| P\-03 | CAM WINTON | DON SAMUELS | OLE SAVIOR | W\-1 |
The case in Table [6\.4](ch-dataII.html#tab:indiv-ballots) is a different sort of thing than the case in Table [6\.3](ch-dataII.html#tab:vote-summary). In Table [6\.3](ch-dataII.html#tab:vote-summary), a case is a ward in a precinct. But in Table [6\.4](ch-dataII.html#tab:indiv-ballots), the case is an individual ballot. Similarly, in the baby names data (Table [6\.1](ch-dataII.html#tab:names-short1)), a case is a name and sex and year while in Table [6\.2](ch-dataII.html#tab:names-popular1) the case is a name and sex.
When thinking about cases, ask this question: What description would make every case unique? In the vote summary data, a precinct does not uniquely identify a case. Each individual precinct appears in several rows. But each precinct and ward combination appears once and only once. Similarly, in Table [6\.1](ch-dataII.html#tab:names-short1), `name` and `sex` do not specify a unique case. Rather, you need the combination of `name-sex-year` to identify a unique row.
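One way to check a candidate key in code is to count how many rows share each combination of the identifying variables; any combination that appears more than once is not unique. A sketch using `babynames` (the second query should return zero rows):
```
# name and sex alone do not identify a unique case: many combinations repeat
babynames %>%
  group_by(name, sex) %>%
  summarize(num_rows = n()) %>%
  filter(num_rows > 1)
# adding year makes each combination unique, so nothing should be returned
babynames %>%
  group_by(name, sex, year) %>%
  summarize(num_rows = n()) %>%
  filter(num_rows > 1)
```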
#### 6\.1\.2\.3 Runners and races
Table [6\.5](ch-dataII.html#tab:race-excerpt) displays some of the results from a 10\-mile running race held each year in Washington, D.C.
Table 6\.5: An excerpt of runners’ performance over time in a 10\-mile race.
| name.yob | sex | age | year | gun |
| --- | --- | --- | --- | --- |
| jane polanek 1974 | F | 32 | 2006 | 114\.5 |
| jane poole 1948 | F | 55 | 2003 | 92\.7 |
| jane poole 1948 | F | 56 | 2004 | 87\.3 |
| jane poole 1948 | F | 57 | 2005 | 85\.0 |
| jane poole 1948 | F | 58 | 2006 | 80\.8 |
| jane poole 1948 | F | 59 | 2007 | 78\.5 |
| jane schultz 1964 | F | 35 | 1999 | 91\.4 |
| jane schultz 1964 | F | 37 | 2001 | 79\.1 |
| jane schultz 1964 | F | 38 | 2002 | 76\.8 |
| jane schultz 1964 | F | 39 | 2003 | 82\.7 |
| jane schultz 1964 | F | 40 | 2004 | 87\.9 |
| jane schultz 1964 | F | 41 | 2005 | 91\.5 |
| jane schultz 1964 | F | 42 | 2006 | 88\.4 |
| jane smith 1952 | F | 47 | 1999 | 90\.6 |
| jane smith 1952 | F | 49 | 2001 | 97\.9 |
What is the meaning of a case here? It is tempting to think that a case is a person. After all, it is people who run road races. But notice that individuals appear more than once: Jane Poole ran each year from 2003 to 2007\. (Her times improved consistently as she got older!) Jane Schultz ran in the races from 1999 to 2006, missing only the year 2000 race. This suggests that the case is a runner in one year’s race.
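To see this in code, one could count how many rows each runner contributes. The sketch below assumes the results of Table [6\.5](ch-dataII.html#tab:race-excerpt) are stored in a data frame called `race_results`; that name is a placeholder.
```
# Placeholder name: race_results holds the data shown in Table 6.5.
# Runners who appear in more than one year contribute more than one row.
race_results %>%
  count(name.yob, sort = TRUE)
```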
#### 6\.1\.2\.4 Codebooks
Data tables do not necessarily display all the variables needed to figure out what makes each row unique.
For such information, you sometimes need to look at the documentation of how the data were collected and what the variables mean.
The codebook is a document—separate from the data table—that describes various aspects of how the data were collected, what the variables mean and what the different levels of categorical variables refer to.
The word [*codebook*](https://en.wikipedia.org/w/index.php?search=codebook) comes from the days when data were encoded for the computer in ways that made them hard for a human to read.
A codebook should include information about how the data were collected and what constitutes a case.
Figure [6\.3](ch-dataII.html#fig:babynames-codebook) shows the codebook for the `HELPrct` data in the **mosaicData** package. In **R**, codebooks for data tables in packages are available from the `help()` function.
```
help(HELPrct)
```
Figure 6\.3: Part of the codebook for the `HELPrct` data table from the **mosaicData** package.
For the runners data in Table [6\.5](ch-dataII.html#tab:race-excerpt), a codebook should tell you that the `gun` variable records the time from when the start gun went off to when the runner crossed the finish line, and that the unit of measurement is *minutes*. It should also state what might be obvious: that `age` is the person’s age in years and `sex` has two levels, male and female, represented by `M` and `F`.
#### 6\.1\.2\.5 Multiple tables
It is often the case that creating a meaningful display of data involves combining data from different sources and about different kinds of things.
For instance, you might want your analysis of the runners’ performance data in Table [6\.5](ch-dataII.html#tab:race-excerpt) to include temperature and precipitation data for each year’s race.
Such weather data is likely contained in a table of daily weather measurements.
In many circumstances, there will be multiple tidy tables, each of which contains information relative to your analysis, but which has a different kind of thing as a case.
We saw in Chapter [5](ch-join.html#ch:join) how the `inner_join()` and `left_join()` functions can be used to combine multiple tables, and in Chapter [15](ch-sql.html#ch:sql) we will further develop skills for working with relational databases.
For now, keep in mind that being tidy is not about shoving everything into one table.
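As a sketch of such a combination, suppose the runner results are stored in `race_results` and the yearly weather summaries in `race_weather`, each with a `year` column; both table names are placeholders for illustration.
```
# Placeholder tables: attach each year's weather to every result from that year
race_results %>%
  left_join(race_weather, by = "year")
```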
6\.2 Reshaping data
-------------------
Each row of a tidy data table is an individual case. It is often useful to re\-organize the same data in such a way that a case has a different meaning. This can make it easier to perform wrangling tasks such as comparisons, joins, and the inclusion of new data.
Consider the format of `BP_wide` shown in Table [6\.6](ch-dataII.html#tab:wide-example), in which each case is a research study subject and there are separate variables for the measurement of [*systolic blood pressure*](https://en.wikipedia.org/w/index.php?search=systolic%20blood%20pressure) (SBP) before and after exposure to a stressful environment.
Exactly the same data can be presented in the format of the `BP_narrow` data table (Table [6\.7](ch-dataII.html#tab:narrow-example)), where the case is an individual occasion for blood pressure measurement.
Table 6\.6: A blood pressure data table in a wide format.
| subject | before | after |
| --- | --- | --- |
| BHO | 160 | 115 |
| GWB | 120 | 135 |
| WJC | 105 | 145 |
Table 6\.7: A tidy blood pressure data table in a narrow format.
| subject | when | sbp |
| --- | --- | --- |
| BHO | before | 160 |
| GWB | before | 120 |
| WJC | before | 105 |
| BHO | after | 115 |
| GWB | after | 135 |
| WJC | after | 145 |
Each of the formats `BP_wide` and `BP_narrow` has its advantages and its disadvantages.
For example, it is easy to find the before\-and\-after change in blood pressure using `BP_wide`.
```
BP_wide %>%
mutate(change = after - before)
```
```
# A tibble: 3 × 4
subject before after change
<chr> <dbl> <dbl> <dbl>
1 BHO 160 115 -45
2 GWB 120 135 15
3 WJC 105 145 40
```
On the other hand, a narrow format is more flexible for including additional variables, for example the date of the measurement or the diastolic blood pressure as in Table [6\.8](ch-dataII.html#tab:narrow-augmented). The narrow format also makes it feasible to add in additional measurement occasions. For instance, Table [6\.8](ch-dataII.html#tab:narrow-augmented) shows several “after” measurements for subject “WJC.” (Such [*repeated measures*](https://en.wikipedia.org/w/index.php?search=repeated%20measures) are a common feature of scientific studies.)
A simple strategy allows you to get the benefits of either format: convert from wide to narrow or from narrow to wide as suits your purpose.
Table 6\.8: A data table extending the information in the previous two to include additional variables and repeated measurements. The narrow format facilitates including new cases or variables.
| subject | when | sbp | dbp | date |
| --- | --- | --- | --- | --- |
| BHO | before | 160 | 69 | 2007\-06\-19 |
| GWB | before | 120 | 54 | 1998\-04\-21 |
| BHO | before | 155 | 65 | 2005\-11\-08 |
| WJC | after | 145 | 75 | 2002\-11\-15 |
| WJC | after | NA | 65 | 2010\-03\-26 |
| WJC | after | 130 | 60 | 2013\-09\-15 |
| GWB | after | 135 | NA | 2009\-05\-08 |
| WJC | before | 105 | 60 | 1990\-08\-17 |
| BHO | after | 115 | 78 | 2017\-06\-04 |
### 6\.2\.1 Data verbs for converting wide to narrow and *vice versa*
Transforming a data table from wide to narrow is the action of the `pivot_longer()` data verb: A wide data table is the input and a narrow data table is the output. The reverse task, transforming from narrow to wide, involves the data verb `pivot_wider()`. Both functions are implemented in the **tidyr** package.
### 6\.2\.2 Pivoting wider
The `pivot_wider()` function converts a data table from narrow to wide. Carrying out this operation involves specifying some information in the arguments to the function. The `values_from` argument is the name of the variable in the narrow format that is to be divided up into multiple variables in the resulting wide format. The `names_from` argument is the name of the variable in the narrow format that identifies for each case individually which column in the wide format will receive the value.
For instance, in the narrow form of `BP_narrow` (Table [6\.7](ch-dataII.html#tab:narrow-example)) the `values_from` variable is `sbp`. In the corresponding wide form, `BP_wide` (Table [6\.6](ch-dataII.html#tab:wide-example)), the information in `sbp` will be spread between two variables: `before` and `after`. The `names_from` variable in `BP_narrow` is `when`. Note that the different categorical levels in `when` specify which variable in `BP_wide` will be the destination for the `sbp` value of each case.
Only the `names_from` and `values_from` variables are involved in the transformation from narrow to wide. Other variables in the narrow table, such as `subject` in `BP_narrow`, are used to define the cases. Thus, to translate from `BP_narrow` to `BP_wide` we would write this code:
```
BP_narrow %>%
pivot_wider(names_from = when, values_from = sbp)
```
```
# A tibble: 3 × 3
subject before after
<chr> <dbl> <dbl>
1 BHO 160 115
2 GWB 120 135
3 WJC 105 145
```
### 6\.2\.3 Pivoting longer
Now consider how to transform `BP_wide` into `BP_narrow`.
The names of the variables to be gathered together, `before` and `after`, will become the categorical levels in the narrow form.
That is, they will make up the `names_to` variable in the narrow form.
The data analyst has to invent a name for this variable. There are all sorts of sensible possibilities, for instance `before_or_after`.
In gathering `BP_wide` into `BP_narrow`, we chose the concise variable name `when`.
Similarly, a name must be specified for the variable that is to hold the values in the variables being gathered.
There are many reasonable possibilities.
It is sensible to choose a name that reflects the kind of thing those values are, in this case systolic blood pressure.
So, `sbp` is a good choice.
Finally, we need to specify which variables are to be gathered.
For instance, it hardly makes sense to gather `subject` with the other variables; it will remain as a separate variable in the narrow result.
Values in `subject` will be repeated as necessary to give each case in the narrow format its own correct value of `subject`.
In summary, to convert `BP_wide` into `BP_narrow`, we make the following call to `pivot_longer()`.
```
BP_wide %>%
pivot_longer(-subject, names_to = "when", values_to = "sbp")
```
```
# A tibble: 6 × 3
subject when sbp
<chr> <chr> <dbl>
1 BHO before 160
2 BHO after 115
3 GWB before 120
4 GWB after 135
5 WJC before 105
6 WJC after 145
```
### 6\.2\.4 List\-columns
Consider the following simple summarization of the blood pressure data. Using the techniques developed in Section [4\.1\.4](ch-dataI.html#sec:summarize), we can compute the mean systolic blood pressure for each subject both before and after exposure.
```
BP_full %>%
group_by(subject, when) %>%
summarize(mean_sbp = mean(sbp, na.rm = TRUE))
```
```
# A tibble: 6 × 3
# Groups: subject [3]
subject when mean_sbp
<chr> <chr> <dbl>
1 BHO after 115
2 BHO before 158.
3 GWB after 135
4 GWB before 120
5 WJC after 138.
6 WJC before 105
```
But what if we want to do additional analysis on the blood pressure data? The individual observations are not retained in the summarized output. Can we create a summary of the data that still contains *all* of the observations?
One simplistic approach would be to use `paste()` with the `collapse` argument to condense the individual observations into a single character string.
```
BP_summary <- BP_full %>%
group_by(subject, when) %>%
summarize(
sbps = paste(sbp, collapse = ", "),
dbps = paste(dbp, collapse = ", ")
)
```
This can be useful for seeing the data, but you can’t do much computing on it, because the variables `sbps` and `dbps` are `character` vectors. As a result, trying to compute, say, the mean of the systolic blood pressures won’t work as you hope it might. Note that the means computed below are wrong.
```
BP_summary %>%
mutate(mean_sbp = mean(parse_number(sbps)))
```
```
# A tibble: 6 × 5
# Groups: subject [3]
subject when sbps dbps mean_sbp
<chr> <chr> <chr> <chr> <dbl>
1 BHO after 115 78 138.
2 BHO before 160, 155 69, 65 138.
3 GWB after 135 NA 128.
4 GWB before 120 54 128.
5 WJC after 145, NA, 130 75, 65, 60 125
6 WJC before 105 60 125
```
Additionally, you would have to write the code to do the summarization for every variable in your data set, which could get cumbersome.
Instead, the `nest()` function will collapse *all* of the ungrouped variables in a data frame into a `tibble` (a simple data frame).
This creates a new variable of type `list`, which by default has the name `data`. Each element of that list has the type `tibble`. Although you can’t see all of the data in the output printed here, it’s all in there. Variables in data frames that have type `list` are called [*list\-columns*](https://en.wikipedia.org/w/index.php?search=list-columns).
```
BP_nested <- BP_full %>%
group_by(subject, when) %>%
nest()
BP_nested
```
```
# A tibble: 6 × 3
# Groups: subject, when [6]
subject when data
<chr> <chr> <list>
1 BHO before <tibble [2 × 3]>
2 GWB before <tibble [1 × 3]>
3 WJC after <tibble [3 × 3]>
4 GWB after <tibble [1 × 3]>
5 WJC before <tibble [1 × 3]>
6 BHO after <tibble [1 × 3]>
```
This construction works because a data frame is just a list of vectors of the same length, and the type of those vectors is arbitrary. Thus, the `data` variable is a vector of type `list` that consists of `tibble`s. Note also that the dimensions of each tibble (items in the `data` list) can be different.
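To make this concrete, here is a minimal sketch (not part of the blood pressure example) showing that a list\-column can be built directly with `tibble()`, and that the elements of a list\-column may be vectors of different lengths.
```
library(tibble)
# A hand-built tibble with a list-column; the vectors in `values` have different lengths.
tibble(
  id = c("a", "b"),
  values = list(c(1, 2, 3), c(4, 5))
)
```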
The ability to collapse a long data frame into its nested form is particularly useful in the context of model fitting, which we illustrate in Chapter [11](ch-learningI.html#ch:learningI).
While every list\-column has the type `list`, the type of the data contained within that list can be anything. Thus, while the `data` variable contains a list of tibbles, we can extract only the systolic blood pressures, and put them in their own list\-column. It’s tempting to try to `pull()` the `sbp` variable out like this:
```
BP_nested %>%
mutate(sbp_list = pull(data, sbp))
```
```
Error: Problem with `mutate()` column `sbp_list`.
ℹ `sbp_list = pull(data, sbp)`.
x no applicable method for 'pull' applied to an object of class "list"
ℹ The error occurred in group 1: subject = "BHO", when = "after".
```
The problem is that `data` is not a `tibble`.
Rather, it’s a `list` of `tibble`s. To get around this, we need to use the `map()` function, which is described in Chapter [7](ch-iteration.html#ch:iteration).
For now, it’s enough to understand that we need to apply the `pull()` function to each item in the `data` list.
The `map()` function allows us to do just that, and further, it always returns a `list`, and thus creates a new list\-column.
```
BP_nested <- BP_nested %>%
mutate(sbp_list = map(data, pull, sbp))
BP_nested
```
```
# A tibble: 6 × 4
# Groups: subject, when [6]
subject when data sbp_list
<chr> <chr> <list> <list>
1 BHO before <tibble [2 × 3]> <dbl [2]>
2 GWB before <tibble [1 × 3]> <dbl [1]>
3 WJC after <tibble [3 × 3]> <dbl [3]>
4 GWB after <tibble [1 × 3]> <dbl [1]>
5 WJC before <tibble [1 × 3]> <dbl [1]>
6 BHO after <tibble [1 × 3]> <dbl [1]>
```
Again, note that `sbp_list` is a `list`, with each item in the list being a vector of type `double`.
These vectors need *not* have the same length!
We can verify this by isolating the `sbp_list` variable with the `pluck()` function.
```
BP_nested %>%
pluck("sbp_list")
```
```
[[1]]
[1] 160 155
[[2]]
[1] 120
[[3]]
[1] 145 NA 130
[[4]]
[1] 135
[[5]]
[1] 105
[[6]]
[1] 115
```
Because all of the systolic blood pressure readings are contained within this `list`, a further application of `map()` will allow us to compute the mean.
```
BP_nested <- BP_nested %>%
mutate(sbp_mean = map(sbp_list, mean, na.rm = TRUE))
BP_nested
```
```
# A tibble: 6 × 5
# Groups: subject, when [6]
subject when data sbp_list sbp_mean
<chr> <chr> <list> <list> <list>
1 BHO before <tibble [2 × 3]> <dbl [2]> <dbl [1]>
2 GWB before <tibble [1 × 3]> <dbl [1]> <dbl [1]>
3 WJC after <tibble [3 × 3]> <dbl [3]> <dbl [1]>
4 GWB after <tibble [1 × 3]> <dbl [1]> <dbl [1]>
5 WJC before <tibble [1 × 3]> <dbl [1]> <dbl [1]>
6 BHO after <tibble [1 × 3]> <dbl [1]> <dbl [1]>
```
`BP_nested` still has a nested structure. However, the column `sbp_mean` is a `list` of `double` vectors, each of which has a single element.
We can use `unnest()` to undo the nesting structure of that column. In this case, we retain the same 6 rows, each corresponding to one subject either before or after intervention.
```
BP_nested %>%
unnest(cols = c(sbp_mean))
```
```
# A tibble: 6 × 5
# Groups: subject, when [6]
subject when data sbp_list sbp_mean
<chr> <chr> <list> <list> <dbl>
1 BHO before <tibble [2 × 3]> <dbl [2]> 158.
2 GWB before <tibble [1 × 3]> <dbl [1]> 120
3 WJC after <tibble [3 × 3]> <dbl [3]> 138.
4 GWB after <tibble [1 × 3]> <dbl [1]> 135
5 WJC before <tibble [1 × 3]> <dbl [1]> 105
6 BHO after <tibble [1 × 3]> <dbl [1]> 115
```
This computation gives the correct mean blood pressure for each subject at each time point.
On the other hand, an application of `unnest()` to the `sbp_list` variable, which has more than one observation for each row, results in a data frame with one row for each observed subject on a specific date. This transforms the data back into the same unit of observation as `BP_full`.
```
BP_nested %>%
unnest(cols = c(sbp_list))
```
```
# A tibble: 9 × 5
# Groups: subject, when [6]
subject when data sbp_list sbp_mean
<chr> <chr> <list> <dbl> <list>
1 BHO before <tibble [2 × 3]> 160 <dbl [1]>
2 BHO before <tibble [2 × 3]> 155 <dbl [1]>
3 GWB before <tibble [1 × 3]> 120 <dbl [1]>
4 WJC after <tibble [3 × 3]> 145 <dbl [1]>
5 WJC after <tibble [3 × 3]> NA <dbl [1]>
6 WJC after <tibble [3 × 3]> 130 <dbl [1]>
7 GWB after <tibble [1 × 3]> 135 <dbl [1]>
8 WJC before <tibble [1 × 3]> 105 <dbl [1]>
9 BHO after <tibble [1 × 3]> 115 <dbl [1]>
```
We use `nest()` or `unnest()` in Chapters [11](ch-learningI.html#ch:learningI), [14](ch-vizIII.html#ch:vizIII), and [20](ch-netsci.html#ch:netsci).
### 6\.2\.5 Example: Gender\-neutral names
In “[A Boy Named Sue](https://en.wikipedia.org/wiki/A_Boy_Named_Sue)” country singer [Johnny Cash](https://en.wikipedia.org/w/index.php?search=Johnny%20Cash)
famously told the story of a boy toughened in life—eventually reaching gratitude—by being given a traditional girl’s name.
The conceit is of course the rarity of being a boy with the name `Sue`, and indeed, `Sue` is given to about 300 times as many girls as boys (at least being recorded in this manner: data entry errors may account for some of these names).
```
babynames %>%
filter(name == "Sue") %>%
group_by(name, sex) %>%
summarize(total = sum(n))
```
```
# A tibble: 2 × 3
# Groups: name [1]
name sex total
<chr> <chr> <int>
1 Sue F 144465
2 Sue M 519
```
On the other hand, some names that are predominantly given to girls are also commonly given to boys.
Although only 15% of people named `Robin` are male, it is easy to think of a few famous men with that name: the actor [Robin Williams](https://en.wikipedia.org/w/index.php?search=Robin%20Williams), the singer [Robin Gibb](https://en.wikipedia.org/w/index.php?search=Robin%20Gibb), and the basketball player [Robin Lopez](https://en.wikipedia.org/w/index.php?search=Robin%20Lopez) (not to mention [*Batman*](https://en.wikipedia.org/w/index.php?search=Batman)’s sidekick).
```
babynames %>%
filter(name == "Robin") %>%
group_by(name, sex) %>%
summarize(total = sum(n))
```
```
# A tibble: 2 × 3
# Groups: name [1]
name sex total
<chr> <chr> <int>
1 Robin F 289395
2 Robin M 44616
```
This computational paradigm (e.g., filtering) works well if you want to look at gender balance in one name at a time, but suppose you want to find the most gender\-neutral names from all 97,310 names in `babynames`?
For this, it would be useful to have the results in a wide format, like the one shown below.
```
babynames %>%
filter(name %in% c("Sue", "Robin", "Leslie")) %>%
group_by(name, sex) %>%
summarize(total = sum(n)) %>%
pivot_wider(
names_from = sex,
values_from = total
)
```
```
# A tibble: 3 × 3
# Groups: name [3]
name F M
<chr> <int> <int>
1 Leslie 266474 112689
2 Robin 289395 44616
3 Sue 144465 519
```
The `pivot_wider()` function can help us generate the wide format. Note that the `sex` variable is the `names_from` used in the conversion.
A fill of zero is appropriate here: For a name like `Aaban` or `Aadam`, where there are no females, the entry for `F` should be zero.
```
baby_wide <- babynames %>%
group_by(sex, name) %>%
summarize(total = sum(n)) %>%
pivot_wider(
names_from = sex,
values_from = total,
values_fill = 0
)
head(baby_wide, 3)
```
```
# A tibble: 3 × 3
name F M
<chr> <int> <int>
1 Aabha 35 0
2 Aabriella 32 0
3 Aada 5 0
```
One way to define “approximately the same” is to take the smaller of the ratios M/F and F/M. If females greatly outnumber males, then F/M will be large, but M/F will be small. If the sexes are about equal, then both ratios will be near one. The smaller will never be greater than one, so the most balanced names are those with the smaller of the ratios near one.
The code to identify
the most balanced gender\-neutral names out of the names with more than 50,000 babies of each sex is shown below.
Remember, a ratio of 1 means exactly balanced; a ratio of 0\.5 means two to one in favor of one sex; 0\.33 means three to one.
(The `pmin()` transformation function returns the smaller of the two arguments for each individual case.)
```
baby_wide %>%
filter(M > 50000, F > 50000) %>%
mutate(ratio = pmin(M / F, F / M) ) %>%
arrange(desc(ratio)) %>%
head(3)
```
```
# A tibble: 3 × 4
name F M ratio
<chr> <int> <int> <dbl>
1 Riley 100881 92789 0.920
2 Jackie 90604 78405 0.865
3 Casey 76020 110165 0.690
```
Riley has been the most gender\-balanced name, followed by Jackie. Where does your name fall on this list?
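As a hedged sketch of how you might answer that question yourself (the name `Jordan` below is just an arbitrary example, and we assume the **tidyverse** is loaded as elsewhere in this chapter), you can rank the balanced names and then filter for a name of interest.
```
# Sketch: rank the names that pass the 50,000 threshold by balance, then look one up.
baby_wide %>%
  filter(M > 50000, F > 50000) %>%
  mutate(ratio = pmin(M / F, F / M)) %>%
  arrange(desc(ratio)) %>%
  mutate(rank = row_number()) %>%
  filter(name == "Jordan")
```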
6\.3 Naming conventions
-----------------------
Like any language, **R** has some rules that you cannot break, but also many conventions that you can—but should not—break. There are a few simple rules that apply when creating a *name* for an object:
* The name cannot start with a digit. So you cannot assign the name `100NCHS` to a data frame, but `NCHS100` is fine. This rule is to make it easy for **R** to distinguish between object names and numbers. It also helps you avoid mistakes such as writing `2pi` when you mean `2*pi`.
* The name cannot contain any punctuation symbols other than `.` and `_`. So `?NCHS` or `N*Hanes` are not legitimate names. However, you can use `.` and `_` in a name.
For reasons that will be explained later, the use of `.` in function names has a specific meaning, but should otherwise be avoided. The use of `_` is preferred.
* The case of the letters in the name matters. So `NCHS`, `nchs`, `Nchs`, and `nChs`, etc., are all different names that only look similar to a human reader, not to **R**.
Do not use `.` in function names, to avoid conflicting with internal functions.
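As a small illustration of these rules (the object names below are made up), consider which of the following assignments **R** will accept:
```
NCHS100 <- 3       # legal: does not start with a digit
nchs_100 <- 3      # legal: underscores are fine, and this follows the preferred style
nchs.100 <- 3      # legal, but the use of `.` is discouraged
Nchs100 <- 4       # legal, and distinct from NCHS100 -- case matters
# 100NCHS <- 3     # illegal: names cannot start with a digit
```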
One of **R**’s strengths is its modularity—many people have contributed many packages that do many different things. However, this decentralized paradigm has resulted in many *different* people writing code using many *different* conventions. The resulting lack of uniformity can make code harder to read. We suggest adopting a style guide and sticking to it—we have attempted to do that in this book. However, the inescapable use of other people’s code results in inevitable deviations from that style.
In this book and in our teaching, we follow the [tidyverse style guide](https://style.tidyverse.org)—which is public, widely adopted, and influential—as closely as possible.
It provides guidance about how and why to adopt a particular style.
Other groups (e.g., Google) have adopted variants of this guide.
This means:
* We use underscores (`_`) in variable and function names. The use of periods (`.`) in function names is restricted to S3 methods.
* We use spaces liberally and prefer multiline, narrow blocks of code to single lines of wide code (although we occasionally relax this to save space on the printed page).
* We use [*snake\_case*](https://en.wikipedia.org/w/index.php?search=snake_case) for the names of things. This means that each “word” is lowercase, and there are no spaces, only underscores. (The **janitor** package provides a function called `clean_names()` that by default turns variable names into snake case; other styles are also supported.)
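For example, here is a brief sketch (the column names are invented) of how `janitor::clean_names()` converts variable names to snake\_case.
```
library(janitor)
messy <- tibble::tibble(`First Name` = "Ada", `Birth Year` = 1815)
clean_names(messy)
# the columns are now named first_name and birth_year
```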
The **styler** package can be used to reformat code into a format that implements the tidyverse style guide.
Faithfully adopting a consistent style for code can help to improve readability and reduce errors.
6\.4 Data intake
----------------
> “Every easy data format is alike. Every difficult data format is difficult in its own way.”
>
>
> —inspired by [Leo Tolstoy](https://en.wikipedia.org/w/index.php?search=Leo%20Tolstoy) and [Hadley Wickham](https://en.wikipedia.org/w/index.php?search=Hadley%20Wickham)
The tools that we develop in this book allow one to work with data in **R**. However, most data sets are not available in **R** to begin with—they are often stored in a different file format.
While **R** has sophisticated abilities for reading data in a variety of formats, it is not without limits.
For data that are not in a file, one common form of data intake is [*Web scraping*](https://en.wikipedia.org/w/index.php?search=Web%20scraping), in which data from the internet are processed as (structured) text and converted into data.
Such data often have errors that stem from blunders in data entry or from deficiencies in the way data are stored or coded.
Correcting such errors is called [*data cleaning*](https://en.wikipedia.org/w/index.php?search=data%20cleaning).
The native file formats for **R** are usually given the suffix `.rds` (for a single object, written with `saveRDS()`) or `.rda`/`.RData` (for one or more objects, written with `save()`).
Any single object in your **R** environment can be written to an `.rds` file using the `saveRDS()` command.
Using the `compress` argument will make these files smaller.
```
saveRDS(mtcars, file = "mtcars.rds", compress = TRUE)
```
This file format is usually an efficient means for storing data, but it is not the most portable.
To load a stored object into your **R** environment, use the `readRDS()` command.
```
mtcars <- readRDS("mtcars.rds")
```
Maintaining the provenance of data from beginning to the end of an analysis is an important part of a reproducible workflow. This can be facilitated by creating one Markdown file or notebook that undertakes the data wrangling and generates an analytic data set (using `saveRDS()`) that can be read (using `readRDS()`) into a second Markdown file.
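A minimal sketch of that two\-file workflow (the file name `analytic.rds` is our own choice, and we use `mtcars` only as a stand\-in data set) might look like this:
```
library(dplyr)
# In the data wrangling notebook: build the analytic data set and save it.
analytic <- mtcars %>%
  filter(cyl == 4)
saveRDS(analytic, file = "analytic.rds")

# In the analysis notebook: read the analytic data set back in.
analytic <- readRDS("analytic.rds")
```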
### 6\.4\.1 Data\-table friendly formats
Many formats for data are essentially equivalent to data tables.
When you come across data in a format that you don’t recognize, it is worth checking whether it is one of the data\-table–friendly formats.
Sometimes the [*filename extension*](https://en.wikipedia.org/w/index.php?search=filename%20extension) provides an indication.
Here are several, each with a brief description:
* **CSV**: a non\-proprietary comma\-separated text format that is widely used for data exchange between different software packages. [*CSV*](https://en.wikipedia.org/w/index.php?search=CSV)s are easy to understand, but are not compressed, and therefore can take up more space on disk than other formats.
* **Software\-package specific format**: some common examples include:
+ [*Octave*](https://en.wikipedia.org/w/index.php?search=Octave) (and through that, [*MATLAB*](https://en.wikipedia.org/w/index.php?search=MATLAB)): widely used in engineering and physics
+ [*Stata*](https://en.wikipedia.org/w/index.php?search=Stata): commonly used for economic research
+ [*SPSS*](https://en.wikipedia.org/w/index.php?search=SPSS): commonly used for social science research
+ [*Minitab*](https://en.wikipedia.org/w/index.php?search=Minitab): often used in business applications
+ [*SAS*](https://en.wikipedia.org/w/index.php?search=SAS): often used for large data sets
+ [*Epi*](https://en.wikipedia.org/w/index.php?search=Epi): used by the [*Centers for Disease Control*](https://en.wikipedia.org/w/index.php?search=Centers%20for%20Disease%20Control) (CDC) for health and epidemiology data
* **Relational databases**: the form that much of institutional, actively\-updated data are stored in. This includes business transaction records, government records, Web logs, and so on. (See Chapter [15](ch-sql.html#ch:sql) for a discussion of relational database management systems.)
* **Excel**: a set of proprietary spreadsheet formats heavily used in business. Watch out, though. Just because something is stored in an [*Excel format*](https://en.wikipedia.org/w/index.php?search=Excel%20format) doesn’t mean it is a data table. Excel is sometimes used as a kind of tablecloth for writing down data with no particular scheme in mind.
* **Web\-related**: For example:
+ [*HTML*](https://en.wikipedia.org/w/index.php?search=HTML) (hypertext markup language): `<table>` format
+ [*XML*](https://en.wikipedia.org/w/index.php?search=XML) (extensible markup language) format, a tree\-based document structure
+ [*JSON*](https://en.wikipedia.org/w/index.php?search=JSON) (JavaScript Object Notation) is a common data format that breaks the “rows\-and\-columns” paradigm (see Section [21\.2\.4\.2](ch-big.html#sec:nosql))
+ [*Google Sheets*](https://en.wikipedia.org/w/index.php?search=Google%20Sheets): published as HTML
+ [*application programming interface*](https://en.wikipedia.org/w/index.php?search=application%20programming%20interface) (API)
The procedure for reading data in one of these formats varies depending on the format.
For Excel or Google Sheets data, it is sometimes easiest to use the application software to export the data as a CSV file.
There are also **R** packages for reading directly from either (**readxl** and **googlesheets4**, respectively), which are useful if the spreadsheet is being updated frequently.
For the technical software package formats, the **haven** package provides useful reading and writing functions.
For relational databases, even if they are on a remote server, there are several useful **R** packages that allow you to connect to these databases directly, most notably **dbplyr** and **DBI**.
CSV and HTML `<table>` formats are frequently encountered sources for data scraping, and can be read by the **readr** and **rvest** packages, respectively.
The next subsections give a bit more detail about how to read them into **R**.
#### 6\.4\.1\.1 CSV (comma separated value) files
This text format can be read with a huge variety of software. It has a data table format, with the values of variables in each case separated by commas. Here is an example of the first several lines of a CSV file:
```
"year","sex","name","n","prop"
1880,"F","Mary",7065,0.07238359
1880,"F","Anna",2604,0.02667896
1880,"F","Emma",2003,0.02052149
1880,"F","Elizabeth",1939,0.01986579
1880,"F","Minnie",1746,0.01788843
1880,"F","Margaret",1578,0.0161672
```
The top row usually (but not always) contains the variable names. Quotation marks are often used at the start and end of character strings—these quotation marks are not part of the content of the string, but are useful if, say, you want to include a comma in the text of a field. CSV files are often named with the `.csv` suffix; it is also common for them to be named with `.txt`, `.dat`, or other things.
You will also see characters other than commas being used to delimit the fields: tabs and vertical bars (or pipes, i.e., `|`) are particularly common.
Be careful with date and time variables in CSV format: these can sometimes be formatted in inconsistent ways that make it more challenging to ingest.
Since reading from a CSV file is so common, several implementations are available.
The `read.csv()` function in base **R** is perhaps the most widely used, but the more recent `read_csv()` function in the **readr** package is noticeably faster for large CSVs.
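As a quick sketch (the inline data below are invented), `read_csv()` can also parse literal text, and its sibling `read_delim()` handles the other delimiters mentioned above, such as pipes.
```
library(readr)
# I() marks the string as literal data rather than a file path (readr 2.0 and later).
read_csv(I("name,n\nMary,7065\nAnna,2604"))
# read_delim() generalizes read_csv(); the delim argument sets the field separator.
read_delim(I("name|n\nMary|7065\nAnna|2604"), delim = "|")
```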
CSV files need not exist on your local hard drive.
For example, here is a way to access a `.csv` file over the internet using a URL ([*universal resource locator*](https://en.wikipedia.org/w/index.php?search=universal%20resource%20locator)).
```
mdsr_url <- "https://raw.githubusercontent.com/mdsr-book/mdsr/master/data-raw/"
houses <- mdsr_url %>%
paste0("houses-for-sale.csv") %>%
read_csv()
head(houses, 3)
```
```
# A tibble: 3 × 16
price lot_size waterfront age land_value construction air_cond fuel
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 132500 0.09 0 42 50000 0 0 3
2 181115 0.92 0 0 22300 0 0 2
3 109000 0.19 0 133 7300 0 0 2
# … with 8 more variables: heat <dbl>, sewer <dbl>, living_area <dbl>,
# pct_college <dbl>, bedrooms <dbl>, fireplaces <dbl>, bathrooms <dbl>,
# rooms <dbl>
```
Just as reading a data file from the internet uses a URL, reading a file on your computer uses a complete name, called a [*path*](https://en.wikipedia.org/w/index.php?search=path) to the file.
Although many people are used to using a mouse\-based selector to access their files, being specific about the path to your files is important to ensure the reproducibility of your code (see Appendix [D](ch-reproduce.html#ch:reproduce)).
#### 6\.4\.1\.2 HTML tables
Web pages are HTML documents, which are then translated by a browser to the formatted content that users see. HTML includes facilities for presenting tabular content. The HTML `<table>` markup is often the way human\-readable data is arranged.
Figure 6\.4: Part of a page on mile\-run world records from Wikipedia. Two separate data tables are visible. You can’t tell from this small part of the page, but there are many tables on the page. These two tables are the third and fourth in the page.
When you have the URL of a page containing one or more tables, it is sometimes easy to read them into **R** as data tables.
Since they are not CSVs, we can’t use `read_csv()`. Instead, we use functionality in the **rvest** package to ingest the HTML as a data structure in **R**.
Once you have the content of the Web page, you can translate any tables in the page from HTML to data table format.
In this brief example, we will investigate the progression of the world record time in the mile run, [as detailed on Wikipedia](http://en.wikipedia.org/wiki/Mile_run_world_record_progression).
This page (see Figure [6\.4](ch-dataII.html#fig:wiki-running)) contains several tables, each of which contains a list of new world records for a different class of athlete (e.g., men, women, amateur, professional, etc.).
```
library(rvest)
url <- "http://en.wikipedia.org/wiki/Mile_run_world_record_progression"
tables <- url %>%
read_html() %>%
html_nodes("table")
```
The result, `tables`, is not a data table. Instead, it is a `list` (see Appendix [B](ch-R.html#ch:R)) of the tables found in the Web page. Use `length()` to find how many items there are in the list of tables.
```
length(tables)
```
```
[1] 12
```
You can access any of those tables using the `pluck()` function from the **purrr** package, which extracts items from a `list`.
Unfortunately, as of this writing the `rvest::pluck()` function masks the more useful `purrr::pluck()` function, so we will be specific by using the double\-colon operator.
The first table is `pluck(tables, 1)`, the second table is `pluck(tables, 2)`, and so on.
The third table—which corresponds to amateur men up until 1862—is shown in Table [6\.9](ch-dataII.html#tab:wikipedia-table-three).
```
amateur <- tables %>%
purrr::pluck(3) %>%
html_table()
```
Table 6\.9: The third table embedded in the Wikipedia page on running records.
| Time | Athlete | Nationality | Date | Venue |
| --- | --- | --- | --- | --- |
| 4:52 | Cadet Marshall | United Kingdom | 2 September 1852 | Addiscome |
| 4:45 | Thomas Finch | United Kingdom | 3 November 1858 | Oxford |
| 4:45 | St. Vincent Hammick | United Kingdom | 15 November 1858 | Oxford |
| 4:40 | Gerald Surman | United Kingdom | 24 November 1859 | Oxford |
| 4:33 | George Farran | United Kingdom | 23 May 1862 | Dublin |
Likely of greater interest is the information in the fourth table, which corresponds to the current era of [*International Amateur Athletics Federation*](https://en.wikipedia.org/w/index.php?search=International%20Amateur%20Athletics%20Federation) world records. The first few rows of that table are shown in Table [6\.10](ch-dataII.html#tab:wikipedia-table-four). The last row of that table (not shown) contains the current world record of 3:43\.13, which was set by [Hicham El Guerrouj](https://en.wikipedia.org/w/index.php?search=Hicham%20El%20Guerrouj) of [*Morocco*](https://en.wikipedia.org/w/index.php?search=Morocco) in [*Rome*](https://en.wikipedia.org/w/index.php?search=Rome) on July 7th, 1999\.
```
records <- tables %>%
purrr::pluck(4) %>%
html_table() %>%
select(-Auto) # remove unwanted column
```
Table 6\.10: The fourth table embedded in the Wikipedia page on running records.
| Time | Athlete | Nationality | Date | Venue |
| --- | --- | --- | --- | --- |
| 4:14\.4 | John Paul Jones | United States | 31 May 1913\[6] | Allston, Mass. |
| 4:12\.6 | Norman Taber | United States | 16 July 1915\[6] | Allston, Mass. |
| 4:10\.4 | Paavo Nurmi | Finland | 23 August 1923\[6] | Stockholm |
| 4:09\.2 | Jules Ladoumègue | France | 4 October 1931\[6] | Paris |
| 4:07\.6 | Jack Lovelock | New Zealand | 15 July 1933\[6] | Princeton, N.J. |
| 4:06\.8 | Glenn Cunningham | United States | 16 June 1934\[6] | Princeton, N.J. |
### 6\.4\.2 APIs
An [*application programming interface*](https://en.wikipedia.org/w/index.php?search=application%20programming%20interface) (API) is a protocol for interacting with a computer program that you can’t control.
It is a set of agreed\-upon instructions for using a “[*black\-box*](https://en.wikipedia.org/w/index.php?search=black-box)”—not unlike the manual for a television’s remote control.
APIs provide access to massive troves of public data on the Web, from a vast array of different sources.
Not all APIs are the same, but by learning how to use them, you can dramatically increase your ability to pull data into **R** without having to manually “scrape” it.
If you want to obtain data from a public source, it is a good idea to check to see whether: a) the organization has a public API; and b) someone has already written an **R** package that interfaces with it.
These packages don’t provide the actual data—they simply provide a series of **R** functions that allow you to access the actual data.
The documentation for each package should explain how to use it to collect data from the original source.
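As a hedged illustration (the **jsonlite** package and the GitHub API endpoint below are our own example, not drawn from the original text), many APIs return JSON that can be pulled directly into **R**:
```
library(jsonlite)
# The GitHub API returns metadata about a repository as JSON.
repo <- fromJSON("https://api.github.com/repos/tidyverse/dplyr")
repo$stargazers_count
```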
### 6\.4\.3 Cleaning data
A person somewhat knowledgeable about running would have little trouble interpreting Tables [6\.9](ch-dataII.html#tab:wikipedia-table-three) and [6\.10](ch-dataII.html#tab:wikipedia-table-four) correctly.
The `Time` is in minutes and seconds. The `Date` gives the day on which the record was set. When the data table is read into **R**, both `Time` and `Date` are stored as character strings. Before they can be used, they have to be converted into a format that the computer can process like a date and time. Among other things, this requires dealing with the footnote (listed as `[6]`) at the end of the date information.
[*Data cleaning*](https://en.wikipedia.org/w/index.php?search=Data%20cleaning)
refers to taking the information contained in a variable and transforming it to a form in which that information can be used.
#### 6\.4\.3\.1 Recoding
Table [6\.11](ch-dataII.html#tab:house-systems) displays a few variables from the `houses` data table we downloaded earlier.
It describes 1,728 houses for sale in [*Saratoga, NY*](https://en.wikipedia.org/w/index.php?search=Saratoga,%20NY).[11](#fn11)
The full table includes additional variables such as `living_area`, `price`, `bedrooms`, and `bathrooms`.
The data on house systems such as `sewer_type` and `heat_type` have been stored as numbers, even though they are really categorical.
Table 6\.11: Four of the variables from the tables giving features of the Saratoga houses stored as integer codes. Each case is a different house.
| fuel | heat | sewer | construction |
| --- | --- | --- | --- |
| 3 | 4 | 2 | 0 |
| 2 | 3 | 2 | 0 |
| 2 | 3 | 3 | 0 |
| 2 | 2 | 2 | 0 |
| 2 | 2 | 3 | 1 |
There is nothing fundamentally wrong with using integers to encode, say, fuel type, though it may be confusing to interpret results. What is worse is that the numbers imply a meaningful order to the categories when there is none.
To translate the integers to a more informative coding, you first have to find out what the various codes mean. Often, this information comes from the codebook, but sometimes you will need to contact the person who collected the data.
Once you know the translation, you can use spreadsheet software (or the `tribble()` function) to enter them into a data table, like this one for the houses:
```
translations <- mdsr_url %>%
paste0("house_codes.csv") %>%
read_csv()
translations %>% head(5)
```
```
# A tibble: 5 × 3
code system_type meaning
<dbl> <chr> <chr>
1 0 new_const no
2 1 new_const yes
3 1 sewer_type none
4 2 sewer_type private
5 3 sewer_type public
```
The `translations` data table describes the codes in a format that makes it easy to add new code values as the need arises. The same information can also be presented in a wide format, as in Table [6\.12](ch-dataII.html#tab:code-vals).
```
codes <- translations %>%
pivot_wider(
names_from = system_type,
values_from = meaning,
values_fill = "invalid"
)
```
Table 6\.12: The Translations data table rendered in a wide format.
| code | new\_const | sewer\_type | central\_air | fuel\_type | heat\_type |
| --- | --- | --- | --- | --- | --- |
| 0 | no | invalid | no | invalid | invalid |
| 1 | yes | none | yes | invalid | invalid |
| 2 | invalid | private | invalid | gas | hot air |
| 3 | invalid | public | invalid | electric | hot water |
| 4 | invalid | invalid | invalid | oil | electric |
In `codes`, there is a column for each system type that translates the integer code to a meaningful term. In cases where the integer has no corresponding term, `invalid` has been entered. This provides a quick way to distinguish between incorrect entries and missing entries.
To carry out the translation, we join each variable, one at a time, to the data table of interest. Note how the `by` value changes for each variable:
```
houses <- houses %>%
left_join(
codes %>% select(code, fuel_type),
by = c(fuel = "code")
) %>%
left_join(
codes %>% select(code, heat_type),
by = c(heat = "code")
) %>%
left_join(
codes %>% select(code, sewer_type),
by = c(sewer = "code")
)
```
Table [6\.13](ch-dataII.html#tab:recode-houses) shows the re\-coded data. We can compare this to the previous display in Table [6\.11](ch-dataII.html#tab:house-systems).
Table 6\.13: The Saratoga houses data with re\-coded categorical variables.
| fuel\_type | heat\_type | sewer\_type |
| --- | --- | --- |
| electric | electric | private |
| gas | hot water | private |
| gas | hot water | public |
| gas | hot air | private |
| gas | hot air | public |
| gas | hot air | private |
#### 6\.4\.3\.2 From strings to numbers
You have seen two major types of variables: quantitative and categorical. You are used to using quoted character strings as the levels of categorical variables, and numbers for quantitative variables.
Often, you will encounter data tables that have variables whose meaning is numeric but whose representation is a character string. This can occur when one or more cases is given a non\-numeric value, e.g., *not available*.
The `parse_number()` function will translate character strings with numerical content into numbers.
The `parse_character()` function goes the other way.
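As a quick sketch (the strings below are invented), `parse_number()` strips out non\-numeric characters and turns values it cannot parse into `NA`.
```
library(readr)
parse_number(c("$1,200", "7.5%", "not available"))
# 1200, 7.5, and NA (with a parsing warning for the last value)
```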
For example, in the `ordway_birds` data, the `Month`, `Day`, and `Year` variables are all being stored as character vectors, even though their evident meaning is numeric.
```
ordway_birds %>%
select(Timestamp, Year, Month, Day) %>%
glimpse()
```
```
Rows: 15,829
Columns: 4
$ Timestamp <chr> "4/14/2010 13:20:56", "", "5/13/2010 16:00:30", "5/13/20…
$ Year <chr> "1972", "", "1972", "1972", "1972", "1972", "1972", "197…
$ Month <chr> "7", "", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7…
$ Day <chr> "16", "", "16", "16", "16", "16", "16", "16", "16", "16"…
```
We can convert the strings to numbers using `mutate()` and `parse_number()`. Note how the empty strings (i.e., `""`) in those fields are automatically converted into `NA`’s, since they cannot be converted into valid numbers.
```
library(readr)
ordway_birds <- ordway_birds %>%
mutate(
Month = parse_number(Month),
Year = parse_number(Year),
Day = parse_number(Day)
)
ordway_birds %>%
select(Timestamp, Year, Month, Day) %>%
glimpse()
```
```
Rows: 15,829
Columns: 4
$ Timestamp <chr> "4/14/2010 13:20:56", "", "5/13/2010 16:00:30", "5/13/20…
$ Year <dbl> 1972, NA, 1972, 1972, 1972, 1972, 1972, 1972, 1972, 1972…
$ Month <dbl> 7, NA, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7…
$ Day <dbl> 16, NA, 16, 16, 16, 16, 16, 16, 16, 16, 17, 18, 18, 18, …
```
#### 6\.4\.3\.3 Dates
Dates are often recorded as character strings (e.g., `29 October 2014`). Among other important properties, dates have a natural order.
When you plot values such as `16 December 2015` and `29 October 2016`, you expect the December date to come after the October date, even though this is not true alphabetically of the string itself.
When plotting a value that is numeric, you expect the axis to be marked with a few round numbers.
A plot from 0 to 100 might have ticks at 0, 20, 40, 60, 100\.
It is similar for dates.
When you are plotting dates within one month, you expect the day of the month to be shown on the axis.
If you are plotting a range of several years, it would be appropriate to show only the years on the axis.
When you are given dates stored as a character vector, it is usually necessary to convert them to a data type designed specifically for dates.
For instance, in the `ordway_birds` data, the `Timestamp` variable refers to the time the data were transcribed from the original lab notebook to the computer file.
This variable is currently stored as a `character` string, but we can translate it into a more usable date format using functions from the **lubridate** package.
These dates are written in a format showing `month/day/year hour:minute:second`. The `mdy_hms()` function from the **lubridate** package converts strings in this format to a date. Note that the data type of the `When` variable is now `dttm`.
```
library(lubridate)
birds <- ordway_birds %>%
mutate(When = mdy_hms(Timestamp)) %>%
select(Timestamp, Year, Month, Day, When, DataEntryPerson)
birds %>%
glimpse()
```
```
Rows: 15,829
Columns: 6
$ Timestamp <chr> "4/14/2010 13:20:56", "", "5/13/2010 16:00:30", "5…
$ Year <dbl> 1972, NA, 1972, 1972, 1972, 1972, 1972, 1972, 1972…
$ Month <dbl> 7, NA, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7…
$ Day <dbl> 16, NA, 16, 16, 16, 16, 16, 16, 16, 16, 17, 18, 18…
$ When <dttm> 2010-04-14 13:20:56, NA, 2010-05-13 16:00:30, 201…
$ DataEntryPerson <chr> "Jerald Dosch", "Caitlin Baker", "Caitlin Baker", …
```
With the `When` variable now recorded as a timestamp, we can create a sensible plot showing when each of the transcribers completed their work, as in Figure [6\.5](ch-dataII.html#fig:when-and-who2).
```
birds %>%
ggplot(aes(x = When, y = DataEntryPerson)) +
geom_point(alpha = 0.1, position = "jitter")
```
Figure 6\.5: The transcribers of the Ordway Birds from lab notebooks worked during different time intervals.
Many of the same operations that apply to numbers can be used on dates. For example, the range of dates that each transcriber worked can be calculated as a difference in times (i.e., an `interval()`), and shown in Table [6\.14](ch-dataII.html#tab:transcriber-dates). This makes it clear that Jolani worked on the project for nearly a year (329 days), while Abby’s first transcription was also her last.
```
bird_summary <- birds %>%
group_by(DataEntryPerson) %>%
summarize(
start = first(When),
finish = last(When)
) %>%
mutate(duration = interval(start, finish) / ddays(1))
```
Table 6\.14: Starting and ending dates for each transcriber involved in the Ordway Birds project.
| DataEntryPerson | start | finish | duration |
| --- | --- | --- | --- |
| Abby Colehour | 2011\-04\-23 15:50:24 | 2011\-04\-23 15:50:24 | 0\.000 |
| Brennan Panzarella | 2010\-09\-13 10:48:12 | 2011\-04\-10 21:58:56 | 209\.466 |
| Emily Merrill | 2010\-06\-08 09:10:01 | 2010\-06\-08 14:47:21 | 0\.234 |
| Jerald Dosch | 2010\-04\-14 13:20:56 | 2010\-04\-14 13:20:56 | 0\.000 |
| Jolani Daney | 2010\-06\-08 09:03:00 | 2011\-05\-03 10:12:59 | 329\.049 |
| Keith Bradley\-Hewitt | 2010\-09\-21 11:31:02 | 2011\-05\-06 17:36:38 | 227\.254 |
| Mary Catherine Muñiz | 2012\-02\-02 08:57:37 | 2012\-04\-30 14:06:27 | 88\.214 |
There are many similar **lubridate** functions for converting strings in different formats into dates, e.g., `ymd()`, `dmy()`, and so on. There are also functions like `hour()`, `yday()`,
etc. for extracting certain pieces of variables encoded as dates.
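Here is a brief sketch (the dates are arbitrary) of a few of these helpers:
```
library(lubridate)
d <- ymd("2014-10-29")
yday(d)                                # 302: day of the year
wday(d, label = TRUE)                  # Wed: day of the week
hour(mdy_hms("10/29/2014 14:05:00"))   # 14
```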
Internally, **R** uses several different classes to represent dates and times. For timestamps (also referred to as [*datetime*](https://en.wikipedia.org/w/index.php?search=datetime)s), these classes are `POSIXct` and `POSIXlt`.
For most purposes, you can treat these as being the same, but internally, they are stored differently.
A `POSIXct` object is stored as the number of seconds since the [*UNIX epoch*](https://en.wikipedia.org/w/index.php?search=UNIX%20epoch) (1970\-01\-01\), whereas a `POSIXlt` object is stored as a list of components (seconds, minutes, hours, day of the month, month, year, and so on).
```
now()
```
```
[1] "2021-07-28 14:13:07 EDT"
```
```
class(now())
```
```
[1] "POSIXct" "POSIXt"
```
```
class(as.POSIXlt(now()))
```
```
[1] "POSIXlt" "POSIXt"
```
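A small sketch of the internal representations just described:
```
library(lubridate)
as.numeric(now())   # seconds since the UNIX epoch (1970-01-01)
unclass(as.POSIXlt(now()))[c("year", "mon", "mday")]   # components stored separately
# (note that POSIXlt counts `year` from 1900 and `mon` from 0)
```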
For dates that do not include times, the `Date` class is most commonly used.
```
as.Date(now())
```
```
[1] "2021-07-28"
```
#### 6\.4\.3\.4 Factors or strings?
A [*factor*](https://en.wikipedia.org/w/index.php?search=factor) is a special data type used to represent categorical data.
Factors store categorical data efficiently and provide a means to put the categorical levels in whatever order is desired.
Unfortunately, factors also make cleaning data more confusing.
The problem is that it is easy to mistake a factor for a character string, and they have different properties when it comes to converting to a numeric or date form.
This is especially problematic when using the character processing techniques in Chapter [19](ch-text.html#ch:text).
By default, `readr::read_csv()` will interpret character strings as strings and not as factors.
Other functions, such as `read.csv()` prior to version 4\.0 of **R**, convert character strings into factors by default.
Cleaning such data often requires converting them back to a character format using `parse_character()`.
Failing to do this when needed can result in completely erroneous results without any warning.
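A minimal sketch (the values are made up) of the kind of silent error described above:
```
x <- factor(c("70", "80", "90"))
as.numeric(x)                # 1 2 3 -- the underlying level codes, not the values!
as.numeric(as.character(x))  # 70 80 90 -- convert to character first, then to numeric
```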
The **forcats** package was written to improve support for wrangling factor variables.
For this reason, the data tables used in this book have been stored with categorical or text data in character format. Be aware that data provided by other packages do not necessarily follow this convention. If you get mysterious results when working with such data, consider the possibility that you are working with factors rather than character vectors. Recall that `summary()`, `glimpse()`, and `str()` will all reveal the data types of each variable in a data frame.
It’s always a good idea to carefully check all variables and data wrangling operations to ensure
that correct values are generated.
Such data auditing and the use of automated data consistency checking can decrease the likelihood of data integrity errors.
### 6\.4\.4 Example: Japanese nuclear reactors
Dates and times are an important aspect of many analyses.
In the example below, the vector `example` contains human\-readable datetimes stored as `character` by **R**.
The `ymd_hms()` function from **lubridate** will convert this into `POSIXct`—a datetime format.
This makes it possible for **R** to do date arithmetic.
```
library(lubridate)
example <- c("2021-04-29 06:00:00", "2021-12-31 12:00:00")
str(example)
```
```
chr [1:2] "2021-04-29 06:00:00" "2021-12-31 12:00:00"
```
```
converted <- ymd_hms(example)
str(converted)
```
```
POSIXct[1:2], format: "2021-04-29 06:00:00" "2021-12-31 12:00:00"
```
```
converted
```
```
[1] "2021-04-29 06:00:00 UTC" "2021-12-31 12:00:00 UTC"
```
```
converted[2] - converted[1]
```
```
Time difference of 246 days
```
We will use this functionality to analyze data on nuclear reactors in Japan, which we scrape from Wikipedia’s list of nuclear reactors.
Figure [6\.6](ch-dataII.html#fig:wikijapan) displays the first part of that table as of the summer of 2016\.
Figure 6\.6: Screenshot of Wikipedia’s list of Japanese nuclear reactors.
```
tables <- "http://en.wikipedia.org/wiki/List_of_nuclear_reactors" %>%
read_html() %>%
html_nodes(css = "table")
idx <- tables %>%
html_text() %>%
str_detect("Fukushima Daiichi") %>%
which()
reactors <- tables %>%
purrr::pluck(idx) %>%
html_table(fill = TRUE) %>%
janitor::clean_names() %>%
rename(
reactor_type = reactor,
reactor_model = reactor_2,
capacity_net = capacity_in_mw,
capacity_gross = capacity_in_mw_2
) %>%
tail(-1)
glimpse(reactors)
```
```
Rows: 68
Columns: 10
$ name <chr> "Fugen", "Fukushima Daiichi", "Fukushima Daii…
$ unit_no <chr> "1", "1", "2", "3", "4", "5", "6", "1", "2", …
$ reactor_type <chr> "HWLWR", "BWR", "BWR", "BWR", "BWR", "BWR", "…
$ reactor_model <chr> "ATR", "BWR-3", "BWR-4", "BWR-4", "BWR-4", "B…
$ status <chr> "Shut down", "Inoperable", "Inoperable", "Ino…
$ capacity_net <chr> "148", "439", "760", "760", "760", "760", "10…
$ capacity_gross <chr> "165", "460", "784", "784", "784", "784", "11…
$ construction_start <chr> "10 May 1972", "25 July 1967", "9 June 1969",…
$ commercial_operation <chr> "20 March 1979", "26 March 1971", "18 July 19…
$ closure <chr> "29 March 2003", "19 May 2011", "19 May 2011"…
```
We see that among the first entries are the ill\-fated [*Fukushima Daiichi*](https://en.wikipedia.org/w/index.php?search=Fukushima%20Daiichi) reactors. The
`mutate()` function can be used in conjunction with the `dmy()` function from the **lubridate** package to wrangle these data into a better form.
```
reactors <- reactors %>%
mutate(
plant_status = ifelse(
str_detect(status, "Shut down"),
"Shut down", "Not formally shut down"
),
capacity_net = parse_number(capacity_net),
construct_date = dmy(construction_start),
operation_date = dmy(commercial_operation),
closure_date = dmy(closure)
)
glimpse(reactors)
```
```
Rows: 68
Columns: 14
$ name <chr> "Fugen", "Fukushima Daiichi", "Fukushima Daii…
$ unit_no <chr> "1", "1", "2", "3", "4", "5", "6", "1", "2", …
$ reactor_type <chr> "HWLWR", "BWR", "BWR", "BWR", "BWR", "BWR", "…
$ reactor_model <chr> "ATR", "BWR-3", "BWR-4", "BWR-4", "BWR-4", "B…
$ status <chr> "Shut down", "Inoperable", "Inoperable", "Ino…
$ capacity_net <dbl> 148, 439, 760, 760, 760, 760, 1067, NA, 1067,…
$ capacity_gross <chr> "165", "460", "784", "784", "784", "784", "11…
$ construction_start <chr> "10 May 1972", "25 July 1967", "9 June 1969",…
$ commercial_operation <chr> "20 March 1979", "26 March 1971", "18 July 19…
$ closure <chr> "29 March 2003", "19 May 2011", "19 May 2011"…
$ plant_status <chr> "Shut down", "Not formally shut down", "Not f…
$ construct_date <date> 1972-05-10, 1967-07-25, 1969-06-09, 1970-12-…
$ operation_date <date> 1979-03-20, 1971-03-26, 1974-07-18, 1976-03-…
$ closure_date <date> 2003-03-29, 2011-05-19, 2011-05-19, 2011-05-…
```
How have these plants evolved over time? It seems likely that as nuclear technology has progressed, plants should see an increase in capacity. A number of these reactors have been shut down in recent years. Are there changes in capacity related to the age of the plant? Figure [6\.7](ch-dataII.html#fig:japannukes) displays the data.
```
ggplot(
data = reactors,
aes(x = construct_date, y = capacity_net, color = plant_status
)
) +
geom_point() +
geom_smooth() +
xlab("Date of Plant Construction") +
ylab("Net Plant Capacity (MW)")
```
Figure 6\.7: Distribution of capacity of Japanese nuclear power plants over time.
Indeed, reactor capacity has tended to increase over time, while the older reactors were more likely
to have been formally shut down. While it would have been straightforward
to code these data by hand, automating data ingestion for larger and more
complex tables is more efficient and less error\-prone.
### 6\.4\.1 Data\-table friendly formats
Many formats for data are essentially equivalent to data tables.
When you come across data in a format that you don’t recognize, it is worth checking whether it is one of the data\-table–friendly formats.
Sometimes the [*filename extension*](https://en.wikipedia.org/w/index.php?search=filename%20extension) provides an indication.
Here are several, each with a brief description:
* **CSV**: a non\-proprietary comma\-separated text format that is widely used for data exchange between different software packages. [*CSV*](https://en.wikipedia.org/w/index.php?search=CSV)s are easy to understand, but are not compressed, and therefore can take up more space on disk than other formats.
* **Software\-package specific format**: some common examples include:
+ [*Octave*](https://en.wikipedia.org/w/index.php?search=Octave) (and through that, [*MATLAB*](https://en.wikipedia.org/w/index.php?search=MATLAB)): widely used in engineering and physics
+ [*Stata*](https://en.wikipedia.org/w/index.php?search=Stata): commonly used for economic research
+ [*SPSS*](https://en.wikipedia.org/w/index.php?search=SPSS): commonly used for social science research
+ [*Minitab*](https://en.wikipedia.org/w/index.php?search=Minitab): often used in business applications
+ [*SAS*](https://en.wikipedia.org/w/index.php?search=SAS): often used for large data sets
+ [*Epi*](https://en.wikipedia.org/w/index.php?search=Epi): used by the [*Centers for Disease Control*](https://en.wikipedia.org/w/index.php?search=Centers%20for%20Disease%20Control) (CDC) for health and epidemiology data
* **Relational databases**: the form that much of institutional, actively\-updated data are stored in. This includes business transaction records, government records, Web logs, and so on. (See Chapter [15](ch-sql.html#ch:sql) for a discussion of relational database management systems.)
* **Excel**: a set of proprietary spreadsheet formats heavily used in business. Watch out, though. Just because something is stored in an [*Excel format*](https://en.wikipedia.org/w/index.php?search=Excel%20format) doesn’t mean it is a data table. Excel is sometimes used as a kind of tablecloth for writing down data with no particular scheme in mind.
* **Web\-related**: For example:
+ [*HTML*](https://en.wikipedia.org/w/index.php?search=HTML) (hypertext markup language): `<table>` format
+ [*XML*](https://en.wikipedia.org/w/index.php?search=XML) (extensible markup language) format, a tree\-based document structure
+ [*JSON*](https://en.wikipedia.org/w/index.php?search=JSON) (JavaScript Object Notation) is a common data format that breaks the “rows\-and\-columns” paradigm (see Section [21\.2\.4\.2](ch-big.html#sec:nosql))
+ [*Google Sheets*](https://en.wikipedia.org/w/index.php?search=Google%20Sheets): published as HTML
+ [*application programming interface*](https://en.wikipedia.org/w/index.php?search=application%20programming%20interface) (API)
The procedure for reading data in one of these formats varies depending on the format.
For Excel or Google Sheets data, it is sometimes easiest to use the application software to export the data as a CSV file.
There are also **R** packages for reading directly from either (**readxl** and **googlesheets4**, respectively), which are useful if the spreadsheet is being updated frequently.
For the technical software package formats, the **haven** package provides useful reading and writing functions.
For relational databases, even if they are on a remote server, there are several useful **R** packages that allow you to connect to these databases directly, most notably **dbplyr** and **DBI**.
CSV and HTML `<table>` formats are frequently encountered sources for data scraping, and can be read by the **readr** and **rvest** packages, respectively.
The next subsections give a bit more detail about how to read them into **R**.
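For the software-package–specific formats, a minimal sketch of reading a few of them with **readxl** and **haven** might look like the following; the file names are hypothetical placeholders rather than files referenced elsewhere in this book.
```
library(readxl)   # Excel files
library(haven)    # SAS, SPSS, and Stata files

# Hypothetical file names, shown only to illustrate the reading functions
survey <- read_excel("survey-responses.xlsx", sheet = 1)
nhanes <- read_xpt("nhanes-demographics.xpt")   # SAS transport file
gss <- read_sav("gss-extract.sav")              # SPSS file
panel <- read_dta("household-panel.dta")        # Stata file
```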
#### 6\.4\.1\.1 CSV (comma separated value) files
This text format can be read with a huge variety of software. It has a data table format, with the values of variables in each case separated by commas. Here is an example of the first several lines of a CSV file:
```
"year","sex","name","n","prop"
1880,"F","Mary",7065,0.07238359
1880,"F","Anna",2604,0.02667896
1880,"F","Emma",2003,0.02052149
1880,"F","Elizabeth",1939,0.01986579
1880,"F","Minnie",1746,0.01788843
1880,"F","Margaret",1578,0.0161672
```
The top row usually (but not always) contains the variable names. Quotation marks are often used at the start and end of character strings—these quotation marks are not part of the content of the string, but are useful if, say, you want to include a comma in the text of a field. CSV files are often named with the `.csv` suffix; it is also common for them to be named with `.txt`, `.dat`, or other things.
You will also see characters other than commas being used to delimit the fields: tabs and vertical bars (or pipes, i.e., `|`) are particularly common.
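For instance, a small sketch of reading a pipe-delimited file with `read_delim()` from **readr**; the toy file is written to a temporary location purely for illustration.
```
library(readr)

# Write a tiny pipe-delimited example to a temporary file (illustration only)
pipe_file <- tempfile(fileext = ".txt")
writeLines(c("id|name|score", "1|Ana|88", "2|Ben|91"), pipe_file)

# delim = "|" handles the pipe separator; read_tsv() covers tab-delimited files
read_delim(pipe_file, delim = "|")
```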
Be careful with date and time variables in CSV format: these can sometimes be formatted in inconsistent ways that make it more challenging to ingest.
Since reading from a CSV file is so common, several implementations are available.
The `read.csv()` function in the **base** package is perhaps the most widely used, but the more recent `read_csv()` function in the **readr** package is noticeably faster for large CSVs.
CSV files need not exist on your local hard drive.
For example, here is a way to access a `.csv` file over the internet using a URL ([*uniform resource locator*](https://en.wikipedia.org/w/index.php?search=universal%20resource%20locator)).
```
mdsr_url <- "https://raw.githubusercontent.com/mdsr-book/mdsr/master/data-raw/"
houses <- mdsr_url %>%
paste0("houses-for-sale.csv") %>%
read_csv()
head(houses, 3)
```
```
# A tibble: 3 × 16
price lot_size waterfront age land_value construction air_cond fuel
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 132500 0.09 0 42 50000 0 0 3
2 181115 0.92 0 0 22300 0 0 2
3 109000 0.19 0 133 7300 0 0 2
# … with 8 more variables: heat <dbl>, sewer <dbl>, living_area <dbl>,
# pct_college <dbl>, bedrooms <dbl>, fireplaces <dbl>, bathrooms <dbl>,
# rooms <dbl>
```
Just as reading a data file from the internet uses a URL, reading a file on your computer uses a complete name, called a [*path*](https://en.wikipedia.org/w/index.php?search=path) to the file.
Although many people are used to using a mouse\-based selector to access their files, being specific about the path to your files is important to ensure the reproducibility of your code (see Appendix [D](ch-reproduce.html#ch:reproduce)).
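For example, a sketch of reading the same kind of file from disk; the relative path shown here assumes a hypothetical project layout with a `data/` folder.
```
library(readr)

# A relative path within a project (hypothetical location on disk)
local_houses <- read_csv("data/houses-for-sale.csv")

# file.path() assembles the same path in an operating-system-independent way
local_houses <- read_csv(file.path("data", "houses-for-sale.csv"))
```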
#### 6\.4\.1\.2 HTML tables
Web pages are HTML documents, which are then translated by a browser to the formatted content that users see. HTML includes facilities for presenting tabular content. The HTML `<table>` markup is often the way human\-readable data is arranged.
Figure 6\.4: Part of a page on mile\-run world records from Wikipedia. Two separate data tables are visible. You can’t tell from this small part of the page, but there are many tables on the page. These two tables are the third and fourth in the page.
When you have the URL of a page containing one or more tables, it is sometimes easy to read them into **R** as data tables.
Since they are not CSVs, we can’t use `read_csv()`. Instead, we use functionality in the **rvest** package to ingest the HTML as a data structure in **R**.
Once you have the content of the Web page, you can translate any tables in the page from HTML to data table format.
In this brief example, we will investigate the progression of the world record time in the mile run, [as detailed on Wikipedia](http://en.wikipedia.org/wiki/Mile_run_world_record_progression).
This page (see Figure [6\.4](ch-dataII.html#fig:wiki-running)) contains several tables, each of which contains a list of new world records for a different class of athlete (e.g., men, women, amateur, professional, etc.).
```
library(rvest)
url <- "http://en.wikipedia.org/wiki/Mile_run_world_record_progression"
tables <- url %>%
read_html() %>%
html_nodes("table")
```
The result, `tables`, is not a data table. Instead, it is a `list` (see Appendix [B](ch-R.html#ch:R)) of the tables found in the Web page. Use `length()` to find how many items there are in the list of tables.
```
length(tables)
```
```
[1] 12
```
You can access any of those tables using the `pluck()` function from the **purrr** package, which extracts items from a `list`.
Unfortunately, as of this writing the `rvest::pluck()` function masks the more useful `purrr::pluck()` function, so we will be specific by using the double\-colon operator.
The first table is `pluck(tables, 1)`, the second table is `pluck(tables, 2)`, and so on.
The third table—which corresponds to amateur men up until 1862—is shown in Table [6\.9](ch-dataII.html#tab:wikipedia-table-three).
```
amateur <- tables %>%
purrr::pluck(3) %>%
html_table()
```
Table 6\.9: The third table embedded in the Wikipedia page on running records.
| Time | Athlete | Nationality | Date | Venue |
| --- | --- | --- | --- | --- |
| 4:52 | Cadet Marshall | United Kingdom | 2 September 1852 | Addiscome |
| 4:45 | Thomas Finch | United Kingdom | 3 November 1858 | Oxford |
| 4:45 | St. Vincent Hammick | United Kingdom | 15 November 1858 | Oxford |
| 4:40 | Gerald Surman | United Kingdom | 24 November 1859 | Oxford |
| 4:33 | George Farran | United Kingdom | 23 May 1862 | Dublin |
Likely of greater interest is the information in the fourth table, which corresponds to the current era of [*International Amateur Athletics Federation*](https://en.wikipedia.org/w/index.php?search=International%20Amateur%20Athletics%20Federation) world records. The first few rows of that table are shown in Table [6\.10](ch-dataII.html#tab:wikipedia-table-four). The last row of that table (not shown) contains the current world record of 3:43\.13, which was set by [Hicham El Guerrouj](https://en.wikipedia.org/w/index.php?search=Hicham%20El%20Guerrouj) of [*Morocco*](https://en.wikipedia.org/w/index.php?search=Morocco) in [*Rome*](https://en.wikipedia.org/w/index.php?search=Rome) on July 7th, 1999\.
```
records <- tables %>%
purrr::pluck(4) %>%
html_table() %>%
select(-Auto) # remove unwanted column
```
Table 6\.10: The fourth table embedded in the Wikipedia page on running records.
| Time | Athlete | Nationality | Date | Venue |
| --- | --- | --- | --- | --- |
| 4:14\.4 | John Paul Jones | United States | 31 May 1913\[6] | Allston, Mass. |
| 4:12\.6 | Norman Taber | United States | 16 July 1915\[6] | Allston, Mass. |
| 4:10\.4 | Paavo Nurmi | Finland | 23 August 1923\[6] | Stockholm |
| 4:09\.2 | Jules Ladoumègue | France | 4 October 1931\[6] | Paris |
| 4:07\.6 | Jack Lovelock | New Zealand | 15 July 1933\[6] | Princeton, N.J. |
| 4:06\.8 | Glenn Cunningham | United States | 16 June 1934\[6] | Princeton, N.J. |
### 6\.4\.2 APIs
An [*application programming interface*](https://en.wikipedia.org/w/index.php?search=application%20programming%20interface) (API) is a protocol for interacting with a computer program that you can’t control.
It is a set of agreed\-upon instructions for using a “[*black\-box*](https://en.wikipedia.org/w/index.php?search=black-box)”—not unlike the manual for a television’s remote control.
APIs provide access to massive troves of public data on the Web, from a vast array of different sources.
Not all APIs are the same, but by learning how to use them, you can dramatically increase your ability to pull data into **R** without having to manually “scrape” it.
If you want to obtain data from a public source, it is a good idea to check whether: a) the organization has a public API; and b) someone has already written an **R** package that provides an interface to it.
These packages don’t provide the actual data—they simply provide a series of **R** functions that retrieve the data from the original source.
The documentation for each package should explain how to use it to collect data from the original source.
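As a generic sketch (not tied to any particular provider), the **httr** package can be used to call a JSON endpoint directly; the URL and query parameters below are placeholders rather than a real API.
```
library(httr)

# Hypothetical endpoint -- replace with a real API's documented URL
resp <- GET(
  "https://api.example.com/v1/observations",
  query = list(station = "XYZ", limit = 10)
)
stop_for_status(resp)                 # fail loudly on an HTTP error
obs <- content(resp, as = "parsed")   # parse the JSON body into R lists
```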
### 6\.4\.3 Cleaning data
A person somewhat knowledgeable about running would have little trouble interpreting Tables [6\.9](ch-dataII.html#tab:wikipedia-table-three) and [6\.10](ch-dataII.html#tab:wikipedia-table-four) correctly.
The `Time` is in minutes and seconds. The `Date` gives the day on which the record was set. When the data table is read into **R**, both `Time` and `Date` are stored as character strings. Before they can be used, they have to be converted into formats that the computer can process as a date and a time. Among other things, this requires dealing with the footnote markers (such as `[6]`) at the end of the date information.
[*Data cleaning*](https://en.wikipedia.org/w/index.php?search=Data%20cleaning)
refers to taking the information contained in a variable and transforming it to a form in which that information can be used.
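As a concrete sketch of the cleaning just described, applied to the `records` table scraped above, one could strip the footnote markers from `Date` before parsing it and convert the `Time` strings to seconds. This assumes the tidyverse is loaded as elsewhere in the chapter, and the exact footnote markers may differ on the live Wikipedia page.
```
library(lubridate)
library(stringr)

records_clean <- records %>%
  mutate(
    # drop footnote markers such as "[6]" before parsing the date
    record_date = dmy(str_remove(Date, "\\[.*\\]")),
    # "4:14.4" -> 4 minutes 14.4 seconds -> 254.4 seconds
    seconds = period_to_seconds(ms(Time))
  )
```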
#### 6\.4\.3\.1 Recoding
Table [6\.11](ch-dataII.html#tab:house-systems) displays a few variables from the `houses` data table we downloaded earlier.
It describes 1,728 houses for sale in [*Saratoga, NY*](https://en.wikipedia.org/w/index.php?search=Saratoga,%20NY).[11](#fn11)
The full table includes additional variables such as `living_area`, `price`, `bedrooms`, and `bathrooms`.
The data on house systems such as `sewer_type` and `heat_type` have been stored as numbers, even though they are really categorical.
Table 6\.11: Four of the variables from the tables giving features of the Saratoga houses stored as integer codes. Each case is a different house.
| fuel | heat | sewer | construction |
| --- | --- | --- | --- |
| 3 | 4 | 2 | 0 |
| 2 | 3 | 2 | 0 |
| 2 | 3 | 3 | 0 |
| 2 | 2 | 2 | 0 |
| 2 | 2 | 3 | 1 |
There is nothing fundamentally wrong with using integers to encode, say, fuel type, though it may be confusing to interpret results. What is worse is that the numbers imply a meaningful order to the categories when there is none.
To translate the integers to a more informative coding, you first have to find out what the various codes mean. Often, this information comes from the codebook, but sometimes you will need to contact the person who collected the data.
Once you know the translation, you can use spreadsheet software (or the `tribble()` function) to enter them into a data table, like this one for the houses:
```
translations <- mdsr_url %>%
paste0("house_codes.csv") %>%
read_csv()
translations %>% head(5)
```
```
# A tibble: 5 × 3
code system_type meaning
<dbl> <chr> <chr>
1 0 new_const no
2 1 new_const yes
3 1 sewer_type none
4 2 sewer_type private
5 3 sewer_type public
```
The `translations` table describes the codes in a format that makes it easy to add new code values as the need arises. The same information can also be presented in a wide format, as in Table [6\.12](ch-dataII.html#tab:code-vals).
```
codes <- translations %>%
pivot_wider(
names_from = system_type,
values_from = meaning,
values_fill = "invalid"
)
```
Table 6\.12: The Translations data table rendered in a wide format.
| code | new\_const | sewer\_type | central\_air | fuel\_type | heat\_type |
| --- | --- | --- | --- | --- | --- |
| 0 | no | invalid | no | invalid | invalid |
| 1 | yes | none | yes | invalid | invalid |
| 2 | invalid | private | invalid | gas | hot air |
| 3 | invalid | public | invalid | electric | hot water |
| 4 | invalid | invalid | invalid | oil | electric |
In `codes`, there is a column for each system type that translates the integer code to a meaningful term. In cases where the integer has no corresponding term, `invalid` has been entered. This provides a quick way to distinguish between incorrect entries and missing entries.
To carry out the translation, we join each variable, one at a time, to the data table of interest. Note how the `by` value changes for each variable:
```
houses <- houses %>%
left_join(
codes %>% select(code, fuel_type),
by = c(fuel = "code")
) %>%
left_join(
codes %>% select(code, heat_type),
by = c(heat = "code")
) %>%
left_join(
codes %>% select(code, sewer_type),
by = c(sewer = "code")
)
```
Table [6\.13](ch-dataII.html#tab:recode-houses) shows the re\-coded data. We can compare this to the previous display in Table [6\.11](ch-dataII.html#tab:house-systems).
Table 6\.13: The Saratoga houses data with re\-coded categorical variables.
| fuel\_type | heat\_type | sewer\_type |
| --- | --- | --- |
| electric | electric | private |
| gas | hot water | private |
| gas | hot water | public |
| gas | hot air | private |
| gas | hot air | public |
| gas | hot air | private |
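When only a single variable needs recoding, an alternative sketch is to write the translation directly with `case_when()`, using the code values shown in Table [6\.12](ch-dataII.html#tab:code-vals); the `fuel_type_alt` column below is introduced only for illustration.
```
houses %>%
  mutate(
    # code values taken from Table 6.12
    fuel_type_alt = case_when(
      fuel == 2 ~ "gas",
      fuel == 3 ~ "electric",
      fuel == 4 ~ "oil",
      TRUE ~ "invalid"
    )
  ) %>%
  count(fuel_type_alt)
```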
#### 6\.4\.3\.2 From strings to numbers
You have seen two major types of variables: quantitative and categorical. You are used to using quoted character strings as the levels of categorical variables, and numbers for quantitative variables.
Often, you will encounter data tables that have variables whose meaning is numeric but whose representation is a character string. This can occur when one or more cases are given a non\-numeric value, e.g., *not available*.
The `parse_number()` function will translate character strings with numerical content into numbers.
The `parse_character()` function goes the other way.
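A quick illustration on a few made\-up strings:
```
library(readr)

# Non-numeric prefixes, suffixes, and grouping marks are dropped
parse_number(c("$1,900.45", "148 MW", "76%"))
# returns 1900.45, 148, and 76
```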
For example, in the `ordway_birds` data, the `Month`, `Day`, and `Year` variables are all being stored as character vectors, even though their evident meaning is numeric.
```
ordway_birds %>%
select(Timestamp, Year, Month, Day) %>%
glimpse()
```
```
Rows: 15,829
Columns: 4
$ Timestamp <chr> "4/14/2010 13:20:56", "", "5/13/2010 16:00:30", "5/13/20…
$ Year <chr> "1972", "", "1972", "1972", "1972", "1972", "1972", "197…
$ Month <chr> "7", "", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7…
$ Day <chr> "16", "", "16", "16", "16", "16", "16", "16", "16", "16"…
```
We can convert the strings to numbers using `mutate()` and `parse_number()`. Note how the empty strings (i.e., `""`) in those fields are automatically converted into `NA`’s, since they cannot be converted into valid numbers.
```
library(readr)
ordway_birds <- ordway_birds %>%
mutate(
Month = parse_number(Month),
Year = parse_number(Year),
Day = parse_number(Day)
)
ordway_birds %>%
select(Timestamp, Year, Month, Day) %>%
glimpse()
```
```
Rows: 15,829
Columns: 4
$ Timestamp <chr> "4/14/2010 13:20:56", "", "5/13/2010 16:00:30", "5/13/20…
$ Year <dbl> 1972, NA, 1972, 1972, 1972, 1972, 1972, 1972, 1972, 1972…
$ Month <dbl> 7, NA, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7…
$ Day <dbl> 16, NA, 16, 16, 16, 16, 16, 16, 16, 16, 17, 18, 18, 18, …
```
#### 6\.4\.3\.3 Dates
Dates are often recorded as character strings (e.g., `29 October 2014`). Among other important properties, dates have a natural order.
When you plot values such as `16 December 2015` and `29 October 2016`, you expect the December date to come after the October date, even though this is not true alphabetically of the string itself.
When plotting a value that is numeric, you expect the axis to be marked with a few round numbers.
A plot from 0 to 100 might have ticks at 0, 20, 40, 60, 100\.
It is similar for dates.
When you are plotting dates within one month, you expect the day of the month to be shown on the axis.
If you are plotting a range of several years, it would be appropriate to show only the years on the axis.
When you are given dates stored as a character vector, it is usually necessary to convert them to a data type designed specifically for dates.
For instance, in the `ordway_birds` data, the `Timestamp` variable refers to the time the data were transcribed from the original lab notebook to the computer file.
This variable is currently stored as a `character` string, but we can translate it into a more usable date format using functions from the **lubridate** package.
These dates are written in a format showing `month/day/year hour:minute:second`. The `mdy_hms()` function from the **lubridate** package converts strings in this format to a date. Note that the data type of the `When` variable is now `dttm`.
```
library(lubridate)
birds <- ordway_birds %>%
mutate(When = mdy_hms(Timestamp)) %>%
select(Timestamp, Year, Month, Day, When, DataEntryPerson)
birds %>%
glimpse()
```
```
Rows: 15,829
Columns: 6
$ Timestamp <chr> "4/14/2010 13:20:56", "", "5/13/2010 16:00:30", "5…
$ Year <dbl> 1972, NA, 1972, 1972, 1972, 1972, 1972, 1972, 1972…
$ Month <dbl> 7, NA, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7…
$ Day <dbl> 16, NA, 16, 16, 16, 16, 16, 16, 16, 16, 17, 18, 18…
$ When <dttm> 2010-04-14 13:20:56, NA, 2010-05-13 16:00:30, 201…
$ DataEntryPerson <chr> "Jerald Dosch", "Caitlin Baker", "Caitlin Baker", …
```
With the `When` variable now recorded as a timestamp, we can create a sensible plot showing when each of the transcribers completed their work, as in Figure [6\.5](ch-dataII.html#fig:when-and-who2).
```
birds %>%
ggplot(aes(x = When, y = DataEntryPerson)) +
geom_point(alpha = 0.1, position = "jitter")
```
Figure 6\.5: The transcribers of the Ordway Birds from lab notebooks worked during different time intervals.
Many of the same operations that apply to numbers can be used on dates. For example, the range of dates that each transcriber worked can be calculated as a difference in times (i.e., an `interval()`), and shown in Table [6\.14](ch-dataII.html#tab:transcriber-dates). This makes it clear that Jolani worked on the project for nearly a year (329 days), while Abby’s first transcription was also her last.
```
bird_summary <- birds %>%
group_by(DataEntryPerson) %>%
summarize(
start = first(When),
finish = last(When)
) %>%
mutate(duration = interval(start, finish) / ddays(1))
```
Table 6\.14: Starting and ending dates for each transcriber involved in the Ordway Birds project.
| DataEntryPerson | start | finish | duration |
| --- | --- | --- | --- |
| Abby Colehour | 2011\-04\-23 15:50:24 | 2011\-04\-23 15:50:24 | 0\.000 |
| Brennan Panzarella | 2010\-09\-13 10:48:12 | 2011\-04\-10 21:58:56 | 209\.466 |
| Emily Merrill | 2010\-06\-08 09:10:01 | 2010\-06\-08 14:47:21 | 0\.234 |
| Jerald Dosch | 2010\-04\-14 13:20:56 | 2010\-04\-14 13:20:56 | 0\.000 |
| Jolani Daney | 2010\-06\-08 09:03:00 | 2011\-05\-03 10:12:59 | 329\.049 |
| Keith Bradley\-Hewitt | 2010\-09\-21 11:31:02 | 2011\-05\-06 17:36:38 | 227\.254 |
| Mary Catherine Muñiz | 2012\-02\-02 08:57:37 | 2012\-04\-30 14:06:27 | 88\.214 |
There are many similar **lubridate** functions for converting strings in different formats into dates, e.g., `ymd()`, `dmy()`, and so on. There are also functions like `hour()`, `yday()`,
etc. for extracting certain pieces of variables encoded as dates.
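For example, a small sketch of extracting components from the `When` variable created above:
```
birds %>%
  mutate(
    hour_of_day = hour(When),
    day_of_year = yday(When),
    month_name = month(When, label = TRUE)
  ) %>%
  select(When, hour_of_day, day_of_year, month_name) %>%
  head(3)
```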
Internally, **R** uses several different classes to represent dates and times. For timestamps (also referred to as [*datetime*](https://en.wikipedia.org/w/index.php?search=datetime)s), these classes are `POSIXct` and `POSIXlt`.
For most purposes, you can treat these as being the same, but internally, they are stored differently.
A `POSIXct` object is stored as the number of seconds since the [*UNIX epoch*](https://en.wikipedia.org/w/index.php?search=UNIX%20epoch) (1970\-01\-01\), whereas a `POSIXlt` object is stored as a list of components (second, minute, hour, day, month, year, and so on).
```
now()
```
```
[1] "2021-07-28 14:13:07 EDT"
```
```
class(now())
```
```
[1] "POSIXct" "POSIXt"
```
```
class(as.POSIXlt(now()))
```
```
[1] "POSIXlt" "POSIXt"
```
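To peek at the two internal representations, one can strip the class attribute with `unclass()`; a small sketch using a fixed timestamp:
```
x <- as.POSIXct("2021-07-28 14:13:07", tz = "UTC")

unclass(x)                    # a single number: seconds since 1970-01-01
str(unclass(as.POSIXlt(x)))   # a list of components: sec, min, hour, mday, mon, year, ...
```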
For dates that do not include times, the `Date` class is most commonly used.
```
as.Date(now())
```
```
[1] "2021-07-28"
```
#### 6\.4\.3\.4 Factors or strings?
A [*factor*](https://en.wikipedia.org/w/index.php?search=factor) is a special data type used to represent categorical data.
Factors store categorical data efficiently and provide a means to put the categorical levels in whatever order is desired.
Unfortunately, factors also make cleaning data more confusing.
The problem is that it is easy to mistake a factor for a character string, and they have different properties when it comes to converting to a numeric or date form.
This is especially problematic when using the character processing techniques in Chapter [19](ch-text.html#ch:text).
By default, `readr::read_csv()` will interpret character strings as strings and not as factors.
Other functions, such as `read.csv()` prior to version 4\.0 of **R**, convert character strings into factors by default.
Cleaning such data often requires converting them back to a character format using `parse_character()`.
Failing to do this when needed can result in completely erroneous results without any warning.
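A small sketch of the classic pitfall, using base `as.character()` for the conversion back to character:
```
x <- factor(c("10", "20", "30"))

as.numeric(x)                 # 1 2 3 -- the internal level codes, not the values
as.numeric(as.character(x))   # 10 20 30 -- convert to character first
```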
The **forcats** package was written to improve support for wrangling factor variables.
For this reason, the data tables used in this book have been stored with categorical or text data in character format. Be aware that data provided by other packages do not necessarily follow this convention. If you get mysterious results when working with such data, consider the possibility that you are working with factors rather than character vectors. Recall that `summary()`, `glimpse()`, and `str()` will all reveal the data types of each variable in a data frame.
It’s always a good idea to carefully check all variables and data wrangling operations to ensure
that correct values are generated.
Such data auditing and the use of automated data consistency checking can decrease the likelihood of data integrity errors.
### 6\.4\.4 Example: Japanese nuclear reactors
Dates and times are an important aspect of many analyses.
In the example below, the vector `example` contains human\-readable datetimes stored as `character` by **R**.
The `ymd_hms()` function from **lubridate** will convert this into `POSIXct`—a datetime format.
This makes it possible for **R** to do date arithmetic.
```
library(lubridate)
example <- c("2021-04-29 06:00:00", "2021-12-31 12:00:00")
str(example)
```
```
chr [1:2] "2021-04-29 06:00:00" "2021-12-31 12:00:00"
```
```
converted <- ymd_hms(example)
str(converted)
```
```
POSIXct[1:2], format: "2021-04-29 06:00:00" "2021-12-31 12:00:00"
```
```
converted
```
```
[1] "2021-04-29 06:00:00 UTC" "2021-12-31 12:00:00 UTC"
```
```
converted[2] - converted[1]
```
```
Time difference of 246 days
```
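The units of the difference can also be requested explicitly via the `units` argument of `difftime()`:
```
# Request specific units rather than the default chosen by print()
difftime(converted[2], converted[1], units = "hours")
difftime(converted[2], converted[1], units = "mins")
```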
We will use this functionality to analyze data on nuclear reactors in Japan, scraped from Wikipedia’s list of nuclear reactors.
Figure [6\.6](ch-dataII.html#fig:wikijapan) displays the first part of this table as of the summer of 2016\.
Figure 6\.6: Screenshot of Wikipedia’s list of Japanese nuclear reactors.
```
tables <- "http://en.wikipedia.org/wiki/List_of_nuclear_reactors" %>%
read_html() %>%
html_nodes(css = "table")
idx <- tables %>%
html_text() %>%
str_detect("Fukushima Daiichi") %>%
which()
reactors <- tables %>%
purrr::pluck(idx) %>%
html_table(fill = TRUE) %>%
janitor::clean_names() %>%
rename(
reactor_type = reactor,
reactor_model = reactor_2,
capacity_net = capacity_in_mw,
capacity_gross = capacity_in_mw_2
) %>%
tail(-1)
glimpse(reactors)
```
```
Rows: 68
Columns: 10
$ name <chr> "Fugen", "Fukushima Daiichi", "Fukushima Daii…
$ unit_no <chr> "1", "1", "2", "3", "4", "5", "6", "1", "2", …
$ reactor_type <chr> "HWLWR", "BWR", "BWR", "BWR", "BWR", "BWR", "…
$ reactor_model <chr> "ATR", "BWR-3", "BWR-4", "BWR-4", "BWR-4", "B…
$ status <chr> "Shut down", "Inoperable", "Inoperable", "Ino…
$ capacity_net <chr> "148", "439", "760", "760", "760", "760", "10…
$ capacity_gross <chr> "165", "460", "784", "784", "784", "784", "11…
$ construction_start <chr> "10 May 1972", "25 July 1967", "9 June 1969",…
$ commercial_operation <chr> "20 March 1979", "26 March 1971", "18 July 19…
$ closure <chr> "29 March 2003", "19 May 2011", "19 May 2011"…
```
We see that among the first entries are the ill\-fated [*Fukushima Daiichi*](https://en.wikipedia.org/w/index.php?search=Fukushima%20Daiichi) reactors. The
`mutate()` function can be used in conjunction with the `dmy()` function from the **lubridate** package to wrangle these data into a better form.
```
reactors <- reactors %>%
mutate(
plant_status = ifelse(
str_detect(status, "Shut down"),
"Shut down", "Not formally shut down"
),
capacity_net = parse_number(capacity_net),
construct_date = dmy(construction_start),
operation_date = dmy(commercial_operation),
closure_date = dmy(closure)
)
glimpse(reactors)
```
```
Rows: 68
Columns: 14
$ name <chr> "Fugen", "Fukushima Daiichi", "Fukushima Daii…
$ unit_no <chr> "1", "1", "2", "3", "4", "5", "6", "1", "2", …
$ reactor_type <chr> "HWLWR", "BWR", "BWR", "BWR", "BWR", "BWR", "…
$ reactor_model <chr> "ATR", "BWR-3", "BWR-4", "BWR-4", "BWR-4", "B…
$ status <chr> "Shut down", "Inoperable", "Inoperable", "Ino…
$ capacity_net <dbl> 148, 439, 760, 760, 760, 760, 1067, NA, 1067,…
$ capacity_gross <chr> "165", "460", "784", "784", "784", "784", "11…
$ construction_start <chr> "10 May 1972", "25 July 1967", "9 June 1969",…
$ commercial_operation <chr> "20 March 1979", "26 March 1971", "18 July 19…
$ closure <chr> "29 March 2003", "19 May 2011", "19 May 2011"…
$ plant_status <chr> "Shut down", "Not formally shut down", "Not f…
$ construct_date <date> 1972-05-10, 1967-07-25, 1969-06-09, 1970-12-…
$ operation_date <date> 1979-03-20, 1971-03-26, 1974-07-18, 1976-03-…
$ closure_date <date> 2003-03-29, 2011-05-19, 2011-05-19, 2011-05-…
```
How have these plants evolved over time? It seems likely that as nuclear technology has progressed, plants should see an increase in capacity. A number of these reactors have been shut down in recent years. Are there changes in capacity related to the age of the plant? Figure [6\.7](ch-dataII.html#fig:japannukes) displays the data.
```
ggplot(
data = reactors,
aes(x = construct_date, y = capacity_net, color = plant_status)
) +
geom_point() +
geom_smooth() +
xlab("Date of Plant Construction") +
ylab("Net Plant Capacity (MW)")
```
Figure 6\.7: Distribution of capacity of Japanese nuclear power plants over time.
Indeed, reactor capacity has tended to increase over time, while the older reactors were more likely
to have been formally shut down. While it would have been straightforward
to code these data by hand, automating data ingestion for larger and more
complex tables is more efficient and less error\-prone.
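With the dates in a proper format, further date arithmetic is straightforward. For instance, here is a sketch (not part of the original analysis) of computing how long each closed reactor operated before its closure:
```
reactors %>%
  filter(!is.na(operation_date), !is.na(closure_date)) %>%
  # subtracting two Date columns yields a difference in days
  mutate(service_years = as.numeric(closure_date - operation_date) / 365.25) %>%
  select(name, unit_no, operation_date, closure_date, service_years) %>%
  arrange(desc(service_years)) %>%
  head(5)
```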
6\.5 Further resources
----------------------
The tidyverse style guide (<https://style.tidyverse.org>) merits a close read by all **R** users.
Broman and Woo (2018\) describe helpful tips for data organization in spreadsheets.
The **tidyr** package, and in particular Hadley Wickham (2020c), provides principles for tidy data.
The corresponding paper on tidy data (H. Wickham 2014\) builds upon notions of normal forms—common to database designers from computer science—to describe a process of thinking about how data should be stored and formatted.
There are many **R** packages that do nothing other than provide access to a public API from within **R**.
There are far too many API packages to list here, but a fair number of them are maintained by the [rOpenSci group](https://ropensci.org/packages/).
In fact, several of the packages referenced in this book, including the **twitteR** and **aRxiv** packages in Chapter [19](ch-text.html#ch:text), and the **plotly** package in Chapter [14](ch-vizIII.html#ch:vizIII), are interfaces to APIs.
The [CRAN task view on Web Technologies](https://cran.r-project.org/web/views/WebTechnologies.html) lists hundreds more packages, including **Rfacebook**, **instaR**, **FlickrAPI**, **tumblR**, and **Rlinkedin**.
The **RSocrata** package facilitates the use of [*Socrata*](https://en.wikipedia.org/w/index.php?search=Socrata), which is itself an API for querying—among other things—the [NYC Open Data](https://nycopendata.socrata.com/) platform.
6\.6 Exercises
--------------
**Problem 1 (Easy)**: In the `Marriage` data set included in `mosaic`, the `appdate`, `ceremonydate`, and `dob` variables are encoded as factors, even though they are dates. Use `lubridate` to convert those three columns into a date format.
```
library(mosaic)
Marriage %>%
select(appdate, ceremonydate, dob) %>%
glimpse(width = 50)
```
```
Rows: 98
Columns: 3
$ appdate <date> 1996-10-29, 1996-11-12, 19…
$ ceremonydate <date> 1996-11-09, 1996-11-12, 19…
$ dob <date> 2064-04-11, 2064-08-06, 20…
```
**Problem 2 (Easy)**: Consider the following pipeline:
```
library(tidyverse)
mtcars %>%
filter(cyl == 4) %>%
select(mpg, cyl)
```
```
mpg cyl
Datsun 710 22.8 4
Merc 240D 24.4 4
Merc 230 22.8 4
Fiat 128 32.4 4
Honda Civic 30.4 4
Toyota Corolla 33.9 4
Toyota Corona 21.5 4
Fiat X1-9 27.3 4
Porsche 914-2 26.0 4
Lotus Europa 30.4 4
Volvo 142E 21.4 4
```
Rewrite this in nested form on a single line. Which set of commands do you prefer and why?
**Problem 3 (Easy)**: Consider the values returned by the `as.numeric()` and `parse_number()` functions when applied to the following vectors. Describe the results and their implication.
```
x1 <- c("1900.45", "$1900.45", "1,900.45", "nearly $2000")
x2 <- as.factor(x1)
```
**Problem 4 (Medium)**: Find an interesting Wikipedia page with a table, scrape the data from it, and generate a figure that tells an interesting story. Include an interpretation of the figure.
**Problem 5 (Medium)**: Generate the code to convert the following data frame to wide format.
```
# A tibble: 4 × 6
grp sex meanL sdL meanR sdR
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 A F 0.225 0.106 0.34 0.0849
2 A M 0.47 0.325 0.57 0.325
3 B F 0.325 0.106 0.4 0.0707
4 B M 0.547 0.308 0.647 0.274
```
The result should look like the following display.
```
# A tibble: 2 × 9
grp F.meanL F.meanR F.sdL F.sdR M.meanL M.meanR M.sdL M.sdR
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 A 0.225 0.34 0.106 0.0849 0.47 0.57 0.325 0.325
2 B 0.325 0.4 0.106 0.0707 0.547 0.647 0.308 0.274
```
Hint: use `pivot_longer()` in conjunction with `pivot_wider()`.
**Problem 6 (Medium)**: The `HELPfull` data within the `mosaicData` package contains information about the Health Evaluation and Linkage to Primary Care (HELP) randomized trial in *tall* format.
1. Generate a table of the data for subjects (`ID`) 1, 2, and 3 that includes the `ID` variable, the `TIME` variable, and the `DRUGRISK` and `SEXRISK` variables (measures of drug and sex risk\-taking behaviors, respectively).
2. The HELP trial was designed to collect information at 0, 6, 12, 18, and 24 month intervals. At which timepoints were measurements available on the `*RISK` variables for subject 3?
3. Let’s restrict our attention to the baseline (`TIME = 0`) and 6\-month data. Use the `pivot_wider()` function from the `tidyr` package to create a table that looks like the following:
```
# A tibble: 3 × 5
ID DRUGRISK_0 DRUGRISK_6 SEXRISK_0 SEXRISK_6
<int> <int> <int> <int> <int>
1 1 0 0 4 1
2 2 0 0 7 0
3 3 20 13 2 4
```
4. Repeat this process using all subjects. What is the Pearson correlation between the baseline (`TIME = 0`) and 6\-month `DRUGRISK` scores? Repeat this for the `SEXRISK` scores. (Hint: use the `use = "complete.obs"` option from the `cor()` function.)
**Problem 7 (Medium)**: An analyst wants to calculate the pairwise differences between the Treatment and Control values for a small data set from a crossover trial (all subjects received both treatments) that
consists of the following observations.
```
ds1
```
```
# A tibble: 6 × 3
id group vals
<int> <chr> <dbl>
1 1 T 4
2 2 T 6
3 3 T 8
4 1 C 5
5 2 C 6
6 3 C 10
```
Then use the following code to create the new `diff` variable.
```
Treat <- filter(ds1, group == "T")
Control <- filter(ds1, group == "C")
all <- mutate(Treat, diff = Treat$vals - Control$vals)
all
```
Verify that this code works for this example and generates the correct values of \\(\-1\\), 0, and \\(\-2\\). Describe two problems that might arise if the data set is not sorted in a particular
order or if one of the observations is missing for one of the subjects. Provide an alternative approach to generate this
variable that is more robust (hint: use `pivot_wider`).
**Problem 8 (Medium)**: Write a function called `count_seasons` that, when given a teamID, will count the number of seasons the team played in the `Teams` data frame from the `Lahman` package.
**Problem 9 (Medium)**: Replicate the functionality of `make_babynames_dist()` from the `mdsr` package to wrangle the original tables from the `babynames` package.
**Problem 10 (Medium)**: Consider the number of home runs hit (`HR`) and home runs allowed (`HRA`) for the Chicago Cubs (\\(CHN\\)) baseball team. Reshape the `Teams` data from the `Lahman` package into “long” format and plot a time series conditioned on whether the HRs that involved the Cubs were hit by them or allowed by them.
**Problem 11 (Medium)**: Using the approach described in Section 6\.4\.1\.2 of the text, find another table in Wikipedia that can be scraped and visualized. Be sure to interpret your graphical display.
6\.7 Supplementary exercises
----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-dataII.html\#dataII\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-dataII.html#dataII-online-exercises)
**Problem 1 (Easy)**: What type of join operation is depicted below?
**Problem 2 (Easy)**: What type of `tidyr` operation is depicted below?
**Problem 3 (Easy)**: What type of `tidyr` operation is depicted below?
---
| Data Science |
mdsr-book.github.io | https://mdsr-book.github.io/mdsr2e/ch-iteration.html |
Chapter 7 Iteration
===================
Calculators free human beings from having to perform arithmetic computations *by hand*. Similarly, programming languages free humans from having to perform iterative computations by re\-running chunks of code, or worse, copying\-and\-pasting a chunk of code many times, while changing just one or two things in each chunk.
For example, in [*Major League Baseball*](https://en.wikipedia.org/w/index.php?search=Major%20League%20Baseball) there are 30 teams, and the game has been played for over 100 years. There are a number of natural questions that we might want to ask about *each team* (e.g., which player has accrued the most hits for that team?) or about each season (e.g., which seasons had the highest levels of scoring?).
If we can write a chunk of code that will answer these questions for a single team or a single season, then we should be able to generalize that chunk of code to work for *all* teams or seasons.
Furthermore, we should be able to do this without having to re\-type that chunk of code. In this section, we present a variety of techniques for automating these types of iterative operations.
7\.1 Vectorized operations
--------------------------
In every programming language that we can think of, there is a way to write a [*loop*](https://en.wikipedia.org/w/index.php?search=loop). For example, you can write a `for()` loop in **R** the same way you can with most programming languages.
Recall that the `Teams` data frame contains one row for each team in each MLB season.
```
library(tidyverse)
library(mdsr)
library(Lahman)
names(Teams)
```
```
[1] "yearID" "lgID" "teamID" "franchID"
[5] "divID" "Rank" "G" "Ghome"
[9] "W" "L" "DivWin" "WCWin"
[13] "LgWin" "WSWin" "R" "AB"
[17] "H" "X2B" "X3B" "HR"
[21] "BB" "SO" "SB" "CS"
[25] "HBP" "SF" "RA" "ER"
[29] "ERA" "CG" "SHO" "SV"
[33] "IPouts" "HA" "HRA" "BBA"
[37] "SOA" "E" "DP" "FP"
[41] "name" "park" "attendance" "BPF"
[45] "PPF" "teamIDBR" "teamIDlahman45" "teamIDretro"
```
What might not be immediately obvious is that columns 15 through 40 of this data frame contain numerical data about how each team performed in that season. To see this, you can execute the `str()` command to see the **str**ucture of the data frame, but we suppress that output here. For data frames, a similar alternative that is a little cleaner is `glimpse()`.
```
str(Teams)
glimpse(Teams)
```
Suppose you are interested in computing the averages of these 26 numeric columns.
You don’t want to have to type the names of each of them, or re\-type the `mean()` command 26 times.
Seasoned programmers would identify this as a situation in which a [*loop*](https://en.wikipedia.org/w/index.php?search=loop) is a natural and efficient solution.
A `for()` loop will iterate over the selected column indices.
```
averages <- NULL
for (i in 15:40) {
averages[i - 14] <- mean(Teams[, i], na.rm = TRUE)
}
names(averages) <- names(Teams)[15:40]
averages
```
```
R AB H X2B X3B HR BB SO
680.496 5126.260 1339.657 228.330 45.910 104.929 473.020 755.573
SB CS HBP SF RA ER ERA CG
109.786 46.873 45.411 44.233 680.495 572.404 3.836 48.011
SHO SV IPouts HA HRA BBA SOA E
9.584 24.267 4010.716 1339.434 104.929 473.212 755.050 181.732
DP FP
132.669 0.966
```
This certainly works. However, there are a number of problematic aspects of this code (e.g., the use of multiple [*magic numbers*](https://en.wikipedia.org/w/index.php?search=magic%20numbers) like 14, 15, and 40\). The use of a `for()` loop may not be ideal.
For problems of this type, it is almost always possible (and usually preferable) to iterate without explicitly defining a loop.
**R** programmers prefer to solve this type of problem by applying an operation to each element in a vector.
This often requires only one line of code, with no appeal to indices.
It is important to understand that the fundamental architecture of **R** is based on *vectors*. That is, in contrast to [*general\-purpose programming languages*](https://en.wikipedia.org/w/index.php?search=general-purpose%20programming%20languages) like *C\+\+* or [*Python*](https://en.wikipedia.org/w/index.php?search=Python) that distinguish between single items—like strings and integers—and arrays of those items, in **R** a “string” is just a character vector of length 1\. There is no special kind of atomic object. Thus, if you assign a single “string” to an object, **R** still stores it as a vector.
```
a <- "a string"
class(a)
```
```
[1] "character"
```
```
is.vector(a)
```
```
[1] TRUE
```
```
length(a)
```
```
[1] 1
```
As a consequence of this construction, **R** is highly optimized for vectorized operations (see Appendix [B](ch-R.html#ch:R) for more detailed information about **R** internals). Loops, by their nature, do not take advantage of this optimization. Thus, **R** provides several tools for performing loop\-like operations without actually writing a loop. This can be a challenging conceptual hurdle for those who are used to more general\-purpose programming languages.
Try to avoid writing `for()` loops, even when it seems like the easiest solution.
Many functions in **R** are [*vectorized*](https://en.wikipedia.org/w/index.php?search=vectorized). This means that they will perform an operation on every element of a vector by default. For example, many mathematical functions (e.g., `exp()`) work this way.
```
exp(1:3)
```
```
[1] 2.72 7.39 20.09
```
Note that vectorized functions like `exp()` take a vector as an input, and return a vector *of the same length* as an output.
This is importantly different behavior than so\-called [*summary functions*](https://en.wikipedia.org/w/index.php?search=summary%20functions), which take a vector as an input, and return *a single value*. Summary functions (e.g., `mean()`) are commonly useful within a call to `summarize()`. Note that when we call `mean()` on a vector, it only returns a single value no matter how many elements there are in the input vector.
```
mean(1:3)
```
```
[1] 2
```
Other functions in **R** are not vectorized. They may assume an input that is a vector of length one, and fail or exhibit strange behavior if given a longer vector. For example, `if()` throws a warning if given a vector of length more than one.
```
if (c(TRUE, FALSE)) {
cat("This is a great book!")
}
```
```
Warning in if (c(TRUE, FALSE)) {: the condition has length > 1 and only the
first element will be used
```
```
This is a great book!
```
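When you need an element\-wise conditional, the vectorized counterpart to reach for is `ifelse()` (or `dplyr::if_else()`), as in the minimal sketch below (our own illustration).

```
# A minimal sketch: ifelse() evaluates its condition element-wise and
# returns a vector of the same length, unlike if().
ifelse(c(TRUE, FALSE), "vectorized", "still vectorized")
```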
As you get more comfortable with **R**, you will develop intuition about which functions are vectorized. If a function is vectorized, you should make use of that fact and not iterate over it. The code below shows that computing the exponential of the first 100,000 integers by appealing to `exp()` as a vectorized function is much, much faster than using `map_dbl()` to iterate over the same vector. The results are identical.
```
x <- 1:1e5
bench::mark(
exp(x),
map_dbl(x, exp)
)
```
```
# A tibble: 2 × 6
expression min median `itr/sec` mem_alloc `gc/sec`
<bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl>
1 exp(x) 900µs 1.59ms 652. 1.53MB 21.2
2 map_dbl(x, exp) 135ms 135.08ms 7.40 788.62KB 29.6
```
Try to always make use of vectorized functions before iterating an operation.
7\.2 Using `across()` with **dplyr** functions
----------------------------------------------
The `mutate()` and `summarize()` verbs described in Chapter [4](ch-dataI.html#ch:dataI) can take advantage of an [*adverb*](https://en.wikipedia.org/w/index.php?search=adverb) called `across()` that applies operations programmatically. In the example above, we had to observe that columns 15 through 40 of the `Teams` data frame were numeric, and hard\-code those observations as magic numbers. Rather than relying on these observations to identify which variables are numeric—and therefore which variables it makes sense to compute the average of—we can use the `across()` function to do this work for us. The chunk below will compute the average values of only those variables in `Teams` that are numeric.
```
Teams %>%
summarize(across(where(is.numeric), mean, na.rm = TRUE))
```
```
yearID Rank G Ghome W L R AB H X2B X3B HR BB SO SB
1 1958 4.05 150 78 74.5 74.5 680 5126 1340 228 45.9 105 473 756 110
CS HBP SF RA ER ERA CG SHO SV IPouts HA HRA BBA SOA E DP
1 46.9 45.4 44.2 680 572 3.84 48 9.58 24.3 4011 1339 105 473 755 182 133
FP attendance BPF PPF
1 0.966 1375102 100 100
```
Note that this result included several variables (e.g., `yearID`, `attendance`) that were outside of the range we defined previously.
The `across()` adverb allows us to specify the set of variables that `summarize()` includes in different ways. In the example above, we used the [*predicate function*](https://en.wikipedia.org/w/index.php?search=predicate%20function) `is.numeric()` to identify the variables for which we wanted to compute the mean.
In the following example, we compute the mean of `yearID`, a series of variables from `R` (runs scored) to `SF` ([*sacrifice flies*](https://en.wikipedia.org/w/index.php?search=sacrifice%20flies)) that apply only to offensive players, and the batting [*park factor*](https://en.wikipedia.org/w/index.php?search=park%20factor) (`BPF`). Since we are specifying these columns without the use of a predicate function, we don’t need to use `where()`.
```
Teams %>%
summarize(across(c(yearID, R:SF, BPF), mean, na.rm = TRUE))
```
```
yearID R AB H X2B X3B HR BB SO SB CS HBP SF BPF
1 1958 680 5126 1340 228 45.9 105 473 756 110 46.9 45.4 44.2 100
```
The `across()` function behaves analogously with `mutate()`.
It provides an easy way to perform an operation on a set of variables without having to type or copy\-and\-paste the name of each variable.
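As a hedged sketch of the `mutate()` case (the per\-game rescaling here is our own illustration, not from the text), `across()` can transform a whole set of columns in one step:

```
# A sketch: rescale the offensive counting stats from R through SF to
# per-game rates, creating new columns named <stat>_per_game.
Teams %>%
  mutate(across(R:SF, ~ .x / G, .names = "{.col}_per_game")) %>%
  select(yearID, teamID, R_per_game, HR_per_game) %>%
  head(3)
```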
7\.3 The `map()` family of functions
------------------------------------
More generally, to apply a function to each item in a list or vector, or the columns of a data frame[12](#fn12), use `map()` (or one of its type\-specific variants). This is the main function from the **purrr** package. In this example, we calculate the mean of each of the statistics defined above, all at once. Compare this to the `for()` loop written above. Which is syntactically simpler? Which expresses the ideas behind the code more succinctly?
```
Teams %>%
select(15:40) %>%
map_dbl(mean, na.rm = TRUE)
```
```
R AB H X2B X3B HR BB SO
680.496 5126.260 1339.657 228.330 45.910 104.929 473.020 755.573
SB CS HBP SF RA ER ERA CG
109.786 46.873 45.411 44.233 680.495 572.404 3.836 48.011
SHO SV IPouts HA HRA BBA SOA E
9.584 24.267 4010.716 1339.434 104.929 473.212 755.050 181.732
DP FP
132.669 0.966
```
The first argument to `map_dbl()` is the thing that you want to do something to (in this case, a data frame). The second argument specifies the name of a function (the argument is named `.f`). Any further arguments are passed as options to `.f`. Thus, this command applies the `mean()` function to the 15th through the 40th columns of the `Teams` data frame, while removing any `NA`s that might be present in any of those columns. The use of the variant `map_dbl()` simply forces the output to be a vector of type `double`.[13](#fn13)
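As a further illustration of that argument\-passing behavior (a small sketch of our own), any named argument after `.f` is simply forwarded along:

```
# A minimal sketch: the trim argument is passed through to mean() for
# each element of the list.
map_dbl(list(1:10, 11:20), mean, trim = 0.1)
```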
Of course, we began by taking the subset of the columns that were all `numeric` values. If you tried to take the `mean()` of a non\-numeric vector, you would get a *warning* (and a value of `NA`).
```
Teams %>%
select(teamID) %>%
map_dbl(mean, na.rm = TRUE)
```
```
Warning in mean.default(.x[[i]], ...): argument is not numeric or logical:
returning NA
```
```
teamID
NA
```
If you can solve your problem using `across()` and/or `where()` as described in Section [7\.2](ch-iteration.html#sec:scoped), that is probably the cleanest solution. However, we will show that the `map()` family of functions provides a much more general set of capabilities.
7\.4 Iterating over a one\-dimensional vector
---------------------------------------------
### 7\.4\.1 Iterating a known function
Often you will want to apply a function to each element of a vector or list. For example, the baseball franchise now known as the [*Los Angeles Angels of Anaheim*](https://en.wikipedia.org/w/index.php?search=Los%20Angeles%20Angels%20of%20Anaheim) has gone by several names during its history.
```
angels <- Teams %>%
filter(franchID == "ANA") %>%
group_by(teamID, name) %>%
summarize(began = first(yearID), ended = last(yearID)) %>%
arrange(began)
angels
```
```
# A tibble: 4 × 4
# Groups: teamID [3]
teamID name began ended
<fct> <chr> <int> <int>
1 LAA Los Angeles Angels 1961 1964
2 CAL California Angels 1965 1996
3 ANA Anaheim Angels 1997 2004
4 LAA Los Angeles Angels of Anaheim 2005 2020
```
The franchise began as the [*Los Angeles Angels*](https://en.wikipedia.org/w/index.php?search=Los%20Angeles%20Angels) (`LAA`) in 1961, then became the [*California Angels*](https://en.wikipedia.org/w/index.php?search=California%20Angels) (`CAL`) in 1965, the [*Anaheim Angels*](https://en.wikipedia.org/w/index.php?search=Anaheim%20Angels) (`ANA`) in 1997, before taking their current name (`LAA` again) in 2005\. This situation is complicated by the fact that the `teamID` `LAA` was re\-used. This sort of inconsistent labeling is unfortunately common in many data sets.
Now, suppose we want to find the length, in number of characters, of each of those team names. We could check each one manually using the function `nchar()`:
```
angels_names <- angels %>%
pull(name)
nchar(angels_names[1])
```
```
[1] 18
```
```
nchar(angels_names[2])
```
```
[1] 17
```
```
nchar(angels_names[3])
```
```
[1] 14
```
```
nchar(angels_names[4])
```
```
[1] 29
```
But this would grow tiresome if we had many names. It would be simpler, more efficient, more elegant, and scalable to apply the function `nchar()` to each element of the vector `angels_names`. We can accomplish this using `map_int()`. `map_int()` is like `map()` or `map_dbl()`, but it always returns an `integer` vector.
```
map_int(angels_names, nchar)
```
```
[1] 18 17 14 29
```
The key difference between `map_int()` and `map()` is that the former will always return an `integer` vector, whereas the latter will always return a `list`. Recall that the main difference between `list`s and `data.frame`s is that the elements (columns) of a `data.frame` have to have the same length, whereas the elements of a list are arbitrary. So while `map()` is more versatile, we usually find `map_int()` or one of the other variants to be more convenient when appropriate.
It’s often helpful to use `map()` to figure out what the return type will be, and then switch to the appropriate type\-specific `map()` variant.
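For example, a quick sketch of that workflow using the names above:

```
# A sketch: map() returns a list; str() reveals integer elements,
# suggesting map_int() as the appropriate type-specific variant.
angels_names %>%
  map(nchar) %>%
  str()
```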
This section was designed to illustrate how `map()` can be used to iterate a function over a vector of values. However, the choice of the `nchar()` function was a bit silly, because `nchar()` is already vectorized. Thus, we can use it directly!
```
nchar(angels_names)
```
```
[1] 18 17 14 29
```
### 7\.4\.2 Iterating an arbitrary function
One of the most powerful uses of iteration is that you can apply *any* function, including a function that you have defined (see Appendix [C](ch-function.html#ch:function) for a discussion of how to write user\-defined functions). For example, suppose we want to display the top\-5 seasons in terms of wins for each of the Angels teams.
```
top5 <- function(data, team_name) {
data %>%
filter(name == team_name) %>%
select(teamID, yearID, W, L, name) %>%
arrange(desc(W)) %>%
head(n = 5)
}
```
We can now do this for each element of our vector with a single call to `map()`.
Note how we named the `data` argument to ensure that the `team_name` argument was the one that accepted the value over which we iterated.
```
angels_names %>%
map(top5, data = Teams)
```
```
[[1]]
teamID yearID W L name
1 LAA 1962 86 76 Los Angeles Angels
2 LAA 1964 82 80 Los Angeles Angels
3 LAA 1961 70 91 Los Angeles Angels
4 LAA 1963 70 91 Los Angeles Angels
[[2]]
teamID yearID W L name
1 CAL 1982 93 69 California Angels
2 CAL 1986 92 70 California Angels
3 CAL 1989 91 71 California Angels
4 CAL 1985 90 72 California Angels
5 CAL 1979 88 74 California Angels
[[3]]
teamID yearID W L name
1 ANA 2002 99 63 Anaheim Angels
2 ANA 2004 92 70 Anaheim Angels
3 ANA 1998 85 77 Anaheim Angels
4 ANA 1997 84 78 Anaheim Angels
5 ANA 2000 82 80 Anaheim Angels
[[4]]
teamID yearID W L name
1 LAA 2008 100 62 Los Angeles Angels of Anaheim
2 LAA 2014 98 64 Los Angeles Angels of Anaheim
3 LAA 2009 97 65 Los Angeles Angels of Anaheim
4 LAA 2005 95 67 Los Angeles Angels of Anaheim
5 LAA 2007 94 68 Los Angeles Angels of Anaheim
```
Alternatively, we can collect the results into a single data frame by using the `map_dfr()` function, which combines the data frames by row. Below, we do this and then compute the average number of wins in a top\-5 season for each Angels team name. Based on these data, the Los Angeles Angels of Anaheim has been the most successful incarnation of the franchise, when judged by average performance in the best five seasons.
```
angels_names %>%
map_dfr(top5, data = Teams) %>%
group_by(teamID, name) %>%
summarize(N = n(), mean_wins = mean(W)) %>%
arrange(desc(mean_wins))
```
```
# A tibble: 4 × 4
# Groups: teamID [3]
teamID name N mean_wins
<fct> <chr> <int> <dbl>
1 LAA Los Angeles Angels of Anaheim 5 96.8
2 CAL California Angels 5 90.8
3 ANA Anaheim Angels 5 88.4
4 LAA Los Angeles Angels 4 77
```
Once you’ve read Chapter [15](ch-sql.html#ch:sql), think about how you might do this operation in SQL. It is not that easy!
7\.5 Iteration over subgroups
-----------------------------
In Chapter [4](ch-dataI.html#ch:dataI), we introduced data *verbs* that could be chained to perform very powerful data wrangling operations. These functions—which come from the **dplyr** package—operate on data frames and return data frames. The `group_modify()` function (also from **dplyr**) allows you to apply an arbitrary function that returns a data frame to the *groups* of a data frame. That is, you will first define a grouping using the `group_by()` function, and then apply a function to each of those groups. Note that this is similar to `map_dfr()`, in that you are mapping a function that returns a data frame over a collection of values, and returning a data frame. But whereas the values used in `map_dfr()` are individual elements of a vector, in `group_modify()` they are groups defined on a data frame.
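Before the extended example that follows, here is a minimal sketch of the pattern (our own illustration): group a data frame, then apply a function that returns a small data frame to each group.

```
# A minimal sketch: count the number of rows (team-seasons) recorded for
# each league by applying a tibble-returning function to every group.
Teams %>%
  group_by(lgID) %>%
  group_modify(~ tibble(n_team_seasons = nrow(.x)))
```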
### 7\.5\.1 Example: Expected winning percentage
As noted in Section [4\.2](ch-dataI.html#sec:mets), one of the more enduring models in [*sabermetrics*](https://en.wikipedia.org/w/index.php?search=sabermetrics) is [Bill James](https://en.wikipedia.org/w/index.php?search=Bill%20James)’s formula for estimating a team’s expected [*winning percentage*](https://en.wikipedia.org/w/index.php?search=winning%20percentage), given knowledge only of the team’s runs scored and runs allowed to date (recall that the team that scores the most
runs wins a given game). This statistic is known—unfortunately—as [Pythagorean Winning Percentage](https://en.wikipedia.org/wiki/Pythagorean_expectation), even though it has nothing to do with Pythagoras. The formula is simple, but non\-linear:
\\\[
\\widehat{WPct} \= \\frac{RS^2}{RS^2 \+ RA^2} \= \\frac{1}{1 \+ (RA/RS)^2} \\,,
\\]
where \\(RS\\) and \\(RA\\) are the number of runs the team has scored and allowed, respectively. If we define \\(x \= RS/RA\\) to be the team’s *run ratio*, then this is a function of one variable having the form \\(f(x) \= \\frac{1}{1 \+ (1/x)^2}\\).
This model seems to fit quite well upon visual inspection—in Figure [7\.1](ch-iteration.html#fig:pythag) we show the data since 1954, along with a line representing the model. Indeed, this model has also been successful in other sports, albeit with wholly different exponents.
```
exp_wpct <- function(x) {
return(1/(1 + (1/x)^2))
}
TeamRuns <- Teams %>%
filter(yearID >= 1954) %>%
rename(RS = R) %>%
mutate(WPct = W / (W + L), run_ratio = RS/RA) %>%
select(yearID, teamID, lgID, WPct, run_ratio)
ggplot(data = TeamRuns, aes(x = run_ratio, y = WPct)) +
geom_vline(xintercept = 1, color = "darkgray", linetype = 2) +
geom_hline(yintercept = 0.5, color = "darkgray", linetype = 2) +
geom_point(alpha = 0.2) +
stat_function(fun = exp_wpct, size = 2, color = "blue") +
xlab("Ratio of Runs Scored to Runs Allowed") +
ylab("Winning Percentage")
```
Figure 7\.1: Fit for the Pythagorean Winning Percentage model for all teams since 1954\.
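As a quick sanity check on the formula (a worked example of our own), a team whose run ratio is 1\.1 is predicted to win a little under 55% of its games:

```
# A sketch: evaluate the expected winning percentage at a run ratio of 1.1,
# i.e., 1 / (1 + (1/1.1)^2), which is roughly 0.548.
exp_wpct(1.1)
```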
However, the exponent of 2 was posited by James. One can imagine having the exponent become a parameter \\(k\\), and trying to find the optimal fit. Indeed, researchers have found that in baseball, the optimal value of \\(k\\) is not 2, but something closer to 1\.85 (V. Wang 2006\). It is easy enough for us to find the optimal value using the `nls()` function. We specify the formula of the nonlinear model, the data used to fit the model, and a starting value for the search.
```
TeamRuns %>%
nls(
formula = WPct ~ 1/(1 + (1/run_ratio)^k),
start = list(k = 2)
) %>%
coef()
```
```
k
1.84
```
Furthermore, researchers investigating this model have found that the optimal value of the exponent varies based on the era during which the model is fit.
We can use the `group_modify()` function to do this for all decades in baseball history.
First, we must write a short function (see Appendix [C](ch-function.html#ch:function)) that will return a data frame containing the optimal exponent, and for good measure, the number of observations during that decade.
```
fit_k <- function(x) {
mod <- nls(
formula = WPct ~ 1/(1 + (1/run_ratio)^k),
data = x,
start = list(k = 2)
)
return(tibble(k = coef(mod), n = nrow(x)))
}
```
Note that this function will return the optimal value of the exponent over any time period.
```
fit_k(TeamRuns)
```
```
# A tibble: 1 × 2
k n
<dbl> <int>
1 1.84 1708
```
Finally, we compute the decade for each year using `mutate()`, define the group using `group_by()`, and apply `fit_k()` to those decades. The use of the `~` tells **R** to interpret the expression in parentheses as a `formula`, rather than the name of a function. The `.x` is a placeholder for the data frame for a particular decade.
```
TeamRuns %>%
mutate(decade = yearID %/% 10 * 10) %>%
group_by(decade) %>%
group_modify(~fit_k(.x))
```
```
# A tibble: 8 × 3
# Groups: decade [8]
decade k n
<dbl> <dbl> <int>
1 1950 1.69 96
2 1960 1.90 198
3 1970 1.74 246
4 1980 1.93 260
5 1990 1.88 278
6 2000 1.94 300
7 2010 1.77 300
8 2020 1.86 30
```
Note the variation in the optimal value of \\(k\\). Even though the exponent is not the same in each decade, it varies within a fairly narrow range between 1\.69 and 1\.94\.
### 7\.5\.2 Example: Annual leaders
As a second example, consider the problem of identifying the team in each season that led their league in home runs.
We can easily write a function that will, for a specific year and league, return a data frame with one row that contains the team with the most home runs.
```
hr_leader <- function(x) {
# x is a subset of Teams for a single year and league
x %>%
select(teamID, HR) %>%
arrange(desc(HR)) %>%
head(1)
}
```
We can verify that in 1961, the [*New York Yankees*](https://en.wikipedia.org/w/index.php?search=New%20York%20Yankees) led the [*American League*](https://en.wikipedia.org/w/index.php?search=American%20League) in home runs.
```
Teams %>%
filter(yearID == 1961 & lgID == "AL") %>%
hr_leader()
```
```
teamID HR
1 NYA 240
```
We can use `group_modify()` to quickly find all the teams that led their league in home runs. Here, we employ the `.keep` argument so that the grouping variables appear in the computation.
```
hr_leaders <- Teams %>%
group_by(yearID, lgID) %>%
group_modify(~hr_leader(.x), .keep = TRUE)
tail(hr_leaders, 4)
```
```
# A tibble: 4 × 4
# Groups: yearID, lgID [4]
yearID lgID teamID HR
<int> <fct> <fct> <int>
1 2019 AL MIN 307
2 2019 NL LAN 279
3 2020 AL CHA 96
4 2020 NL LAN 118
```
In this manner, we can compute the average number of home runs hit in a season by the team that hit the most.
```
hr_leaders %>%
group_by(lgID) %>%
summarize(mean_hr = mean(HR))
```
```
# A tibble: 7 × 2
lgID mean_hr
<fct> <dbl>
1 AA 40.5
2 AL 157.
3 FL 51
4 NA 13.8
5 NL 129.
6 PL 66
7 UA 32
```
We restrict our attention to the years since 1916, during which only the AL and NL leagues have existed.
```
hr_leaders %>%
filter(yearID >= 1916) %>%
group_by(lgID) %>%
summarize(mean_hr = mean(HR))
```
```
# A tibble: 2 × 2
lgID mean_hr
<fct> <dbl>
1 AL 174.
2 NL 161.
```
In Figure [7\.2](ch-iteration.html#fig:dh), we show how this number has changed over time. We note that while the top HR hitting teams were comparable across the two leagues until the mid\-1970s, the AL teams have dominated since their league adopted the [*designated hitter*](https://en.wikipedia.org/w/index.php?search=designated%20hitter) rule in 1973\.
```
hr_leaders %>%
filter(yearID >= 1916) %>%
ggplot(aes(x = yearID, y = HR, color = lgID)) +
geom_line() +
geom_point() +
geom_smooth(se = FALSE) +
geom_vline(xintercept = 1973) +
annotate(
"text", x = 1974, y = 25,
label = "AL adopts DH", hjust = "left"
) +
labs(x = "Year", y = "Home runs", color = "League")
```
Figure 7\.2: Number of home runs hit by the team with the most home runs, 1916–2019\. Note how the AL has consistently bested the NL since the introduction of the designated hitter (DH) in 1973\.
7\.6 Simulation
---------------
In the previous section, we learned how to repeat operations while iterating over the elements of a vector. It can also be useful to simply repeat an operation many times and collect the results. Obviously, if the result of the operation is [*deterministic*](https://en.wikipedia.org/w/index.php?search=deterministic) (i.e., you get the same answer every time) then this is pointless. On the other hand, if this operation involves randomness, then you won’t get the same answer every time, and understanding the distribution of values that your random operation produces can be useful.
We will flesh out these ideas further in Chapter [13](ch-simulation.html#ch:simulation).
For example, in our investigation into the expected winning percentage in baseball (Section [7\.5\.1](ch-iteration.html#sec:pythag)), we determined that the optimal exponent fit to the 67 seasons’ worth of data from 1954 to 2020 was 1\.84\. However, we also found that if we fit this same model separately for each decade, that optimal exponent varies from 1\.69 to 1\.94\. This gives us a rough sense of the variability in this exponent—we observed decade\-level values between 1\.69 and 1\.94, which may give some insights as to plausible values for the exponent.
Nevertheless, our choice to stratify by decade was somewhat arbitrary. A more natural question might be: What is the distribution of optimal exponents fit to a *single\-season*’s worth of data? How confident should we be in that estimate of 1\.84?
We can use `group_modify()` and the function we wrote previously to compute the 67 actual values, one per season. The resulting distribution is summarized in Figure [7\.3](ch-iteration.html#fig:teamdens2).
```
library(skimr)  # skim() comes from the skimr package (attach it if not already loaded)
k_actual <- TeamRuns %>%
group_by(yearID) %>%
group_modify(~fit_k(.x))
k_actual %>%
ungroup() %>%
skim(k)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 k 67 0 1.85 0.186 1.31 1.69 1.89 1.96 2.33
```
```
ggplot(data = k_actual, aes(x = k)) +
geom_density() +
xlab("Best fit exponent for a single season")
```
Figure 7\.3: Distribution of best\-fitting exponent across single seasons from 1954–2020\.
Since we only have 67 samples, we might obtain a better understanding of the sampling distribution of the mean \\(k\\) by [*resampling*](https://en.wikipedia.org/w/index.php?search=resampling)—sampling with replacement—from these 67 values. (This is a statistical technique known as the [*bootstrap*](https://en.wikipedia.org/w/index.php?search=bootstrap), which we describe further in Chapter [9](ch-foundations.html#ch:foundations).) A simple way to do this is by mapping a sampling expression over an index of values. That is, we define `n` to be the number of iterations we want to perform, write an expression to compute the mean of a single resample, and then use `map_dbl()` to perform the iterations.
```
n <- 10000
bstrap <- 1:n %>%
map_dbl(
~k_actual %>%
pull(k) %>%
sample(replace = TRUE) %>%
mean()
)
civals <- bstrap %>%
quantile(probs = c(0.025, .975))
civals
```
```
2.5% 97.5%
1.80 1.89
```
After repeating the resampling 10,000 times, we found that 95% of the resampled means were between 1\.80 and 1\.89, with our original estimate of 1\.84 lying near the center of that distribution. This distribution, along with the boundaries of the middle 95%, is depicted in Figure [7\.4](ch-iteration.html#fig:bdensplot).
```
ggplot(data = enframe(bstrap, value = "k"), aes(x = k)) +
geom_density() +
xlab("Distribution of resampled means") +
geom_vline(
data = enframe(civals), aes(xintercept = value),
color = "red", linetype = 3
)
```
Figure 7\.4: Bootstrap distribution of mean optimal Pythagorean exponent.
7\.7 Extended example: Factors associated with BMI
--------------------------------------------------
[*Body Mass Index*](https://en.wikipedia.org/w/index.php?search=Body%20Mass%20Index) (BMI) is a common measure of a person’s size, expressed as a ratio of their body’s mass to the square of their height. What factors are associated with high BMI?
For answers, we turn to survey data collected by the [National Center for Health Statistics (NCHS)](https://www.cdc.gov/nchs/index.htm) and packaged as the [*National Health and Nutrition Examination Survey*](https://en.wikipedia.org/w/index.php?search=National%20Health%20and%20Nutrition%20Examination%20Survey) (NHANES). These data are available in **R** through the **NHANES** package.
```
library(NHANES)
```
An exhaustive approach to understanding the relationship between BMI and some of the other variables is complicated by the fact that there are 75 potential explanatory variables for any model of BMI. In Chapter [11](ch-learningI.html#ch:learningI), we develop several modeling techniques that might be useful for this purpose, but here, we focus on examining the [*bivariate*](https://en.wikipedia.org/w/index.php?search=bivariate) relationships between BMI and the other explanatory variables.
For example, we might start by simply producing a bivariate scatterplot between BMI and age, and adding a [*local regression*](https://en.wikipedia.org/w/index.php?search=local%20regression) line to show the general trend. Figure [7\.5](ch-iteration.html#fig:nhanes-age) shows the result.
```
ggplot(NHANES, aes(x = Age, y = BMI)) +
geom_point() +
geom_smooth()
```
Figure 7\.5: Relationship between body mass index (BMI) and age among participants in the **NHANES** study.
How can we programmatically produce an analogous image for *all* of the variables in **NHANES**?
First, we’ll write a function that takes the name of a variable as an input, and returns the plot.
Second, we’ll define a set of variables, and use `map()` to iterate our function over that list.
The following function will take a data set, and an argument called `x_var` that will be the name of a variable.
It produces a slightly jazzed\-up version of Figure [7\.5](ch-iteration.html#fig:nhanes-age) that contains variable\-specific titles, as well as information about the source.
```
bmi_plot <- function(.data, x_var) {
ggplot(.data, aes(y = BMI)) +
aes_string(x = x_var) +
geom_jitter(alpha = 0.3) +
geom_smooth() +
labs(
title = paste("BMI by", x_var),
subtitle = "NHANES",
caption = "US National Center for Health Statistics (NCHS)"
)
}
```
The use of the `aes_string()` function is necessary for **ggplot2** to understand that we want to bind the `x` aesthetic to the variable whose name is stored in the `x_var` object, and not to a variable that is literally named `x_var`.
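Note that `aes_string()` has been deprecated in recent versions of **ggplot2**. If it is unavailable to you, a sketch of the same binding using the `.data` pronoun (assuming ggplot2 3\.0 or later) looks like the following; the function name `bmi_plot_tidy` is our own.

```
# A hedged alternative sketch (not from the text): bind the x aesthetic
# through the .data pronoun instead of aes_string().
bmi_plot_tidy <- function(df, x_var) {
  ggplot(df, aes(x = .data[[x_var]], y = BMI)) +
    geom_jitter(alpha = 0.3) +
    geom_smooth() +
    labs(title = paste("BMI by", x_var), subtitle = "NHANES")
}
bmi_plot_tidy(NHANES, "Age")
```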
We can then call our function on a specific variable.
```
bmi_plot(NHANES, "Age")
```
Or, we can specify a set of variables and then `map()` over that set. Since `map()` always returns a list, and a list of plots is not that useful, we use the `wrap_plots()` function from the **patchwork** package to combine the resulting list of plots into one image.
```
c("Age", "HHIncomeMid", "PhysActiveDays",
"TVHrsDay", "AlcoholDay", "Pulse") %>%
map(bmi_plot, .data = NHANES) %>%
patchwork::wrap_plots(ncol = 2)
```
Figure 7\.6: Relationship between body mass index (BMI) and a series of other variables, for participants in the **NHANES** study.
Figure [7\.6](ch-iteration.html#fig:patchwork) displays the results for six variables.
We won’t show the results of our ultimate goal to produce all 75 plots here, but you can try it for yourself by using the `names()` function to retrieve the full list of variable names.
Or, you could use `select()` with the `where()` predicate to retrieve only those variables that meet a certain condition, as in the sketch below.
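A hedged sketch of that idea (the restriction to numeric variables is our own choice, not from the text):

```
# A sketch: build the plot for every numeric NHANES variable except BMI
# itself, collecting the (unevaluated) ggplot objects in a list.
numeric_vars <- NHANES %>%
  select(where(is.numeric), -BMI) %>%
  names()
all_plots <- map(numeric_vars, bmi_plot, .data = NHANES)
length(all_plots)
```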
7\.8 Further resources
----------------------
The [chapter](https://adv-r.hadley.nz/functionals.html) on [*functionals*](https://en.wikipedia.org/w/index.php?search=functionals) in H. Wickham (2019\) is the definitive source for understanding **purrr**. The name “functionals” reflects the use of a programming paradigm called [*functional programming*](https://en.wikipedia.org/w/index.php?search=functional%20programming).
For those who are already familiar with the `*apply()` family of functions popular in base R, [Jenny Bryan](https://en.wikipedia.org/w/index.php?search=Jenny%20Bryan) wrote [a helpful tutorial](https://jennybc.github.io/purrr-tutorial/bk01_base-functions.html) that maps these functions to their **purrr** equivalents.
The **rlang** package lays the groundwork for [*tidy evaluation*](https://en.wikipedia.org/w/index.php?search=tidy%20evaluation), which allows you to work programmatically with unquoted variable names. The [programming with `dplyr` vignette](https://dplyr.tidyverse.org/articles/programming.html) is the best place to start learning about tidy evaluation.
Section [C.4](ch-function.html#sec:tidyeval) provides a brief introduction to the principles.
7\.9 Exercises
--------------
**Problem 1 (Easy)**: Use the `HELPrct` data from the `mosaicData` package to calculate the mean of all numeric variables (be sure to exclude missing values).
**Problem 2 (Easy)**: Suppose you want to visit airports in Boston (`BOS`), New York (`JFK`, `LGA`), San Francisco (`SFO`), Chicago (`ORD`, `MDW`), and Los Angeles (`LAX`). You have data about flight delays in a `tibble` called `flights`. You have written a pipeline that, for any given airport code (e.g., `LGA`), will return a `tibble` with two columns, the airport code, and the average arrival delay time.
Suggest a workflow that would be most efficient for computing the average arrival delay time for all seven airports.
**Problem 3 (Medium)**: Use the `purrr::map()` function and the `HELPrct` data frame from the `mosaicData` package to fit a regression model predicting `cesd` as a function of `age` separately for each of the levels of the `substance` variable. Generate a table of results (estimates and confidence intervals) for the slope parameter for each level of the grouping variable.
**Problem 4 (Medium)**: The team IDs corresponding to Brooklyn baseball teams from the `Teams` data frame from the `Lahman` package are listed below. Use `map_int()` to find the number of seasons in which each of those teams played by calling a function called `count_seasons`.
```
library(Lahman)
bk_teams <- c("BR1", "BR2", "BR3", "BR4", "BRO", "BRP", "BRF")
```
**Problem 5 (Medium)**: Use data from the `NHANES` package to create a set of scatterplots of `Pulse` as a function of `Age`, `BMI`, `TVHrsDay`, and `BPSysAve` to create a figure like the last one in the chapter.
Be sure to create appropriate annotations (source, survey name, variables being displayed).
What do you conclude?
**Problem 6 (Hard)**: Use the `group_modify()` function and the `Lahman` data to replicate one of the baseball records plots ([http://tinyurl.com/nytimes\-records](http://tinyurl.com/nytimes-records)) from *The New York Times*.
7\.10 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-iteration.html\#iteration\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-iteration.html#iteration-online-exercises)
---
7\.1 Vectorized operations
--------------------------
In every programming language that we can think of, there is a way to write a [*loop*](https://en.wikipedia.org/w/index.php?search=loop). For example, you can write a `for()` loop in **R** the same way you can with most programming languages.
Recall that the `Teams` data frame contains one row for each team in each MLB season.
```
library(tidyverse)
library(mdsr)
library(Lahman)
names(Teams)
```
```
[1] "yearID" "lgID" "teamID" "franchID"
[5] "divID" "Rank" "G" "Ghome"
[9] "W" "L" "DivWin" "WCWin"
[13] "LgWin" "WSWin" "R" "AB"
[17] "H" "X2B" "X3B" "HR"
[21] "BB" "SO" "SB" "CS"
[25] "HBP" "SF" "RA" "ER"
[29] "ERA" "CG" "SHO" "SV"
[33] "IPouts" "HA" "HRA" "BBA"
[37] "SOA" "E" "DP" "FP"
[41] "name" "park" "attendance" "BPF"
[45] "PPF" "teamIDBR" "teamIDlahman45" "teamIDretro"
```
What might not be immediately obvious is that columns 15 through 40 of this data frame contain numerical data about how each team performed in that season. To see this, you can execute the `str()` command to see the **str**ucture of the data frame, but we suppress that output here. For data frames, a similar alternative that is a little cleaner is `glimpse()`.
```
str(Teams)
glimpse(Teams)
```
Suppose you are interested in computing the averages of these 26 numeric columns.
You don’t want to have to type the names of each of them, or re\-type the `mean()` command 26 times.
Seasoned programmers would identify this as a situation in which a [*loop*](https://en.wikipedia.org/w/index.php?search=loop) is a natural and efficient solution.
A `for()` loop will iterate over the selected column indices.
```
averages <- NULL
for (i in 15:40) {
averages[i - 14] <- mean(Teams[, i], na.rm = TRUE)
}
names(averages) <- names(Teams)[15:40]
averages
```
```
R AB H X2B X3B HR BB SO
680.496 5126.260 1339.657 228.330 45.910 104.929 473.020 755.573
SB CS HBP SF RA ER ERA CG
109.786 46.873 45.411 44.233 680.495 572.404 3.836 48.011
SHO SV IPouts HA HRA BBA SOA E
9.584 24.267 4010.716 1339.434 104.929 473.212 755.050 181.732
DP FP
132.669 0.966
```
This certainly works. However, there are a number of problematic aspects of this code (e.g., the use of multiple [*magic numbers*](https://en.wikipedia.org/w/index.php?search=magic%20numbers) like 14, 15, and 40\). The use of a `for()` loop may not be ideal.
For problems of this type, it is almost always possible (and usually preferable) to iterate without explicitly defining a loop.
**R** programmers prefer to solve this type of problem by applying an operation to each element in a vector.
This often requires only one line of code, with no appeal to indices.
It is important to understand that the fundamental architecture of **R** is based on *vectors*. That is, in contrast to [*general\-purpose programming languages*](https://en.wikipedia.org/w/index.php?search=general-purpose%20programming%20languages) like *C\+\+* or [*Python*](https://en.wikipedia.org/w/index.php?search=Python) that distinguish between single items—like strings and integers—and arrays of those items, in **R** a “string” is just a character vector of length 1\. There is no special kind of atomic object. Thus, if you assign a single “string” to an object, **R** still stores it as a vector.
```
a <- "a string"
class(a)
```
```
[1] "character"
```
```
is.vector(a)
```
```
[1] TRUE
```
```
length(a)
```
```
[1] 1
```
As a consequence of this construction, **R** is highly optimized for vectorized operations (see Appendix [B](ch-R.html#ch:R) for more detailed information about **R** internals). Loops, by their nature, do not take advantage of this optimization. Thus, **R** provides several tools for performing loop\-like operations without actually writing a loop. This can be a challenging conceptual hurdle for those who are used to more general\-purpose programming languages.
Try to avoid writing `for()` loops, even when it seems like the easiest solution.
Many functions in **R** are [*vectorized*](https://en.wikipedia.org/w/index.php?search=vectorized). This means that they will perform an operation on every element of a vector by default. For example, many mathematical functions (e.g., `exp()`) work this way.
```
exp(1:3)
```
```
[1] 2.72 7.39 20.09
```
Note that vectorized functions like `exp()` take a vector as an input, and return a vector *of the same length* as an output.
This is importantly different behavior than so\-called [*summary functions*](https://en.wikipedia.org/w/index.php?search=summary%20functions), which take a vector as an input, and return *a single value*. Summary functions (e.g., `mean()`) are commonly useful within a call to `summarize()`. Note that when we call `mean()` on a vector, it only returns a single value no matter how many elements there are in the input vector.
```
mean(1:3)
```
```
[1] 2
```
Other functions in **R** are not vectorized. They may assume an input that is a vector of length one, and fail or exhibit strange behavior if given a longer vector. For example, `if()` throws a warning if given a vector of length more than one.
```
if (c(TRUE, FALSE)) {
cat("This is a great book!")
}
```
```
Warning in if (c(TRUE, FALSE)) {: the condition has length > 1 and only the
first element will be used
```
```
This is a great book!
```
As you get more comfortable with **R**, you will develop intuition about which functions are vectorized. If a function is vectorized, you should make use of that fact and not iterate over it. The code below shows that computing the exponential of the first 10,000 integers by appealing to `exp()` as a vectorized function is much, much faster than using `map_dbl()` to iterate over the same vector. The results are identical.
```
x <- 1:1e5
bench::mark(
exp(x),
map_dbl(x, exp)
)
```
```
# A tibble: 2 × 6
expression min median `itr/sec` mem_alloc `gc/sec`
<bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl>
1 exp(x) 900µs 1.59ms 652. 1.53MB 21.2
2 map_dbl(x, exp) 135ms 135.08ms 7.40 788.62KB 29.6
```
Try to always make use of vectorized functions before iterating an operation.
7\.2 Using `across()` with **dplyr** functions
----------------------------------------------
The `mutate()` and `summarize()` verbs described in Chapter [4](ch-dataI.html#ch:dataI) can take advantage of an [*adverb*](https://en.wikipedia.org/w/index.php?search=adverb) called `across()` that applies operations programmatically. In the example above, we had to observe that columns 15 through 40 of the `Teams` data frame were numeric, and hard\-code those observations as magic numbers. Rather than relying on these observations to identify which variables are numeric—and therefore which variables it makes sense to compute the average of—we can use the `across()` function to do this work for us. The chunk below will compute the average values of only those variables in `Teams` that are numeric.
```
Teams %>%
summarize(across(where(is.numeric), mean, na.rm = TRUE))
```
```
yearID Rank G Ghome W L R AB H X2B X3B HR BB SO SB
1 1958 4.05 150 78 74.5 74.5 680 5126 1340 228 45.9 105 473 756 110
CS HBP SF RA ER ERA CG SHO SV IPouts HA HRA BBA SOA E DP
1 46.9 45.4 44.2 680 572 3.84 48 9.58 24.3 4011 1339 105 473 755 182 133
FP attendance BPF PPF
1 0.966 1375102 100 100
```
Note that this result included several variables (e.g., `yearID`, `attendance`) that were outside of the range we defined previously.
The `across()` adverb allows us to specify the set of variables that `summarize()` includes in different ways. In the example above, we used the [*predicate function*](https://en.wikipedia.org/w/index.php?search=predicate%20function) `is.numeric()` to identify the variables for which we wanted to compute the mean.
In the following example, we compute the mean of `yearID`, a series of variables from `R` (runs scored) to `SF` ([*sacrifice flies*](https://en.wikipedia.org/w/index.php?search=sacrifice%20flies)) that apply only to offensive players, and the batting [*park factor*](https://en.wikipedia.org/w/index.php?search=park%20factor) (`BPF`). Since we are specifying these columns without the use of a predicate function, we don’t need to use `where()`.
```
Teams %>%
summarize(across(c(yearID, R:SF, BPF), mean, na.rm = TRUE))
```
```
yearID R AB H X2B X3B HR BB SO SB CS HBP SF BPF
1 1958 680 5126 1340 228 45.9 105 473 756 110 46.9 45.4 44.2 100
```
The `across()` function behaves analogously with `mutate()`.
It provides an easy way to perform an operation on a set of variables without having to type or copy\-and\-paste the name of each variable.
7\.3 The `map()` family of functions
------------------------------------
More generally, to apply a function to each item in a list or vector, or the columns of a data frame[12](#fn12), use `map()` (or one of its type\-specific variants). This is the main function from the **purrr** package. In this example, we calculate the mean of each of the statistics defined above, all at once. Compare this to the `for()` loop written above. Which is syntactically simpler? Which expresses the ideas behind the code more succinctly?
```
Teams %>%
select(15:40) %>%
map_dbl(mean, na.rm = TRUE)
```
```
R AB H X2B X3B HR BB SO
680.496 5126.260 1339.657 228.330 45.910 104.929 473.020 755.573
SB CS HBP SF RA ER ERA CG
109.786 46.873 45.411 44.233 680.495 572.404 3.836 48.011
SHO SV IPouts HA HRA BBA SOA E
9.584 24.267 4010.716 1339.434 104.929 473.212 755.050 181.732
DP FP
132.669 0.966
```
The first argument to `map_dbl()` is the thing that you want to do something to (in this case, a data frame). The second argument specifies the name of a function (the argument is named `.f`). Any further arguments are passed as options to `.f`. Thus, this command applies the `mean()` function to the 15th through the 40th columns of the `Teams` data frame, while removing any `NA`s that might be present in any of those columns. The use of the variant `map_dbl()` simply forces the output to be a vector of type `double`.[13](#fn13)
Of course, we began by taking the subset of the columns that were all `numeric` values. If you tried to take the `mean()` of a non\-numeric vector, you would get a *warning* (and a value of `NA`).
```
Teams %>%
select(teamID) %>%
map_dbl(mean, na.rm = TRUE)
```
```
Warning in mean.default(.x[[i]], ...): argument is not numeric or logical:
returning NA
```
```
teamID
NA
```
If you can solve your problem using `across()` and/or `where()` as described in Section [7\.2](ch-iteration.html#sec:scoped), that is probably the cleanest solution. However, we will show that the `map()` family of functions provides a much more general set of capabilities.
7\.4 Iterating over a one\-dimensional vector
---------------------------------------------
### 7\.4\.1 Iterating a known function
Often you will want to apply a function to each element of a vector or list. For example, the baseball franchise now known as the [*Los Angeles Angels of Anaheim*](https://en.wikipedia.org/w/index.php?search=Los%20Angeles%20Angels%20of%20Anaheim) has gone by several names during its history.
```
angels <- Teams %>%
filter(franchID == "ANA") %>%
group_by(teamID, name) %>%
summarize(began = first(yearID), ended = last(yearID)) %>%
arrange(began)
angels
```
```
# A tibble: 4 × 4
# Groups: teamID [3]
teamID name began ended
<fct> <chr> <int> <int>
1 LAA Los Angeles Angels 1961 1964
2 CAL California Angels 1965 1996
3 ANA Anaheim Angels 1997 2004
4 LAA Los Angeles Angels of Anaheim 2005 2020
```
The franchise began as the [*Los Angeles Angels*](https://en.wikipedia.org/w/index.php?search=Los%20Angeles%20Angels) (`LAA`) in 1961, then became the [*California Angels*](https://en.wikipedia.org/w/index.php?search=California%20Angels) (`CAL`) in 1965, the [*Anaheim Angels*](https://en.wikipedia.org/w/index.php?search=Anaheim%20Angels) (`ANA`) in 1997, before taking their current name (`LAA` again) in 2005\. This situation is complicated by the fact that the `teamID` `LAA` was re\-used. This sort of schizophrenic behavior is unfortunately common in many data sets.
Now, suppose we want to find the length, in number of characters, of each of those team names. We could check each one manually using the function `nchar()`:
```
angels_names <- angels %>%
pull(name)
nchar(angels_names[1])
```
```
[1] 18
```
```
nchar(angels_names[2])
```
```
[1] 17
```
```
nchar(angels_names[3])
```
```
[1] 14
```
```
nchar(angels_names[4])
```
```
[1] 29
```
But this would grow tiresome if we had many names. It would be simpler, more efficient, more elegant, and scalable to apply the function `nchar()` to each element of the vector `angel_names`. We can accomplish this using `map_int()`. `map_int()` is like `map()` or `map_dbl()`, but it always returns an `integer` vector.
```
map_int(angels_names, nchar)
```
```
[1] 18 17 14 29
```
The key difference between `map_int()` and `map()` is that the former will always return an `integer` vector, whereas the latter will always return a `list`. Recall that the main difference between `list`s and `data.frame`s is that the elements (columns) of a `data.frame` have to have the same length, whereas the elements of a list are arbitrary. So while `map()` is more versatile, we usually find `map_int()` or one of the other variants to be more convenient when appropriate.
It’s often helpful to use `map()` to figure out what the return type will be, and then switch to the appropriate type\-specific `map()` variant.
This section was designed to illustrate how `map()` can be used to iterate a function over a vector of values. However, the choice of the `nchar()` function was a bit silly, because `nchar()` is already vectorized. Thus, we can use it directly!
```
nchar(angels_names)
```
```
[1] 18 17 14 29
```
### 7\.4\.2 Iterating an arbitrary function
One of the most powerful uses of iteration is that you can apply *any* function, including a function that you have defined (see Appendix [C](ch-function.html#ch:function) for a discussion of how to write user\-defined functions). For example, suppose we want to display the top\-5 seasons in terms of wins for each of the Angels teams.
```
top5 <- function(data, team_name) {
data %>%
filter(name == team_name) %>%
select(teamID, yearID, W, L, name) %>%
arrange(desc(W)) %>%
head(n = 5)
}
```
We can now do this for each element of our vector with a single call to `map()`.
Note how we named the `data` argument to ensure that the `team_name` argument was the one that accepted the value over which we iterated.
```
angels_names %>%
map(top5, data = Teams)
```
```
[[1]]
teamID yearID W L name
1 LAA 1962 86 76 Los Angeles Angels
2 LAA 1964 82 80 Los Angeles Angels
3 LAA 1961 70 91 Los Angeles Angels
4 LAA 1963 70 91 Los Angeles Angels
[[2]]
teamID yearID W L name
1 CAL 1982 93 69 California Angels
2 CAL 1986 92 70 California Angels
3 CAL 1989 91 71 California Angels
4 CAL 1985 90 72 California Angels
5 CAL 1979 88 74 California Angels
[[3]]
teamID yearID W L name
1 ANA 2002 99 63 Anaheim Angels
2 ANA 2004 92 70 Anaheim Angels
3 ANA 1998 85 77 Anaheim Angels
4 ANA 1997 84 78 Anaheim Angels
5 ANA 2000 82 80 Anaheim Angels
[[4]]
teamID yearID W L name
1 LAA 2008 100 62 Los Angeles Angels of Anaheim
2 LAA 2014 98 64 Los Angeles Angels of Anaheim
3 LAA 2009 97 65 Los Angeles Angels of Anaheim
4 LAA 2005 95 67 Los Angeles Angels of Anaheim
5 LAA 2007 94 68 Los Angeles Angels of Anaheim
```
Alternatively, we can collect the results into a single data frame by using the `map_dfr()` function, which combines the data frames by row. Below, we do this and then compute the average number of wins in a top\-5 season for each Angels team name. Based on these data, the Los Angeles Angels of Anaheim has been the most successful incarnation of the franchise, when judged by average performance in the best five seasons.
```
angels_names %>%
map_dfr(top5, data = Teams) %>%
group_by(teamID, name) %>%
summarize(N = n(), mean_wins = mean(W)) %>%
arrange(desc(mean_wins))
```
```
# A tibble: 4 × 4
# Groups: teamID [3]
teamID name N mean_wins
<fct> <chr> <int> <dbl>
1 LAA Los Angeles Angels of Anaheim 5 96.8
2 CAL California Angels 5 90.8
3 ANA Anaheim Angels 5 88.4
4 LAA Los Angeles Angels 4 77
```
Once you’ve read Chapter [15](ch-sql.html#ch:sql), think about how you might do this operation in SQL. It is not that easy!
### 7\.4\.1 Iterating a known function
Often you will want to apply a function to each element of a vector or list. For example, the baseball franchise now known as the [*Los Angeles Angels of Anaheim*](https://en.wikipedia.org/w/index.php?search=Los%20Angeles%20Angels%20of%20Anaheim) has gone by several names during its history.
```
angels <- Teams %>%
filter(franchID == "ANA") %>%
group_by(teamID, name) %>%
summarize(began = first(yearID), ended = last(yearID)) %>%
arrange(began)
angels
```
```
# A tibble: 4 × 4
# Groups: teamID [3]
teamID name began ended
<fct> <chr> <int> <int>
1 LAA Los Angeles Angels 1961 1964
2 CAL California Angels 1965 1996
3 ANA Anaheim Angels 1997 2004
4 LAA Los Angeles Angels of Anaheim 2005 2020
```
The franchise began as the [*Los Angeles Angels*](https://en.wikipedia.org/w/index.php?search=Los%20Angeles%20Angels) (`LAA`) in 1961, then became the [*California Angels*](https://en.wikipedia.org/w/index.php?search=California%20Angels) (`CAL`) in 1965, the [*Anaheim Angels*](https://en.wikipedia.org/w/index.php?search=Anaheim%20Angels) (`ANA`) in 1997, before taking their current name (`LAA` again) in 2005\. This situation is complicated by the fact that the `teamID` `LAA` was re\-used. This sort of schizophrenic behavior is unfortunately common in many data sets.
Now, suppose we want to find the length, in number of characters, of each of those team names. We could check each one manually using the function `nchar()`:
```
angels_names <- angels %>%
pull(name)
nchar(angels_names[1])
```
```
[1] 18
```
```
nchar(angels_names[2])
```
```
[1] 17
```
```
nchar(angels_names[3])
```
```
[1] 14
```
```
nchar(angels_names[4])
```
```
[1] 29
```
But this would grow tiresome if we had many names. It would be simpler, more efficient, more elegant, and scalable to apply the function `nchar()` to each element of the vector `angel_names`. We can accomplish this using `map_int()`. `map_int()` is like `map()` or `map_dbl()`, but it always returns an `integer` vector.
```
map_int(angels_names, nchar)
```
```
[1] 18 17 14 29
```
The key difference between `map_int()` and `map()` is that the former will always return an `integer` vector, whereas the latter will always return a `list`. Recall that the main difference between `list`s and `data.frame`s is that the elements (columns) of a `data.frame` have to have the same length, whereas the elements of a list are arbitrary. So while `map()` is more versatile, we usually find `map_int()` or one of the other variants to be more convenient when appropriate.
It’s often helpful to use `map()` to figure out what the return type will be, and then switch to the appropriate type\-specific `map()` variant.
This section was designed to illustrate how `map()` can be used to iterate a function over a vector of values. However, the choice of the `nchar()` function was a bit silly, because `nchar()` is already vectorized. Thus, we can use it directly!
```
nchar(angels_names)
```
```
[1] 18 17 14 29
```
### 7\.4\.2 Iterating an arbitrary function
One of the most powerful uses of iteration is that you can apply *any* function, including a function that you have defined (see Appendix [C](ch-function.html#ch:function) for a discussion of how to write user\-defined functions). For example, suppose we want to display the top\-5 seasons in terms of wins for each of the Angels teams.
```
top5 <- function(data, team_name) {
data %>%
filter(name == team_name) %>%
select(teamID, yearID, W, L, name) %>%
arrange(desc(W)) %>%
head(n = 5)
}
```
We can now do this for each element of our vector with a single call to `map()`.
Note how we named the `data` argument to ensure that the `team_name` argument was the one that accepted the value over which we iterated.
```
angels_names %>%
map(top5, data = Teams)
```
```
[[1]]
teamID yearID W L name
1 LAA 1962 86 76 Los Angeles Angels
2 LAA 1964 82 80 Los Angeles Angels
3 LAA 1961 70 91 Los Angeles Angels
4 LAA 1963 70 91 Los Angeles Angels
[[2]]
teamID yearID W L name
1 CAL 1982 93 69 California Angels
2 CAL 1986 92 70 California Angels
3 CAL 1989 91 71 California Angels
4 CAL 1985 90 72 California Angels
5 CAL 1979 88 74 California Angels
[[3]]
teamID yearID W L name
1 ANA 2002 99 63 Anaheim Angels
2 ANA 2004 92 70 Anaheim Angels
3 ANA 1998 85 77 Anaheim Angels
4 ANA 1997 84 78 Anaheim Angels
5 ANA 2000 82 80 Anaheim Angels
[[4]]
teamID yearID W L name
1 LAA 2008 100 62 Los Angeles Angels of Anaheim
2 LAA 2014 98 64 Los Angeles Angels of Anaheim
3 LAA 2009 97 65 Los Angeles Angels of Anaheim
4 LAA 2005 95 67 Los Angeles Angels of Anaheim
5 LAA 2007 94 68 Los Angeles Angels of Anaheim
```
Alternatively, we can collect the results into a single data frame by using the `map_dfr()` function, which combines the data frames by row. Below, we do this and then compute the average number of wins in a top\-5 season for each Angels team name. Based on these data, the Los Angeles Angels of Anaheim has been the most successful incarnation of the franchise, when judged by average performance in the best five seasons.
```
angels_names %>%
map_dfr(top5, data = Teams) %>%
group_by(teamID, name) %>%
summarize(N = n(), mean_wins = mean(W)) %>%
arrange(desc(mean_wins))
```
```
# A tibble: 4 × 4
# Groups: teamID [3]
teamID name N mean_wins
<fct> <chr> <int> <dbl>
1 LAA Los Angeles Angels of Anaheim 5 96.8
2 CAL California Angels 5 90.8
3 ANA Anaheim Angels 5 88.4
4 LAA Los Angeles Angels 4 77
```
Once you’ve read Chapter [15](ch-sql.html#ch:sql), think about how you might do this operation in SQL. It is not that easy!
7\.5 Iteration over subgroups
-----------------------------
In Chapter [4](ch-dataI.html#ch:dataI), we introduced data *verbs* that could be chained to perform very powerful data wrangling operations. These functions—which come from the **dplyr** package—operate on data frames and return data frames. The `group_modify()` function in **purrr** allows you to apply an arbitrary function that returns a data frame to the *groups* of a data frame. That is, you will first define a grouping using the `group_by()` function, and then apply a function to each of those groups. Note that this is similar to `map_dfr()`, in that you are mapping a function that returns a data frame over a collection of values, and returning a data frame. But whereas the values used in `map_dfr()` are individual elements of a vector, in `group_modify()` they are groups defined on a data frame.
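As a minimal sketch of this pattern (the `count_rows()` helper below is a hypothetical toy example), consider a function that returns a one\-row data frame, applied to each league in `Teams`:

```
# count_rows() returns a one-row tibble for the group's data frame x;
# by default, group_modify() passes each group's data without the grouping columns
count_rows <- function(x) {
  tibble(num_rows = nrow(x))
}

Teams %>%
  group_by(lgID) %>%
  group_modify(~count_rows(.x))
```

The result is a data frame with one row per league. The examples that follow apply the same pattern with more interesting functions.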
### 7\.5\.1 Example: Expected winning percentage
As noted in Section [4\.2](ch-dataI.html#sec:mets), one of the more enduring models in [*sabermetrics*](https://en.wikipedia.org/w/index.php?search=sabermetrics) is [Bill James](https://en.wikipedia.org/w/index.php?search=Bill%20James)’s formula for estimating a team’s expected [*winning percentage*](https://en.wikipedia.org/w/index.php?search=winning%20percentage), given knowledge only of the team’s runs scored and runs allowed to date (recall that the team that scores the most
runs wins a given game). This statistic is known—unfortunately—as [Pythagorean Winning Percentage](https://en.wikipedia.org/wiki/Pythagorean_expectation), even though it has nothing to do with Pythagoras. The formula is simple, but non\-linear:
\\\[
\\widehat{WPct} \= \\frac{RS^2}{RS^2 \+ RA^2} \= \\frac{1}{1 \+ (RA/RS)^2} \\,,
\\]
where \\(RS\\) and \\(RA\\) are the number of runs the team has scored and allowed, respectively. If we define \\(x \= RS/RA\\) to be the team’s *run ratio*, then this is a function of one variable having the form \\(f(x) \= \\frac{1}{1 \+ (1/x)^2}\\).
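To make the formula concrete, consider a hypothetical team that scores 800 runs while allowing 700\. Its run ratio is about 1\.14, and the model predicts a winning percentage of roughly 0\.566, or about 92 wins in a 162\-game season.

```
# hypothetical team: 800 runs scored, 700 runs allowed
rs <- 800
ra <- 700
rs^2 / (rs^2 + ra^2)
```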
This model seems to fit quite well upon visual inspection—in Figure [7\.1](ch-iteration.html#fig:pythag) we show the data since 1954, along with a line representing the model. Indeed, this model has also been successful in other sports, albeit with wholly different exponents.
```
exp_wpct <- function(x) {
return(1/(1 + (1/x)^2))
}
TeamRuns <- Teams %>%
filter(yearID >= 1954) %>%
rename(RS = R) %>%
mutate(WPct = W / (W + L), run_ratio = RS/RA) %>%
select(yearID, teamID, lgID, WPct, run_ratio)
ggplot(data = TeamRuns, aes(x = run_ratio, y = WPct)) +
geom_vline(xintercept = 1, color = "darkgray", linetype = 2) +
geom_hline(yintercept = 0.5, color = "darkgray", linetype = 2) +
geom_point(alpha = 0.2) +
stat_function(fun = exp_wpct, size = 2, color = "blue") +
xlab("Ratio of Runs Scored to Runs Allowed") +
ylab("Winning Percentage")
```
Figure 7\.1: Fit for the Pythagorean Winning Percentage model for all teams since 1954\.
However, the exponent of 2 was posited by James. One can imagine having the exponent become a parameter \\(k\\), and trying to find the optimal fit. Indeed, researchers have found that in baseball, the optimal value of \\(k\\) is not 2, but something closer to 1\.85 (V. Wang 2006\). It is easy enough for us to find the optimal value using the `nls()` function. We specify the formula of the nonlinear model, the data used to fit the model, and a starting value for the search.
```
TeamRuns %>%
nls(
formula = WPct ~ 1/(1 + (1/run_ratio)^k),
start = list(k = 2)
) %>%
coef()
```
```
k
1.84
```
Furthermore, researchers investigating this model have found that the optimal value of the exponent varies based on the era during which the model is fit.
We can use the `group_modify()` function to do this for all decades in baseball history.
First, we must write a short function (see Appendix [C](ch-function.html#ch:function)) that will return a data frame containing the optimal exponent, and for good measure, the number of observations during that decade.
```
fit_k <- function(x) {
mod <- nls(
formula = WPct ~ 1/(1 + (1/run_ratio)^k),
data = x,
start = list(k = 2)
)
return(tibble(k = coef(mod), n = nrow(x)))
}
```
Note that this function will return the optimal value of the exponent over any time period.
```
fit_k(TeamRuns)
```
```
# A tibble: 1 × 2
k n
<dbl> <int>
1 1.84 1708
```
Finally, we compute the decade for each year using `mutate()`, define the group using `group_by()`, and apply `fit_k()` to those decades. The use of the `~` tells **R** to interpret the expression in parentheses as a `formula`, rather than the name of a function. The `.x` is a placeholder for the data frame for a particular decade.
```
TeamRuns %>%
mutate(decade = yearID %/% 10 * 10) %>%
group_by(decade) %>%
group_modify(~fit_k(.x))
```
```
# A tibble: 8 × 3
# Groups: decade [8]
decade k n
<dbl> <dbl> <int>
1 1950 1.69 96
2 1960 1.90 198
3 1970 1.74 246
4 1980 1.93 260
5 1990 1.88 278
6 2000 1.94 300
7 2010 1.77 300
8 2020 1.86 30
```
Note the variation in the optimal value of \\(k\\). Even though the exponent is not the same in each decade, it varies within a fairly narrow range between 1\.69 and 1\.94\.
### 7\.5\.2 Example: Annual leaders
As a second example, consider the problem of identifying the team in each season that led their league in home runs.
We can easily write a function that will, for a specific year and league, return a data frame with one row that contains the team with the most home runs.
```
hr_leader <- function(x) {
# x is a subset of Teams for a single year and league
x %>%
select(teamID, HR) %>%
arrange(desc(HR)) %>%
head(1)
}
```
We can verify that in 1961, the [*New York Yankees*](https://en.wikipedia.org/w/index.php?search=New%20York%20Yankees) led the [*American League*](https://en.wikipedia.org/w/index.php?search=American%20League) in home runs.
```
Teams %>%
filter(yearID == 1961 & lgID == "AL") %>%
hr_leader()
```
```
teamID HR
1 NYA 240
```
We can use `group_modify()` to quickly find all the teams that led their league in home runs. Here, we employ the `.keep` argument so that the grouping variables appear in the computation.
```
hr_leaders <- Teams %>%
group_by(yearID, lgID) %>%
group_modify(~hr_leader(.x), .keep = TRUE)
tail(hr_leaders, 4)
```
```
# A tibble: 4 × 4
# Groups: yearID, lgID [4]
yearID lgID teamID HR
<int> <fct> <fct> <int>
1 2019 AL MIN 307
2 2019 NL LAN 279
3 2020 AL CHA 96
4 2020 NL LAN 118
```
In this manner, we can compute the average number of home runs hit in a season by the team that hit the most.
```
hr_leaders %>%
group_by(lgID) %>%
summarize(mean_hr = mean(HR))
```
```
# A tibble: 7 × 2
lgID mean_hr
<fct> <dbl>
1 AA 40.5
2 AL 157.
3 FL 51
4 NA 13.8
5 NL 129.
6 PL 66
7 UA 32
```
We restrict our attention to the years since 1916, during which only the AL and NL leagues have existed.
```
hr_leaders %>%
filter(yearID >= 1916) %>%
group_by(lgID) %>%
summarize(mean_hr = mean(HR))
```
```
# A tibble: 2 × 2
lgID mean_hr
<fct> <dbl>
1 AL 174.
2 NL 161.
```
In Figure [7\.2](ch-iteration.html#fig:dh), we show how this number has changed over time. We note that while the top HR hitting teams were comparable across the two leagues until the mid\-1970s, the AL teams have dominated since their league adopted the [*designated hitter*](https://en.wikipedia.org/w/index.php?search=designated%20hitter) rule in 1973\.
```
hr_leaders %>%
filter(yearID >= 1916) %>%
ggplot(aes(x = yearID, y = HR, color = lgID)) +
geom_line() +
geom_point() +
geom_smooth(se = FALSE) +
geom_vline(xintercept = 1973) +
annotate(
"text", x = 1974, y = 25,
label = "AL adopts DH", hjust = "left"
) +
labs(x = "Year", y = "Home runs", color = "League")
```
Figure 7\.2: Number of home runs hit by the team with the most home runs, 1916–2019\. Note how the AL has consistently bested the NL since the introduction of the designated hitter (DH) in 1973\.
7\.6 Simulation
---------------
In the previous section, we learned how to repeat operations while iterating over the elements of a vector. It can also be useful to simply repeat an operation many times and collect the results. Obviously, if the result of the operation is [*deterministic*](https://en.wikipedia.org/w/index.php?search=deterministic) (i.e., you get the same answer every time) then this is pointless. On the other hand, if this operation involves randomness, then you won’t get the same answer every time, and understanding the distribution of values that your random operation produces can be useful.
We will flesh out these ideas further in Chapter [13](ch-simulation.html#ch:simulation).
For example, in our investigation into the expected winning percentage in baseball (Section [7\.5\.1](ch-iteration.html#sec:pythag)), we determined that the optimal exponent fit to the 67 seasons worth of data from 1954 to 2020 was 1\.84\. However, we also found that if we fit this same model separately for each decade, that optimal exponent varies from 1\.69 to 1\.94\. This gives us a rough sense of the variability in this exponent—we observed values between 1\.6 and 2, which may give some insights as to plausible values for the exponent.
Nevertheless, our choice to stratify by decade was somewhat arbitrary. A more natural question might be: What is the distribution of optimal exponents fit to a *single\-season*’s worth of data? How confident should we be in that estimate of 1\.84?
We can use `group_modify()` and the function we wrote previously to compute the 67 actual values. The resulting distribution is summarized in Figure [7\.3](ch-iteration.html#fig:teamdens2).
```
k_actual <- TeamRuns %>%
group_by(yearID) %>%
group_modify(~fit_k(.x))
k_actual %>%
ungroup() %>%
skim(k)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 k 67 0 1.85 0.186 1.31 1.69 1.89 1.96 2.33
```
```
ggplot(data = k_actual, aes(x = k)) +
geom_density() +
xlab("Best fit exponent for a single season")
```
Figure 7\.3: Distribution of best\-fitting exponent across single seasons from 1954–2019\.
Since we only have 67 samples, we might obtain a better understanding of the sampling distribution of the mean \\(k\\) by [*resampling*](https://en.wikipedia.org/w/index.php?search=resampling)—sampling with replacement—from these 67 values. (This is a statistical technique known as the [*bootstrap*](https://en.wikipedia.org/w/index.php?search=bootstrap), which we describe further in Chapter [9](ch-foundations.html#ch:foundations).) A simple way to do this is by mapping a sampling expression over an index of values. That is, we define `n` to be the number of iterations we want to perform, write an expression to compute the mean of a single resample, and then use `map_dbl()` to perform the iterations.
```
n <- 10000
bstrap <- 1:n %>%
map_dbl(
~k_actual %>%
pull(k) %>%
sample(replace = TRUE) %>%
mean()
)
civals <- bstrap %>%
quantile(probs = c(0.025, .975))
civals
```
```
2.5% 97.5%
1.80 1.89
```
After repeating the resampling 10,000 times, we found that 95% of the resampled exponents were between 1\.80 and 1\.89, with our original estimate of 1\.84 lying somewhere near the center of that distribution. This distribution, along with the boundaries of the middle 95%, is depicted in Figure [7\.4](ch-iteration.html#fig:bdensplot).
```
ggplot(data = enframe(bstrap, value = "k"), aes(x = k)) +
geom_density() +
xlab("Distribution of resampled means") +
geom_vline(
data = enframe(civals), aes(xintercept = value),
color = "red", linetype = 3
)
```
Figure 7\.4: Bootstrap distribution of mean optimal Pythagorean exponent.
7\.7 Extended example: Factors associated with BMI
--------------------------------------------------
[*Body Mass Index*](https://en.wikipedia.org/w/index.php?search=Body%20Mass%20Index) (BMI) is a common measure of a person’s size, expressed as a ratio of their body’s mass to the square of their height. What factors are associated with high BMI?
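Concretely, BMI is weight in kilograms divided by the square of height in meters; for example, a hypothetical adult who weighs 70 kg and stands 1\.75 m tall has a BMI of about 22\.9\.

```
# hypothetical adult: 70 kg, 1.75 m tall
weight_kg <- 70
height_m <- 1.75
weight_kg / height_m^2
```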
For answers, we turn to survey data collected by the [National Center for Health Statistics (NCHS)](https://www.cdc.gov/nchs/index.htm) and packaged as the [*National Health and Nutrition Examination Survey*](https://en.wikipedia.org/w/index.php?search=National%20Health%20and%20Nutrition%20Examination%20Survey) (NHANES). These data are available in **R** through the **NHANES** package.
```
library(NHANES)
```
An exhaustive approach to understanding the relationship between BMI and some of the other variables is complicated by the fact that there are 75 potential explanatory variables for any model for BMI. In Chapter [11](ch-learningI.html#ch:learningI), we develop several modeling techniques that might be useful for this purpose, but here, we focus on examining the [*bivariate*](https://en.wikipedia.org/w/index.php?search=bivariate) relationships between BMI and the other explanatory variables.
For example, we might start by simply producing a bivariate scatterplot between BMI and age, and adding a [*local regression*](https://en.wikipedia.org/w/index.php?search=local%20regression) line to show the general trend. Figure [7\.5](ch-iteration.html#fig:nhanes-age) shows the result.
```
ggplot(NHANES, aes(x = Age, y = BMI)) +
geom_point() +
geom_smooth()
```
Figure 7\.5: Relationship between body mass index (BMI) and age among participants in the **NHANES** study.
How can we programmatically produce an analogous image for *all* of the variables in **NHANES**?
First, we’ll write a function that takes the name of a variable as an input, and returns the plot.
Second, we’ll define a set of variables, and use `map()` to iterate our function over that list.
The following function will take a data set, and an argument called `x_var` that will be the name of a variable.
It produces a slightly jazzed\-up version of Figure [7\.5](ch-iteration.html#fig:nhanes-age) that contains variable\-specific titles, as well as information about the source.
```
bmi_plot <- function(.data, x_var) {
ggplot(.data, aes(y = BMI)) +
aes_string(x = x_var) +
geom_jitter(alpha = 0.3) +
geom_smooth() +
labs(
title = paste("BMI by", x_var),
subtitle = "NHANES",
caption = "US National Center for Health Statistics (NCHS)"
)
}
```
The use of the `aes_string()` function is necessary for **ggplot2** to understand that we want to bind the `x` aesthetic to the variable whose name is stored in the `x_var` object, and not a variable that is named `x_var`.
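Note that newer releases of **ggplot2** have deprecated `aes_string()` in favor of the `.data` pronoun from tidy evaluation. A sketch of an equivalent function written that way (here called `bmi_plot_2()`, a hypothetical variant that assumes a reasonably current version of **ggplot2**) is:

```
bmi_plot_2 <- function(data, x_var) {
  # .data[[x_var]] maps the x aesthetic to the column whose name is stored in x_var
  ggplot(data, aes(x = .data[[x_var]], y = BMI)) +
    geom_jitter(alpha = 0.3) +
    geom_smooth() +
    labs(
      title = paste("BMI by", x_var),
      subtitle = "NHANES",
      caption = "US National Center for Health Statistics (NCHS)"
    )
}
```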
We can then call our function on a specific variable.
```
bmi_plot(NHANES, "Age")
```
Or, we can specify a set of variables and then `map()` over that set. Since `map()` always returns a list, and a list of plots is not that useful, we use the `wrap_plots()` function from the **patchwork** package to combine the resulting list of plots into one image.
```
c("Age", "HHIncomeMid", "PhysActiveDays",
"TVHrsDay", "AlcoholDay", "Pulse") %>%
map(bmi_plot, .data = NHANES) %>%
patchwork::wrap_plots(ncol = 2)
```
Figure 7\.6: Relationship between body mass index (BMI) and a series of other variables, for participants in the **NHANES** study.
Figure [7\.6](ch-iteration.html#fig:patchwork) displays the results for six variables.
We won’t show the results of our ultimate goal to produce all 75 plots here, but you can try it for yourself by using the `names()` function to retrieve the full list of variable names.
Or, you could use `across()` to retrieve only those variables that meet a certain condition.
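For instance, a minimal sketch (using `select()` with the `where()` helper as one possible route) for pulling out only the numeric variable names is:

```
# names of the numeric variables in NHANES, which could then be
# passed to map() in the same way as above
numeric_vars <- NHANES %>%
  select(where(is.numeric)) %>%
  names()
head(numeric_vars)
```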
7\.8 Further resources
----------------------
The [chapter](https://adv-r.hadley.nz/functionals.html) on [*functionals*](https://en.wikipedia.org/w/index.php?search=functionals) in H. Wickham (2019\) is the definitive source for understanding **purrr**. The name “functionals” reflects the use of a programming paradigm called [*functional programming*](https://en.wikipedia.org/w/index.php?search=functional%20programming).
For those who are already familiar with the `*apply()` family of functions popular in base R, [Jenny Bryan](https://en.wikipedia.org/w/index.php?search=Jenny%20Bryan) wrote [a helpful tutorial](https://jennybc.github.io/purrr-tutorial/bk01_base-functions.html) that maps these functions to their **purrr** equivalents.
The **rlang** package lays the groundwork for [*tidy evaluation*](https://en.wikipedia.org/w/index.php?search=tidy%20evaluation), which allows you to work programmatically with unquoted variable names. The [programming with `dplyr` vignette](https://dplyr.tidyverse.org/articles/programming.html) is the best place to start learning about tidy evaluation.
Section [C.4](ch-function.html#sec:tidyeval) provides a brief introduction to the principles.
7\.9 Exercises
--------------
**Problem 1 (Easy)**: Use the `HELPrct` data from the `mosaicData` package to calculate the mean of all numeric variables (be sure to exclude missing values).
**Problem 2 (Easy)**: Suppose you want to visit airports in Boston (`BOS`), New York (`JFK`, `LGA`), San Francisco (`SFO`), Chicago (`ORD`, `MDW`), and Los Angeles (`LAX`). You have data about flight delays in a `tibble` called `flights`. You have written a pipeline that, for any given airport code (e.g., `LGA`), will return a `tibble` with two columns, the airport code, and the average arrival delay time.
Suggest a workflow that would be most efficient for computing the average arrival delay time for all seven airports.
**Problem 3 (Medium)**: Use the `purrr::map()` function and the `HELPrct` data frame from the `mosaicData` package to fit a regression model predicting `cesd` as a function of `age` separately for each of the levels of the `substance` variable. Generate a table of results (estimates and confidence intervals) for the slope parameter for each level of the grouping variable.
**Problem 4 (Medium)**: The team IDs corresponding to Brooklyn baseball teams from the `Teams` data frame from the `Lahman` package are listed below. Use `map_int()` to find the number of seasons in which each of those teams played by calling a function called `count_seasons`.
```
library(Lahman)
bk_teams <- c("BR1", "BR2", "BR3", "BR4", "BRO", "BRP", "BRF")
```
**Problem 5 (Medium)**: Use data from the `NHANES` package to create a set of scatterplots of `Pulse` as a function of `Age`, `BMI`, `TVHrsDay`, and `BPSysAve` to create a figure like the last one in the chapter.
Be sure to create appropriate annotations (source, survey name, variables being displayed).
What do you conclude?
**Problem 6 (Hard)**: Use the `group_modify()` function and the `Lahman` data to replicate one of the baseball records plots ([http://tinyurl.com/nytimes\-records](http://tinyurl.com/nytimes-records)) from the *The New York Times*.
7\.10 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-iteration.html\#iteration\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-iteration.html#iteration-online-exercises)
Chapter 8 Data science ethics
=============================
8\.1 Introduction
-----------------
Work in data analytics involves expert knowledge, understanding, and skill.
In much of your work, you will be relying on the trust and confidence that your clients place in you. The term [*professional ethics*](https://en.wikipedia.org/w/index.php?search=professional%20ethics) describes the special responsibilities not to take unfair advantage of that trust. This involves more than being thoughtful and using common sense; there are specific professional standards that should guide your actions.
Moreover, due to the potential that their work may be deployed at scale, data scientists must anticipate how their work could be used by others and wrestle with any ethical implications.
The best\-known professional standards are those in the [*Hippocratic Oath*](https://en.wikipedia.org/w/index.php?search=Hippocratic%20Oath) for physicians, which were originally written in the 5th century B.C. Three of the eight principles in the modern version of the oath (Wikipedia 2016\) are presented here because of similarity to standards for data analytics.
1. “I will not be ashamed to say ‘I know not,’ nor will I fail to call in my colleagues when the skills of another are needed for a patient’s recovery.”
2. “I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know.”
3. “I will remember that I remain a member of society, with special obligations to all my fellow human beings, those sound of mind and body as well as the infirm.”
Depending on the jurisdiction, these principles are extended and qualified by law. For instance, notwithstanding the need to “respect the privacy of my patients,” health\-care providers in the United States are required by law to report to appropriate government authorities evidence of child abuse or infectious diseases such as botulism, chicken pox, and cholera.
This chapter introduces principles of professional ethics for data science and gives examples of legal obligations, as well as guidelines issued by professional societies.
There is no official data scientist’s oath—although attempts to forge one exist (National Academies of Science, Engineering, and Medicine 2018\). Reasonable people can disagree about what actions are best, but the existing guidelines provide a description of the ethical expectations on which your clients can reasonably rely. As a consensus statement of professional ethics, the guidelines also establish standards of accountability.
8\.2 Truthful falsehoods
------------------------
The single best\-selling book with “statistics” in the title is *How to Lie with Statistics* by [Darrell Huff](https://en.wikipedia.org/w/index.php?search=Darrell%20Huff) (Huff 1954\). Written in the 1950s, the book shows graphical ploys to fool people even with accurate data. A general method is to violate conventions and tacit expectations that readers rely on when interpreting graphs. One way to think of *How to Lie* is as a text that shows the general public what these tacit expectations are and gives tips for detecting when the trick is being played on them. The book’s title, while compelling, has wrongly tarred the field of statistics. The “statistics” of the title are really just “numbers.” The misleading graphical techniques are employed by politicians, journalists, and businessmen, not statisticians. More accurate titles would be “How to Lie with Numbers,” or “Don’t be misled by graphics.”
Some of the graphical tricks in *How to Lie* are still in use. Consider these three recent examples.
### 8\.2\.1 Stand your ground
In 2005, the [*Florida legislature*](https://en.wikipedia.org/w/index.php?search=Florida%20legislature) passed the controversial [“Stand Your Ground” law](https://en.wikipedia.org/wiki/Stand-your-ground_law) that broadened the situations in which citizens can use lethal force to protect themselves against perceived threats. Advocates believed that the new law would ultimately reduce crime; opponents feared an increase in the use of lethal force. What was the actual outcome?
The graphic in Figure [8\.1](ch-ethics.html#fig:florida) is a reproduction of one published by the news service [Reuters](http://static5.businessinsider.com/image/53038b556da8110e5ce82be7-604-756/florida%20gun%20deaths.jpg) on February 16, 2014 showing the number of firearm murders in Florida over the years.
Upon first glance, the graphic gives the visual impression that right after the passage of the 2005 law, the number of murders decreased substantially.
However, the numbers tell a different story.
Figure 8\.1: Reproduction of a data graphic reporting the number of gun deaths in Florida over time. The original image was published by Reuters.
The convention in data graphics is that up corresponds to increasing values. This is not an obscure convention—rather, it’s a standard part of the secondary school curriculum.
Close inspection reveals that the \\(y\\)\-axis in Figure [8\.1](ch-ethics.html#fig:florida) has been flipped upside down—the number of gun deaths increased sharply after 2005\.
### 8\.2\.2 Global temperature
Figure [8\.2](ch-ethics.html#fig:climate) shows another example of misleading graphics: a tweet by the news magazine *National Review* on the subject of climate change.
The dominant visual impression of the graphic is that global temperature has hardly changed at all.
Figure 8\.2: A tweet by *National Review* on December 14, 2015 showing the change in global temperature over time. The tweet was later deleted.
There is a tacit graphical convention that the coordinate scales on which the data are plotted are relevant to an informed interpretation of the data. The \\(x\\)\-axis follows the convention—1880 to 2015 is a reasonable choice when considering the relationship between human industrial activity and climate. The \\(y\\)\-axis, however, is utterly misleading. The scale goes from \\(\-10\\) to 110 degrees [*Fahrenheit*](https://en.wikipedia.org/w/index.php?search=Fahrenheit).
While this is a relevant scale for showing *season\-to\-season* variation in temperature, that is not the salient issue with respect to climate change.
The concern with climate change is about rising ocean levels, intensification of storms, ecological and agricultural disruption, etc.
These are the anticipated results of a change in global *average* temperature on the order of 5 degrees Fahrenheit.
The *National Review* graphic has obscured the data by showing them on an irrelevant scale where the actual changes in temperature are practically invisible.
By graying out the numbers on the \\(y\\)\-axis, the *National Review* makes it even harder to see the trick that’s being played.
The tweet was subsequently deleted.
### 8\.2\.3 COVID\-19 reporting
In May 2020, the state of [*Georgia*](https://en.wikipedia.org/w/index.php?search=Georgia) published [a highly misleading graphical display of COVID\-19 cases](https://www.vox.com/covid-19-coronavirus-us-response-trump/2020/5/18/21262265/georgia-covid-19-cases-declining-reopening) (see Figure [8\.3](ch-ethics.html#fig:covidga)).
Note that the results for April 17th appear to the right of April 19th, and that the counties are ordered such that all of the results are monotonically decreasing for each reporting period.
The net effect of the graph is to demonstrate that confirmed COVID cases are decreasing, but it does so in a misleading fashion.
Public outcry led to a statement from the governor’s office that moving forward, chronological order would be used to display time.
Figure 8\.3: A recreation of a misleading display of confirmed COVID\-19 cases in Georgia.
8\.3 Role of data science in society
------------------------------------
The examples in Figures [8\.1](ch-ethics.html#fig:florida), [8\.2](ch-ethics.html#fig:climate), and [8\.3](ch-ethics.html#fig:covidga) are not about lying with statistics.
Statistical methodology doesn’t enter into them.
It’s the professional ethics of journalism that the graphics violate, aided and abetted by an irresponsible ignorance of statistical methodology.
Insofar as the graphics concern matters of political controversy, they can be seen as part of the political process.
While politics is a profession, it’s a profession without any comprehensive standard of professional ethics.
As data scientists, what role do we play in shaping public discourse? What responsibilities do we have?
The stakes are high, and context matters.
The misleading data graphic about the “Stand Your Ground” law was published about six months after [George Zimmerman](https://en.wikipedia.org/w/index.php?search=George%20Zimmerman) was acquitted for [killing](https://en.wikipedia.org/wiki/Shooting_of_Trayvon_Martin) [Trayvon Martin](https://en.wikipedia.org/w/index.php?search=Trayvon%20Martin).
Did the data graphic affect public perception in the wake of this tragedy?
The *National Review* tweet was published during the thick of the presidential primaries leading up to the 2016 election, and the publication is a leading voice in conservative political thought.
[*Pew Research*](https://en.wikipedia.org/w/index.php?search=Pew%20Research) reports that while concern about climate change has increased steadily among those who lean Democratic since 2013 (88% said climate change is “a major threat to the United States” in 2020, up from 58% in 2013\), [it did not increase at all among those who lean Republican from 2010 to mid\-2019](https://www.pewresearch.org/fact-tank/2020/04/16/u-s-concern-about-climate-change-is-rising-but-mainly-among-democrats/), holding steady at 25%.
Did the *National Review* persuade their readers to dismiss the [*scientific consensus on climate change*](https://en.wikipedia.org/w/index.php?search=scientific%20consensus%20on%20climate%20change)?
The misleading data graphic about COVID\-19 cases in Georgia was published during a time that Governor [Brian Kemp](https://en.wikipedia.org/w/index.php?search=Brian%20Kemp)’s reopening plan was facing stiff criticism from Atlanta mayor [Keisha Lance Bottoms](https://en.wikipedia.org/w/index.php?search=Keisha%20Lance%20Bottoms), his former opponent in the governor’s race [Stacey Abrams](https://en.wikipedia.org/w/index.php?search=Stacey%20Abrams), and even President [Donald Trump](https://en.wikipedia.org/w/index.php?search=Donald%20Trump).
[Journalists called attention to the data graphic on May 10th](https://www.businessinsider.com/graph-shows-georgia-bungling-coronavirus-data-2020-5).
The [Georgia Department of Health itself reports](https://dph.georgia.gov/covid-19-daily-status-report) that the 7\-day moving average for COVID\-19 cases increased by more than 125 cases per day during the two weeks following May 10th.
Did the Georgia governor’s office convince people to ignore the risk of COVID\-19?
These unanswered (and intentionally provocative) questions are meant to encourage you to see the deep and not always obvious ways in which data science work connects to society at\-large.
8\.4 Some settings for professional ethics
------------------------------------------
Common sense is a good starting point for evaluating the ethics of a situation. Tell the truth. Don’t steal. Don’t harm innocent people.
But professional ethics also require an informed assessment.
A dramatic illustration of this comes from legal ethics: a situation where the lawyers for an accused murderer found the bodies of two victims whose deaths were unknown to authorities and to the victims’ families.
The lawyers’ duty of confidentiality to their client precluded them from following their hearts and reporting the discovery.
The lawyers’ careers were destroyed by the public and political recriminations that followed, yet courts and legal scholars have confirmed that the lawyers were right to do what they did, and have even held them up as heroes for their ethical behavior.
Such extreme drama is rare. This section describes in brief six situations that raise questions of the ethical course of action.
Some are drawn from the authors’ personal experience, others from court cases and other reports.
The purpose of these short case reports is to raise questions.
Principles for addressing those questions are the subject of the next section.
### 8\.4\.1 The chief executive officer
One of us once worked as a statistical consultant for a client who wanted a proprietary model to predict commercial outcomes.
After reviewing the literature, an existing multiple linear regression model was found that matched the scenario well, and available public data were used to fit the parameters of the model.
The client’s staff were pleased with the result, but the CEO wanted a model that would give a competitive advantage.
After all, their competitors could easily follow the same process to the same model, so what advantage would the client’s company have?
The CEO asked the statistical consultant whether the coefficients in the model could be “tweaked” to reflect the specific values of his company.
The consultant suggested that this would not be appropriate, that the fitted coefficients best match the data and to change them arbitrarily would be “playing God.”
In response, the CEO rose from his chair and asserted, “I want to play God.”
How should the consultant respond?
### 8\.4\.2 Employment discrimination
One of us works on legal cases arising from audits of employers, conducted by the [*United States Office of Federal Contract Compliance Programs*](https://en.wikipedia.org/w/index.php?search=United%20States%20Office%20of%20Federal%20Contract%20Compliance%20Programs) (OFCCP).
In a typical case, the OFCCP asks for hiring and salary data from a company that has a contract with the United States government.
The company usually complies, sometimes unaware that the OFCCP applies a method to identify “discrimination” through a two\-standard\-deviation test outlined in the Uniform Guidelines on Employee Selection Procedures (UGESP).
A company that does not discriminate has some risk of being labeled as discriminating by the OFCCP method (Bridgeford 2014\).
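To see why, here is a minimal simulation sketch of a stylized two\-standard\-deviation comparison of selection rates (not the exact OFCCP procedure; it assumes **purrr** is loaded and uses hypothetical group sizes of 200 and 50 applicants with a common 30% hire rate, so this employer is not discriminating by construction). Even so, the comparison flags a difference in a few percent of the simulated audits.

```
set.seed(2014)
n_sims <- 10000
flagged <- map_lgl(1:n_sims, ~ {
  # both groups are hired at the same 30% rate
  hires_a <- rbinom(1, size = 200, prob = 0.3)
  hires_b <- rbinom(1, size = 50, prob = 0.3)
  rate_a <- hires_a / 200
  rate_b <- hires_b / 50
  pooled <- (hires_a + hires_b) / 250
  se <- sqrt(pooled * (1 - pooled) * (1 / 200 + 1 / 50))
  abs(rate_a - rate_b) > 2 * se
})
# proportion of simulated audits in which the non-discriminating employer is flagged
mean(flagged)
```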
By using a questionable statistical method, is the OFCCP acting unethically?
### 8\.4\.3 “Gaydar”
Y. Wang and Kosinski (2018\) used a deep neural network (see Section [11\.1\.5](ch-learningI.html#sec:neuralnet)) and logistic regression to build a classifier (see Chapter [10](ch-modeling.html#ch:modeling)) for sexual orientation based on pictures of people’s faces. The authors claim that if given five images of a person’s face, their model would correctly predict the sexual orientation of 91% of men and 83% of women. The authors highlight the potential harm that their work could do in their abstract:
> “Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.”
[A subsequent article in *The New Yorker*](https://www.newyorker.com/news/daily-comment/the-ai-gaydar-study-and-the-real-dangers-of-big-data) also notes that:
> “the study consisted entirely of white faces, but only because the dating site had served up too few faces of color to provide for meaningful analysis.”
Was this research ethical? Were the authors justified in creating and publishing this model?
### 8\.4\.4 Race prediction
Imai and Khanna (2016\) built a racial prediction algorithm using a Bayes classifier (see Section [11\.1\.4](ch-learningI.html#sec:bayes)) trained on voter registration records from Florida and the U.S. Census Bureau’s name list.
In addition to publishing the paper detailing the methodology, the authors published the software for the classifier on [*GitHub*](https://en.wikipedia.org/w/index.php?search=GitHub) under an open\-source license. The **wru** package is available on CRAN and will return predicted probabilities for a person’s race based on either their last name alone, or their last name and their address.
```
library(tidyverse)
library(wru)
predict_race(voter.file = voters, surname.only = TRUE) %>%
select(surname, pred.whi, pred.bla, pred.his, pred.asi, pred.oth)
```
```
[1] "Proceeding with surname-only predictions..."
```
```
surname pred.whi pred.bla pred.his pred.asi pred.oth
4 Khanna 0.0676 0.0043 0.00820 0.8668 0.05310
2 Imai 0.0812 0.0024 0.06890 0.7375 0.11000
8 Velasco 0.0594 0.0026 0.82270 0.1051 0.01020
1 Fifield 0.9356 0.0022 0.02850 0.0078 0.02590
10 Zhou 0.0098 0.0018 0.00065 0.9820 0.00575
7 Ratkovic 0.9187 0.0108 0.01083 0.0108 0.04880
3 Johnson 0.5897 0.3463 0.02360 0.0054 0.03500
5 Lopez 0.0486 0.0057 0.92920 0.0102 0.00630
11 Wantchekon 0.6665 0.0853 0.13670 0.0797 0.03180
6 Morse 0.9054 0.0431 0.02060 0.0072 0.02370
```
Given the long history of [*systemic racism*](https://en.wikipedia.org/w/index.php?search=systemic%20racism) [in the United States](https://en.wikipedia.org/wiki/Institutional_racism#United_States), it is clear how this software could be used to discriminate against people of color.
One of us once partnered with a progressive voting rights organization that wanted to use racial prediction to target members of an ethnic group to *help them register to vote*.
Was the publication of this model ethical?
Does the open\-source nature of the code affect your answer?
Is it ethical to use this software?
Does your answer change depending on the intended use?
### 8\.4\.5 Data scraping
In May 2016, the online OpenPsych Forum published
[a paper](http://openpsych.net/forum/showthread.php?tid=279) by Kirkegaard and Bjerrekær (2016\)
titled “The OkCupid data set: A very large public data set of dating site users.”
The resulting data set contained 2,620 variables—including usernames, gender, and dating preferences—from 68,371 people scraped from the [OkCupid](https://www.okcupid.com) dating website.
The ostensible purpose of the data dump was to provide an interesting open public data set to fellow researchers.
These data might be used to answer questions such as this one suggested in the abstract of the paper: whether the [*zodiac sign*](https://en.wikipedia.org/w/index.php?search=zodiac%20sign) of each user was associated with any of the other variables (spoiler alert: it wasn’t).
The data scraping did not involve any illicit technology such as breaking passwords. Nonetheless, the author received many comments on the OpenPsych Forum challenging the work as an ethical breach and accusing him of [*doxing*](https://en.wikipedia.org/w/index.php?search=doxing) people by releasing personal data. Does the work raise ethical issues?
### 8\.4\.6 Reproducible spreadsheet analysis
In 2010, [*Harvard University*](https://en.wikipedia.org/w/index.php?search=Harvard%20University) economists [Carmen Reinhart](https://en.wikipedia.org/w/index.php?search=Carmen%20Reinhart) and [Kenneth Rogoff](https://en.wikipedia.org/w/index.php?search=Kenneth%20Rogoff) published a report entitled “Growth in a Time of Debt” (Rogoff and Reinhart 2010\), which argued that countries which pursued austerity measures did not necessarily suffer from slow economic growth.
These ideas influenced the thinking of policymakers—notably United States Congressman [Paul Ryan](https://en.wikipedia.org/w/index.php?search=Paul%20Ryan)—during the time of the [*European debt crisis*](https://en.wikipedia.org/w/index.php?search=European%20debt%20crisis).
[*University of Massachusetts*](https://en.wikipedia.org/w/index.php?search=University%20of%20Massachusetts) graduate student [Thomas Herndon](https://en.wikipedia.org/w/index.php?search=Thomas%20Herndon) requested access to the data and analysis contained in the paper. After receiving the original spreadsheet from Reinhart, Herndon found several errors.
> “I clicked on cell L51, and saw that they had only averaged rows 30 through 44, instead of rows 30 through 49\.” —Thomas Herndon (Roose 2013\)
In a critique of the paper, Herndon, Ash, and Pollin (2014\) point out coding errors, selective inclusion of data, and odd weighting of summary statistics that shaped the conclusions of the Reinhart/Rogoff paper.
What ethical questions does publishing a flawed analysis raise?
### 8\.4\.7 Drug dangers
In September 2004, the drug company [*Merck*](https://en.wikipedia.org/w/index.php?search=Merck) withdrew the popular product [*Vioxx*](https://en.wikipedia.org/w/index.php?search=Vioxx) from the market because of evidence that the drug increases the risk of [*myocardial infarction*](https://en.wikipedia.org/w/index.php?search=myocardial%20infarction) (MI), a major type of heart attack.
Approximately 20 million Americans had taken Vioxx up to that point. The leading medical journal *Lancet* later reported an estimate that Vioxx use resulted in 88,000 Americans having heart attacks, of whom 38,000 died.
Vioxx had been approved in May 1999 by the [*United States Food and Drug Administration*](https://en.wikipedia.org/w/index.php?search=United%20States%20Food%20and%20Drug%20Administration) based on tests involving 5,400 subjects.
Slightly more than a year after the FDA approval, a study (Bombardier et al. 2000\) of 8,076 patients published in another leading medical journal, *The New England Journal of Medicine*, established that Vioxx reduced the incidence of severe gastrointestinal events substantially compared to the standard treatment, [*naproxen*](https://en.wikipedia.org/w/index.php?search=naproxen).
That’s good for Vioxx. In addition, the abstract reports these findings regarding heart attacks:
> The incidence of myocardial infarction was lower among patients in the naproxen group than among those in the \[Vioxx] group (0\.1 percent vs. 0\.4 percent; relative risk, 0\.2; 95% confidence interval, 0\.1 to 0\.7\); the overall mortality rate and the rate of death from cardiovascular causes were similar in the two groups."
Read the abstract again carefully. The Vioxx group had a much *higher* rate of MI than the group taking the standard treatment.
This influential report identified the high risk soon after the drug was approved for use.
Yet Vioxx was not withdrawn for another three years. Something clearly went wrong here. Did it involve an ethical lapse?
### 8\.4\.8 Legal negotiations
Lawyers sometimes retain statistical experts to help plan negotiations. In a common scenario, the defense lawyer will be negotiating the amount of damages in a case with the plaintiff’s attorney.
Plaintiffs will ask the statistician to estimate the amount of damages, with a clear but implicit directive that the estimate should reflect the plaintiff’s interests.
Similarly, the defense will ask their own expert to construct a framework that produces an estimate at a lower level.
Is this a game statisticians should play?
8\.5 Some principles to guide ethical action
--------------------------------------------
In Section [8\.1](ch-ethics.html#ethics-intro), we listed three principles from the Hippocratic Oath that has been administered to doctors for hundreds of years. Below, we reprint the three corresponding principles as outlined in the Data Science Oath published by the [*National Academy of Sciences*](https://en.wikipedia.org/w/index.php?search=National%20Academy%20of%20Sciences) (National Academies of Science, Engineering, and Medicine 2018\).
1. I will not be ashamed to say, “I know not,” nor will I fail to call in my colleagues when the skills of another are needed for solving a problem.
2. I will respect the privacy of my data subjects, for their data are not disclosed to me that the world may know, so I will tread with care in matters of privacy and security.
3. I will remember that my data are not just numbers without meaning or context, but represent real people and situations, and that my work may lead to unintended societal consequences, such as inequality, poverty, and disparities due to algorithmic bias.
To date, the Data Science Oath has not achieved the widespread adoption or formal acceptance of its inspiration.
We hope this will change in the coming years.
Another set of ethical guidelines for data science is the [Data Values and Principles](https://datapractices.org/manifesto/) manifesto published by DataPractices.org.
This document espouses four values (inclusion, experimentation, accountability, and impact) and 12 principles that provide a guide for the ethical practice of data science:
1. Use data to improve life for our users, customers, organizations, and communities.
2. Create reproducible and extensible work.
3. Build teams with diverse ideas, backgrounds, and strengths.
4. Prioritize the continuous collection and availability of discussions and metadata.
5. Clearly identify the questions and objectives that drive each project and use to guide both planning and refinement.
6. Be open to changing our methods and conclusions in response to new knowledge.
7. Recognize and mitigate bias in ourselves and in the data we use.
8. Present our work in ways that empower others to make better\-informed decisions.
9. Consider carefully the ethical implications of choices we make when using data, and the impacts of our work on individuals and society.
10. Respect and invite fair criticism while promoting the identification and open discussion of errors, risks, and unintended consequences of our work.
11. Protect the privacy and security of individuals represented in our data.
12. Help others to understand the most useful and appropriate applications of data to solve real\-world problems.
In October 2020, this document had over 2,000 signatories (including two of the authors of this book).
In what follows we explore how these principles can be applied to guide ethical thinking in the several scenarios outlined in the previous section.
### 8\.5\.1 The CEO
You’ve been asked by a company CEO to modify model coefficients from the correct values, that is, from the values found by a generally accepted method.
The stakeholder in this setting is the company.
If your work will involve a method that’s not generally accepted by the professional community, you’re obliged to point this out to the company.
Principles 8 and 12 are germane.
Have you presented your work in a way that empowers others to make better\-informed decisions (principle 8\)?
Certainly your client also has substantial knowledge of how their business works.
It’s important to realize that your client’s needs may not map well onto a particular statistical methodology.
The consultant should work genuinely to understand the client’s whole set of interests (principle 12\).
Often the problem that clients identify is not really the problem that needs to be solved when seen from an expert statistical perspective.
### 8\.5\.2 Employment discrimination
The procedures adopted by the OFCCP are stated using statistical terms like “standard deviation” that themselves suggest that they are part of a legitimate statistical method.
Yet the methods raise significant questions, since by construction they will sometimes label a company that is not discriminating as a discriminator.
Principle 10 suggests the OFCCP should “invite fair criticism” of their methodology.
OFCCP and others might argue that they are not a statistical organization. They are enforcing a law, not participating in research. The OFCCP has a responsibility to the courts.
The courts themselves, including the United States Supreme Court, have not developed or even called for a coherent approach to the use of statistics (although in 1977 the [Supreme Court labeled](http://peopleclick.com/resources/pri/10-09/Statistical_Significance.asp) differences greater than two or three standard deviations as too large to attribute solely to chance).
### 8\.5\.3 “Gaydar”
Principles 1, 3, 7, 9, and 11 are relevant here.
Does the prediction of sexual orientation based on facial recognition improve life for communities (principle 1\)?
As noted in the abstract, the researchers *did* consider the ethical implications of their work (principle 9\), but did they protect the privacy and security of the individuals presented in their data (principle 11\)?
The exclusion of non\-white faces from the study casts doubt on whether the standard outlined in principle 7 was met.
### 8\.5\.4 Race prediction
Clearly, using this software to discriminate against historically marginalized people would violate some combination of principles 3, 7, and 9\.
On the other hand, is it ethical to use this software to try and help underrepresented groups if those same principles are not violated?
The authors of the **wru** package admirably met principle 2, but they may not have fully adhered to principle 9\.
### 8\.5\.5 Data scraping
OkCupid provides public access to data. A researcher uses legitimate means to acquire those data. What could be wrong?
There is the matter of the stakeholders. The collection of data was intended to support psychological research.
The ethics of research involving humans requires that the human not be exposed to any risk for which consent has not been explicitly given.
The OkCupid members did not provide such consent.
Since the data contain information that makes it possible to identify individual humans, there is a realistic risk of the release of potentially embarrassing information, or worse, information that jeopardizes the physical safety of certain users.
Principles 1 and 11 were clearly violated by the authors.
Ultimately, the Danish Data Protection Agency [decided not to file any charges against the authors](https://emilkirkegaard.dk/en/wp-content/uploads/1.1-2016-631-0148-Sagen-afsluttes.pdf).
Another stakeholder is OkCupid itself. Many information providers, like OkCupid, have [*terms of use*](https://en.wikipedia.org/w/index.php?search=terms%20of%20use) that restrict how the data may be legitimately used. Such terms of use (see Section [8\.7\.3](ch-ethics.html#sec:terms-of-use)) form an explicit agreement between the service and the users of that service. They cannot ethically be disregarded.
### 8\.5\.6 Reproducible spreadsheet analysis
The scientific community as a whole is a stakeholder in public research.
Insofar as the research is used to inform public policy, the public as a whole is a stakeholder.
Researchers have an obligation to be truthful in their reporting of research. This is not just a matter of being honest but also of participating in the process by which scientific work is challenged or confirmed.
Reinhart and Rogoff honored this professional obligation by providing reasonable access to their software and data.
In this regard, they complied with principle 10\.
Seen from the perspective of data science, Microsoft Excel, the tool used by Reinhart and Rogoff, is an unfortunate choice.
It mixes the data with the analysis. It works at a low level of abstraction, so it’s difficult to program in a concise and readable way. Commands are customized to a particular size and organization of data, so it’s hard to apply to a new or modified data set.
One of the major strategies in debugging is to work on a data set where the answer is known; this is impractical in Excel.
Programming and revision in Excel generally involves lots of click\-and\-drag copying, which is itself an error\-prone operation.
Data science professionals have an ethical obligation to use tools that are reliable, verifiable, and conducive to reproducible data analysis (see Appendix [D](ch-reproduce.html#ch:reproduce)).
Reinhart and Rogoff did not meet the standard implied by principle 2\.
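As a small illustration of the difference in auditability, here is a sketch (with made\-up numbers, not the Reinhart and Rogoff data) contrasting a hand\-selected cell range with a scripted group summary:

```
library(tidyverse)
set.seed(1234)

# Hypothetical growth figures for two groups of countries
growth <- tibble(
  country_group = rep(c("low_debt", "high_debt"), each = 10),
  gdp_growth = c(rnorm(10, mean = 3), rnorm(10, mean = 2))
)

# A spreadsheet-style mistake reproduced in code: averaging only part of
# the rows, with nothing in the output to reveal the omission
mean(growth$gdp_growth[1:15])

# A scripted summary: the grouping is explicit, covers every row by
# construction, and can be re-run and checked by anyone
growth %>%
  group_by(country_group) %>%
  summarize(mean_growth = mean(gdp_growth))
```

The point is not that errors are impossible in code, but that a scripted analysis leaves a record that reviewers can inspect and re\-run.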
### 8\.5\.7 Drug dangers
When something goes wrong on a large scale, it’s tempting to look for a breach of ethics. This may indeed identify an offender, but we must also beware of creating scapegoats. With Vioxx, there were many claims, counterclaims, and lawsuits. The researchers failed to incorporate some data that were available and provided a misleading summary of results.
The journal editors also failed to highlight the very substantial problem of the increased rate of myocardial infarction with Vioxx.
To be sure, it’s unethical not to include data that undermines the conclusion presented in a paper. The Vioxx researchers, however, were acting according to their original research protocol—a solid professional practice.
What seems to have happened with Vioxx is that the researchers had a theory that the higher rate of infarction was not due to Vioxx, *per se*, but to an aspect of the study protocol that excluded subjects who were being treated with aspirin to reduce the risk of heart attacks.
The researchers believed with some justification that the drug to which Vioxx was being compared, naproxen, was acting as a substitute for aspirin. They were wrong, as subsequent research showed.
Their failure was in not honoring principle 6: they published their results in a misleading way.
Professional ethics dictate that professional standards be applied in work.
Incidents like Vioxx should remind us to work with appropriate humility and to be vigilant to the possibility that our own explanations are misleading us.
### 8\.5\.8 Legal negotiations
In legal cases such as the one described earlier in the chapter, the data scientist has ethical obligations to their client. Depending on the circumstances, they may also have obligations to the court.
As always, you should be forthright with your client. Usually you will be using methods that you deem appropriate, but on occasion you will be directed to use a method that you think is inappropriate.
For instance, we’ve seen occasions when the client requested that the time period of data included in the analysis be limited in some way to produce a “better” result. We’ve had clients ask us to subdivide the data (in employment discrimination cases, say, by job title) in order to change p\-values.
Although such subdivision may be entirely legitimate, the decision about subdividing—seen from a purely statistical point of view—ought to be based on the situation, not the desired outcome (see the discussion of the “garden of forking paths” in Section [9\.7](ch-foundations.html#sec:p-perils)).
Your client is entitled to make such requests. Whether or not you think the method being asked for is the right one doesn’t enter into it. Your professional obligation is to inform the client what the flaws in the proposed method are and how and why you think another method would be better (principle 8\). (See the major exception that follows.)
The legal system in countries such as the U.S. is an *adversarial* system. Lawyers are allowed to frame legal arguments that may be dismissed: They are entitled to enter some facts and not others into evidence. Of course, the opposing legal team is entitled to create their own legal arguments and to cross\-examine the evidence to show how it is incomplete and misleading.
When you are working with a legal team as a data scientist, you are part of the team. The lawyers on the team are the experts about what negotiation strategies and legal theories to use, how to define the limits of the case (such as damages), and how to present their case or negotiate with the other party.
It is a different matter when you are presenting to the court. This might take the form of filing an expert report to the court, testifying as an expert witness, or being deposed.
A deposition is when you are questioned, under oath, outside of the courtroom. You are obliged to answer all questions honestly. (Your lawyer may, however, direct you not to answer a question about privileged communications.)
If you are an expert witness or filing an expert report, the word “expert” is significant. A court will certify you as an expert in a case giving you permission to express your opinions. Now you have professional ethical obligations to apply your expertise honestly and openly in forming those opinions.
When working on a legal case, you should get advice from a legal authority, which might be your client.
Remember that if you do shoddy work, or fail to reply honestly to the other side’s criticisms of your work, your credibility as an expert will be imperiled.
8\.6 Algorithmic bias
---------------------
Algorithms are at the core of many data science models (see Chapter [11](ch-learningI.html#ch:learningI) for a comprehensive introduction).
These models are being used to automate decision\-making in settings as diverse as navigation for self\-driving cars and determinations of risk for recidivism (return to criminal behavior) in the criminal justice system.
The potential for bias to be reinforced when these models are implemented is dramatic.
Biased data may lead to algorithmic bias.
As an example, some groups may be underrepresented or systematically excluded from data collection efforts.
D’Ignazio and Klein (2020\) highlight issues with data collection related to undocumented immigrants.
O’Neil (2016\) details several settings in which algorithmic bias has harmful consequences, whether intended or not.
Consider a criminal recidivism algorithm used in several states and detailed in [a *ProPublica* story](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing) titled “Machine Bias” (Angwin et al. 2016\).
The algorithm returns predictions about how likely a criminal is to commit another crime based on a survey of 137 questions.
*ProPublica* claims that the algorithm is biased:
> “Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.”
How could the predictions be biased, when the race of the defendants is not included in the model?
Consider that one of the survey questions is “was one of your parents ever sent to jail or prison?”
Because of the longstanding relationship between [*race and crime in the United States*](https://en.wikipedia.org/w/index.php?search=race%20and%20crime%20in%20the%20United%20States), Black people are much more likely to have a parent who was sent to prison.
In this manner, the question about the defendant’s parents acts as a [*proxy*](https://en.wikipedia.org/w/index.php?search=proxy) for race.
Thus, even though the recidivism algorithm doesn’t take race into account directly, it learns about race from the data that reflects the centuries\-old inequities in the criminal justice system.
For another example, suppose that this model for recidivism included interactions with the police as an important feature.
It may seem logical to assume that people who have had more interactions with the police are more likely to commit crimes in the future.
However, including this variable would likely lead to bias, since Black people are more likely to have interactions with police, even among those whose underlying probability of criminal behavior is the same (Andrew Gelman, Fagan, and Kiss 2007\).
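The way a proxy can smuggle group information into a "race\-blind" model can be seen in a small simulation. This is a hypothetical sketch with simulated data, not the actual recidivism algorithm or its training data:

```
library(tidyverse)
set.seed(451)

# Both groups offend at the SAME underlying rate, but group B is policed
# more heavily: its offenses are more likely to be recorded, and its
# members accumulate more prior police contacts.
n <- 50000
sim <- tibble(
  group = sample(c("A", "B"), n, replace = TRUE),
  offend = rbinom(n, 1, 0.20),
  police_contacts = rpois(n, ifelse(group == "B", 2, 0.5)),
  recorded = rbinom(n, 1, offend * ifelse(group == "B", 0.9, 0.5))
)

# The model never sees the group variable, only the proxy
mod <- glm(recorded ~ police_contacts, data = sim, family = binomial)

sim %>%
  mutate(risk = predict(mod, type = "response")) %>%
  group_by(group) %>%
  summarize(
    true_offense_rate = mean(offend),  # essentially identical across groups
    mean_predicted_risk = mean(risk)   # noticeably higher for group B
  )
```

Even though the true offense rate is the same for both groups, the model assigns systematically higher risk scores to group B, because the proxy carries information about group membership and about how offenses get recorded.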
Data scientists need to ensure that model assessment, testing, accountability, and transparency are integrated into their analysis to identify and counteract bias and maximize fairness.
8\.7 Data and disclosure
------------------------
### 8\.7\.1 Reidentification and disclosure avoidance
The ability to link multiple data sets and to use public information to identify individuals is a growing problem. A glaring example of this occurred in 1996 when then\-Governor of Massachusetts [William Weld](https://en.wikipedia.org/w/index.php?search=William%20Weld) collapsed while attending a graduation ceremony at [*Bentley College*](https://en.wikipedia.org/w/index.php?search=Bentley%20College).
An [*MIT*](https://en.wikipedia.org/w/index.php?search=MIT) graduate student used information from a public data release by the [*Massachusetts Group Insurance Commission*](https://en.wikipedia.org/w/index.php?search=Massachusetts%20Group%20Insurance%20Commission) to identify Weld’s subsequent hospitalization records.
The disclosure of this information was [highly publicized](http://healthaffairs.org/blog/2012/08/10/the-debate-over-re-identification-of-health-information-what-do-we-risk/) and led to many changes in data releases.
This was a situation where the right balance was not struck between disclosure (to help improve health care and control costs) and nondisclosure (to help ensure private information is not made public).
There are many challenges to ensure disclosure avoidance (Zaslavsky and Horton 1998; Ohm 2010\). This remains an
active and important area of research.
The [*Health Insurance Portability and Accountability Act*](https://en.wikipedia.org/w/index.php?search=Health%20Insurance%20Portability%20and%20Accountability%20Act) (HIPAA) was passed by the United States Congress in 1996—the same year as Weld’s illness.
The law augmented and clarified the role that researchers and medical care providers had in maintaining protected health information (PHI).
The HIPAA regulations developed since then specify procedures to ensure that individually identifiable PHI is protected when it is transferred, received, handled, analyzed, or shared.
As an example, detailed geographic information (e.g., home or office location) is not allowed to be shared unless there is an overriding need.
For research purposes, geographic information might be limited to state or territory, though for certain rare diseases or characteristics even this level of detail may lead to disclosure.
Those whose PHI is not protected can file a complaint with the Office of Civil Rights.
The HIPAA structure, while limited to medical information, provides a useful model for disclosure avoidance that is relevant to other data scientists.
Parties accessing PHI need to have privacy policies and procedures.
They must identify a privacy official and undertake training of their employees. If there is a disclosure they must
mitigate the effects to the extent practical.
There must be reasonable data safeguards to prevent intentional or unintentional use.
Covered entities may not retaliate against someone for assisting in investigations of disclosures.
Organizations must maintain records and documentation for six years after their last use of the data.
Similar regulations protect information collected by the statistical agencies of the United States.
### 8\.7\.2 Safe data storage
Inadvertent disclosures of data can be even more damaging than planned disclosures.
Stories abound of protected data being made available on the internet with subsequent harm to those whose information is made accessible.
Such releases may be due to misconfigured databases, malware, theft, or by posting on a public forum.
Each individual and organization needs to practice safe computing, to regularly audit their systems, and to implement plans to address computer and data security.
Such policies need to ensure that protections remain even when equipment is transferred or disposed of.
### 8\.7\.3 Data scraping and terms of use
A different issue arises relating to the legal status of material on the Web.
Consider [*Zillow.com*](https://en.wikipedia.org/w/index.php?search=Zillow.com), an online real\-estate database company that combines data from a number of public and private sources to generate house price and rental information on more than 100 million homes across the United States.
Zillow has made access to their database
available through an API (see Section [6\.4\.2](ch-dataII.html#sec:apis)) under certain restrictions.
The terms of use for Zillow are provided in a [legal document](http://www.zillow.com/howto/api/APITerms.htm).
They require that users of the API consider the data on an “as is” basis, not replicate functionality of the Zillow website or mobile app, not retain any copies of the Zillow data, not separately extract data elements to enhance other data files, and not use the data for direct marketing.
Another common form for terms of use is a limit to the amount or frequency of access. Zillow’s API is limited to 1,000 calls per day to the home valuations or property details. Another example: [*The Weather Underground*](https://en.wikipedia.org/w/index.php?search=The%20Weather%20Underground) maintains an API focused on weather information.
They provide no\-cost access limited to 500 calls per day and 10 calls per minute and with no access to historical information.
They have a for\-pay system with multiple tiers for accessing more extensive data.
Data are not just content in tabular form. Text is also data.
Many websites have restrictions on text mining.
[*Slate.com*](https://en.wikipedia.org/w/index.php?search=Slate.com), for example, states that users may not:
> “Engage in unauthorized spidering, scraping, or harvesting of content or information, or use any other unauthorized automated means to compile information.”
Apparently, it violates the Slate.com terms of use to compile a compendium of Slate articles (even for personal use) without their authorization.
To get authorization, you need to ask for it.
[Albert Y. Kim](https://en.wikipedia.org/w/index.php?search=Albert%20Y.%20Kim) of [*Smith College*](https://en.wikipedia.org/w/index.php?search=Smith%20College) published data with information for 59,946 [*San Francisco*](https://en.wikipedia.org/w/index.php?search=San%20Francisco) OkCupid users (a free online dating website) with the permission of the president of OkCupid (Kim and Escobedo\-Land 2015\).
To help minimize possible damage, he also removed certain variables (e.g., username) that would make it more straightforward to reidentify the profiles.
Contrast the concern for privacy taken here to the careless doxing of OkCupid users mentioned above.
8\.8 Reproducibility
--------------------
Disappointingly often, even the original researchers are unable to reproduce their own results upon revisitation.
This failure arises naturally enough when researchers use menu\-driven software that does not keep an audit trail of each step in the process.
For instance, in [*Excel*](https://en.wikipedia.org/w/index.php?search=Excel), the process of sorting data is not recorded. You can’t look at a spreadsheet and determine what range of data was sorted, so mistakes in selecting cases or variables for a sort are propagated untraceably through the subsequent analysis.
Researchers commonly use tools like word processors that do not mandate an explicit tie between the result presented in a publication and the analysis that produced the result. These seemingly innocuous practices contribute to the loss of reproducibility: numbers may be copied by hand into a document and graphics are cut\-and\-pasted into the report. (Imagine that you have inserted a graphic into a report in this way. How could you, or anyone else, easily demonstrate that the correct graphic was selected for inclusion?)
We describe [*reproducible analysis*](https://en.wikipedia.org/w/index.php?search=reproducible%20analysis) as the practice of recording each and every step, no matter how trivial seeming, in a data analysis. The main elements of a reproducible analysis plan (as described by [Project TIER](https://www.haverford.edu/project-tier)) include:
* **Data**: all original data files in the form in which they originated,
* **Metadata**: codebooks and other information needed to understand the data,
* **Commands**: the computer code needed to extract, transform, and load the data—then run analyses, fit models,
generate graphical displays, and
* **Map**: a file that maps between the output and the results in the report.
The [*American Statistical Association*](https://en.wikipedia.org/w/index.php?search=American%20Statistical%20Association) (ASA) notes the importance of reproducible analysis in its curricular guidelines.
The development of new tools such as **R** Markdown and **knitr** have dramatically improved the usability of these methods in practice.
See Appendix [D](ch-reproduce.html#ch:reproduce) for an introduction to these tools.
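To make the idea concrete, here is a minimal sketch of a fully scripted workflow in the spirit of the Project TIER elements listed above. The file and variable names are hypothetical:

```
library(tidyverse)

# Data: the original file is read as-is and never edited by hand
raw <- read_csv("data/original/survey_raw.csv")

# Commands: every transformation is recorded as code
clean <- raw %>%
  filter(!is.na(income)) %>%
  mutate(log_income = log(income))
write_csv(clean, "data/processed/survey_clean.csv")

# The figure in the report is regenerated from code, never pasted by hand
p <- ggplot(clean, aes(x = log_income)) +
  geom_histogram(bins = 30)
ggsave("figures/fig_income_hist.png", plot = p)

# Record the software environment so others can reproduce the run
sessionInfo()
```

Rendering a report with **R** Markdown goes one step further by tying each number and figure in the document directly to the code chunk that produced it.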
Individuals and organizations have been working to develop
protocols to facilitate making the data analysis process more transparent and to integrate this into the workflow of practitioners and students.
One of us has worked as part of a research project team at the [*Channing Laboratory*](https://en.wikipedia.org/w/index.php?search=Channing%20Laboratory) at [*Harvard University*](https://en.wikipedia.org/w/index.php?search=Harvard%20University).
As part of the vetting process for all manuscripts, an analyst outside of the research team is required to review all programs used to generate results.
In addition, another individual is responsible for checking each number in the paper to ensure that it was correctly transcribed from the results.
Similar practice is underway at [The Odum Institute for Research in Social Science](http://www.irss.unc.edu/odum/home2.jsp) at the [*University of North Carolina*](https://en.wikipedia.org/w/index.php?search=University%20of%20North%20Carolina).
This organization performs third\-party code and data verification for several political science journals.
### 8\.8\.1 Example: Erroneous data merging
In Chapter [5](ch-join.html#ch:join), we discuss how the [*join*](https://en.wikipedia.org/w/index.php?search=join) operation can be used to merge two data tables together.
Incorrect merges can be very difficult to unravel unless the exact details of the merge have been recorded.
The **dplyr** `inner_join()` function simplifies this process.
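For instance, here is a sketch of how a scripted merge leaves a visible, checkable record. The table and key names are hypothetical, loosely patterned on the lab\-results\-and\-survey setting described next:

```
library(tidyverse)

# Hypothetical tables keyed by subject_id
lab <- tibble(subject_id = c(1, 2, 3, 5), crp = c(0.4, 1.2, 0.8, 2.1))
survey <- tibble(subject_id = c(1, 2, 3, 4), depressed = c(0, 1, 0, 1))

# The merge is recorded in code, including the key that was used
merged <- inner_join(lab, survey, by = "subject_id")

# Simple checks that a hand-done spreadsheet merge would not leave behind
nrow(lab)                                 # rows before the join
nrow(merged)                              # rows after the join
anti_join(lab, survey, by = "subject_id") # lab results with no survey match
```

A sudden drop in row counts, or a non\-empty `anti_join()` result, is exactly the kind of red flag that can catch a faulty merge before publication.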
In a 2013 paper published in the journal *Brain, Behavior, and Immunity*, Kern et al. reported a link between immune response and depression. To their credit, the authors later noticed that the results were the artifact of a faulty data merge between the lab results and other survey data. A retraction (Kern et al. 2013\), as well as a corrected paper reporting negative results (Kern et al. 2014\), was published in the same journal.
In some ways, this is science done well—ultimately the correct negative result was published, and the authors acted ethically by alerting the journal editor to their mistake.
However, the error likely would have been caught earlier had the authors adhered to stricter standards of reproducibility (see Appendix [D](ch-reproduce.html#ch:reproduce)) in the first place.
8\.9 Ethics, collectively
-------------------------
Although science is carried out by individuals and teams, the scientific community as a whole is a stakeholder.
Some of the ethical responsibilities faced by data scientists are created by the collective nature of the enterprise.
A team of [*Columbia University*](https://en.wikipedia.org/w/index.php?search=Columbia%20University) scientists discovered that a former post\-doc in the group, unbeknownst to the others, had fabricated and falsified research reported in articles in the journals *Cell* and *Nature*.
Needless to say, the post\-doc had violated his ethical obligations both with respect to his colleagues and to the scientific enterprise as a whole.
When the misconduct was discovered, the other members of the team incurred an ethical obligation to the scientific community.
In fulfillment of this obligation, they notified the journals and [retracted](http://retractionwatch.com/2015/06/17/columbia-biologists-deeply-regret-nature-retraction-after-postdoc-faked-74-panels-in-3-papers/) the papers, which had been highly cited.
To be sure, such episodes can tarnish the reputation of even the innocent team members, but the ethical obligation outweighs the desire to protect one’s reputation.
Perhaps surprisingly, there are situations where it is not ethical *not* to publish one’s work. [*Publication bias*](https://en.wikipedia.org/w/index.php?search=Publication%20bias) (or the “file\-drawer problem”) refers to the situation where reports of statistically significant (i.e., \\(p\<0\.05\\)) results are much more likely to be published than reports where the results are not statistically significant.
In many settings, this bias is for the good; a lot of scientific work is in the pursuit of hypotheses that turn out to be wrong or ideas that turn out not to be productive.
But with many research teams investigating similar ideas, or even with a single research team that goes down many parallel paths, the meaning of “statistically significant” becomes clouded and corrupt.
Imagine 100 parallel research efforts to investigate the effect of a drug that in reality has no effect at all. Roughly five of those efforts are expected to culminate in a misleadingly “statistically significant” (\\(p \< 0\.05\\)) result.
Combine this with publication bias and the scientific literature might consist of reports on just the five projects that happened to be significant.
In isolation, five such reports would be considered substantial evidence about the (non\-null) effect of the drug.
It might seem unlikely that there would be 100 parallel research efforts on the same drug, but at any given time there are tens of thousands of research efforts, any one of which has a 5% chance of producing a significant result even if there were no genuine effect.
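A quick simulation makes this arithmetic concrete. The sketch below generates data for 100 studies of a treatment that truly has no effect:

```
set.seed(1066)

# 100 independent studies comparing treatment and control groups of 50
# subjects each, where the treatment does nothing at all
p_values <- replicate(100, {
  treatment <- rnorm(50)
  control <- rnorm(50)
  t.test(treatment, control)$p.value
})

sum(p_values < 0.05) # typically around 5 studies are "significant" by chance
```

If only the handful of "significant" studies are written up and published, the literature ends up reporting an effect that does not exist.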
The [*American Statistical Association*](https://en.wikipedia.org/w/index.php?search=American%20Statistical%20Association)’s ethical guidelines state, “Selecting the one ‘significant’ result from a multiplicity of parallel tests poses a grave risk of an incorrect conclusion.
Failure to disclose the full extent of tests and their results in such a case would be highly misleading.” So, if you’re examining the effect of five different foods on five different measures of health, and you find that broccoli consumption has a statistically significant relationship with the development of colon cancer, not only should you be skeptical, but you should also include in your report the null results for the other 24 tests or perform an appropriate statistical correction to account for the multiple tests.
Often, there may be several different outcome measures, several different food types, and several potential covariates (age, sex, whether breastfed as an infant, smoking, the geographical area of residence or upbringing, etc.), so it’s easy to be performing dozens or hundreds of different tests without realizing it.
For clinical health trials, there are efforts to address this problem through trial registries.
In such registries (e.g., <https://clinicaltrials.gov>), researchers provide their study design and analysis protocol in advance and post results.
8\.10 Professional guidelines for ethical conduct
-------------------------------------------------
This chapter has outlined basic principles of professional ethics.
Usefully, several organizations have developed detailed statements on topics such as professionalism, integrity of data and methods, responsibilities to stakeholders, conflicts of interest, and the response to allegations of misconduct. One good source is the framework for professional ethics endorsed by the [*American Statistical Association*](https://en.wikipedia.org/w/index.php?search=American%20Statistical%20Association) (ASA) (Committee on Professional Ethics 1999\).
The Committee on Science, Engineering, and Public Policy of the National Academy of Sciences, National Academy of Engineering, and Institute of Medicine has published the third edition of *On Being a Scientist: A Guide to Responsible Conduct in Research*. The guide is structured into a number of chapters, many of which are highly relevant for data scientists (including “the Treatment of Data,” “Mistakes and Negligence,” “Sharing of Results,” “Competing Interests, Commitment, and Values,” and “The Researcher in Society”).
The [*Association for Computing Machinery*](https://en.wikipedia.org/w/index.php?search=Association%20for%20Computing%20Machinery) (ACM)—the world’s largest computing society, with more than 100,000 members—adopted a code of ethics in 1992 that was revised in 2018 (see [https://www.acm.org/about/code\-of\-ethics](https://www.acm.org/about/code-of-ethics)).
Other relevant statements and codes of conduct have been promulgated by the [Data Science Association](http://www.datascienceassn.org/code-of-conduct.html), the [International Statistical Institute](http://www.isi-web.org/about-isi/professional-ethics), and the [United Nations Statistics Division](http://unstats.un.org/unsd/dnss/gp/fundprinciples.aspx).
The [Belmont Report](http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html) outlines ethical
principles and guidelines for the protection of human research subjects.
8\.11 Further resources
-----------------------
For a book\-length treatment of ethical issues in statistics, see Hubert and Wainer (2012\).
The National Academies report on data science for undergraduates (National Academies of Science, Engineering, and Medicine 2018\) included data ethics as a key component of data acumen.
The report also included a draft oath for data scientists.
A historical perspective on the ASA’s Ethical Guidelines for Statistical Practice can be found in Ellenberg (1983\).
The University of Michigan provides an EdX course on “[Data Science Ethics](https://www.edx.org/course/data-science-ethics-michiganx-ds101x).”
[Carl Bergstrom](https://en.wikipedia.org/w/index.php?search=Carl%20Bergstrom) and [Jevin West](https://en.wikipedia.org/w/index.php?search=Jevin%20West) developed a course “Calling Bullshit: Data Reasoning in a Digital World.”
Course materials and related resources can be found at <https://callingbullshit.org>.
[Andrew Gelman](https://en.wikipedia.org/w/index.php?search=Andrew%20Gelman) has written a column on ethics in statistics in *CHANCE* for the past several years (see, for example Andrew Gelman (2011\); Andrew Gelman and Loken (2012\); Andrew Gelman (2012\); Andrew Gelman (2020\)).
*Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy* describes a number of frightening misuses of big data and algorithms (O’Neil 2016\).
The *Teach Data Science* blog has a series of entries focused on data ethics (<https://teachdatascience.com>).
D’Ignazio and Klein (2020\) provide a comprehensive introduction to data feminism (in contrast to data ethics).
The ACM Conference on Fairness, Accountability, and Transparency (FAccT) provides a cross\-disciplinary focus on data ethics issues (<https://facctconference.org/2020>).
The [Center for Open Science](https://cos.io/)—which develops the [Open Science Framework](https://osf.io/) (OSF)—is an organization that promotes openness, integrity, and reproducibility in scientific research.
The OSF provides an online platform for researchers to publish their scientific projects.
[Emil Kirkegaard](https://en.wikipedia.org/w/index.php?search=Emil%20Kirkegaard) used OSF to publish his OkCupid data set.
The [Institute for Quantitative Social Science](http://www.iq.harvard.edu/) at Harvard and the [Berkeley Initiative for Transparency in the Social Sciences](http://www.bitss.org/) are two other organizations working to promote reproducibility in social science research.
The [*American Political Science Association*](https://en.wikipedia.org/w/index.php?search=American%20Political%20Science%20Association) has incorporated the [Data Access and Research Transparency](http://www.dartstatement.org/) (DA\-RT) principles into its ethics guide.
The Consolidated Standards of Reporting Trials (CONSORT) statement at ([http://www.consort\-statement.org](http://www.consort-statement.org)) provides detailed guidance on the analysis and reporting of clinical trials.
Many more examples of how irreproducibility has led to scientific errors are available at <http://retractionwatch.com/>.
For example, [a study linking severe illness and divorce rates](http://retractionwatch.com/2015/09/10/divorce-study-felled-by-a-coding-error-gets-a-second-chance/#more-32151) was retracted due to a coding mistake.
8\.12 Exercises
---------------
**Problem 1 (Easy)**: A researcher is interested in the relationship of weather to sentiment (positivity or negativity of posts) on Twitter. They want to scrape data from <https://www.wunderground.com> and join that to Tweets in that geographic area at a particular time. One complication is that Weather Underground limits the number of data points that can be downloaded for free using their API (application program interface). The researcher sets up six free accounts to allow them to collect the data they want in a shorter time\-frame. What ethical guidelines are violated by this approach to data scraping?
**Problem 2 (Medium)**: A data scientist compiled data from several public sources (voter registration, political contributions, tax records) that were used to predict sexual orientation of individuals in a community. What ethical considerations arise that should guide use of such data sets?
**Problem 3 (Medium)**: A statistical analyst carried out an investigation of the association of gender and teaching evaluations at a university. They undertook exploratory analysis of the data and carried out a number of bivariate comparisons. The multiple items on the teaching evaluation were consolidated to a single measure based on these exploratory analyses. They used this information to construct a multivariable regression model that found evidence for biases. What issues might arise based on such an analytic approach?
**Problem 4 (Medium)**: In 2006, AOL released a database of search terms that users had used in the prior month (see <http://www.nytimes.com/2006/08/09/technology/09aol.html>). Research this disclosure and the reaction that ensued. What ethical issues are involved? What potential impact has this disclosure had?
**Problem 5 (Medium)**: A reporter carried out a clinical trial of chocolate where a small number of overweight subjects who had received medical clearance were randomized to either eat dark chocolate or not to eat dark chocolate. They were followed for a period and their change in weight was recorded from baseline until the end of the study. More than a dozen outcomes were recorded and one proved to be significantly different in the treatment group than the outcome. This study was publicized and received coverage from a number of magazines and television programs. Outline the ethical considerations that arise in this situation.
**Problem 6 (Medium)**: A *Slate* article ([http://tinyurl.com/slate\-ethics](http://tinyurl.com/slate-ethics)) discussed whether race/ethnicity should be included in a predictive model for how long a homeless family would stay in
homeless services. Discuss the ethical considerations involved in whether race/ethnicity should be included as a predictor in the model.
**Problem 7 (Medium)**: In the United States, the Confidential Information Protection and Statistical Efficiency Act (CIPSEA) governs the confidentiality of data collected by agencies such as the Bureau of Labor Statistics and the Census Bureau. What are the penalties for willful and knowing disclosure of protected information to
unauthorized persons?
**Problem 8 (Medium)**: A data analyst received permission to post a data set that was scraped from a social media site. The full data set included name, screen name, email address, geographic location, IP (internet protocol) address, demographic profiles, and preferences for relationships. Why might it be problematic to post a deidentified form of this data set where name and email address were removed?
**Problem 9 (Medium)**: A company uses a machine\-learning algorithm to determine which job advertisement to display for users searching for technology jobs. Based on past results, the algorithm tends to display lower\-paying jobs for women than for men (after controlling for other characteristics than gender). What ethical considerations might be considered when reviewing this algorithm?
**Problem 10 (Hard)**: An investigative team wants to winnow the set of variables to include in their final multiple regression model. They have 100 variables and one outcome measured for \\(n\=250\\) observations.
They use the following procedure:
1. Fit each of the 100 bivariate models for the outcome as a function of a single predictor, then
2. Include all of the significant predictors in the overall model.
What does the distribution of the p\-value for the overall test look like, assuming that there are no associations between any of the predictors and the outcome (all are assumed to be multivariate normal and independent)?
Carry out a simulation to check your answer.
8\.13 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-ethics.html\#ethics\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-ethics.html#ethics-online-exercises)
**Problem 1 (Medium)**: In the United States, most students apply for grants or subsidized loans to finance their college education. Part of this process involves filling in a federal government form called the Free Application for Federal Student Aid (FAFSA). The form asks for information about family income and assets. The form also includes a place for listing the universities to which the information is to be sent. The data collected by FAFSA includes confidential financial information (listing the schools eligible to receive the information is effectively giving permission to share the data with them).
It turns out that the order in which the schools are listed carries important information. Students typically apply to several schools, but can attend only one of them. Until recently, admissions offices at some universities used the information as an important part of their models of whether an admitted student will accept admissions. The earlier in a list a school appears, the more likely the student is to attend that school.
Here’s the catch from the student’s point of view. Some institutions use statistical models to allocate grant aid (a scarce resource) where it is most likely to help ensure that a student enrolls. For these schools, the more likely a student is deemed to accept admissions, the lower the amount of grant aid they are likely to receive.
Is this ethical? Discuss.
---
8\.1 Introduction
-----------------
Work in data analytics involves expert knowledge, understanding, and skill.
In much of your work, you will be relying on the trust and confidence that your clients place in you. The term [*professional ethics*](https://en.wikipedia.org/w/index.php?search=professional%20ethics) describes the special responsibilities not to take unfair advantage of that trust. This involves more than being thoughtful and using common sense; there are specific professional standards that should guide your actions.
Moreover, due to the potential that their work may be deployed at scale, data scientists must anticipate how their work could be used by others and wrestle with any ethical implications.
The best\-known professional standards are those in the [*Hippocratic Oath*](https://en.wikipedia.org/w/index.php?search=Hippocratic%20Oath) for physicians, which were originally written in the 5th century B.C. Three of the eight principles in the modern version of the oath (Wikipedia 2016\) are presented here because of similarity to standards for data analytics.
1. “I will not be ashamed to say ‘I know not,’ nor will I fail to call in my colleagues when the skills of another are needed for a patient’s recovery.”
2. “I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know.”
3. “I will remember that I remain a member of society, with special obligations to all my fellow human beings, those sound of mind and body as well as the infirm.”
Depending on the jurisdiction, these principles are extended and qualified by law. For instance, notwithstanding the need to “respect the privacy of my patients,” health\-care providers in the United States are required by law to report to appropriate government authorities evidence of child abuse or infectious diseases such as botulism, chicken pox, and cholera.
This chapter introduces principles of professional ethics for data science and gives examples of legal obligations, as well as guidelines issued by professional societies.
There is no official data scientist’s oath—although attempts to forge one exist (National Academies of Science, Engineering, and Medicine 2018\). Reasonable people can disagree about what actions are best, but the existing guidelines provide a description of the ethical expectations on which your clients can reasonably rely. As a consensus statement of professional ethics, the guidelines also establish standards of accountability.
8\.2 Truthful falsehoods
------------------------
The single best\-selling book with “statistics” in the title is *How to Lie with Statistics* by [Darrell Huff](https://en.wikipedia.org/w/index.php?search=Darrell%20Huff) (Huff 1954\). Written in the 1950s, the book shows graphical ploys to fool people even with accurate data. A general method is to violate conventions and tacit expectations that readers rely on when interpreting graphs. One way to think of *How to Lie* is as a text that shows the general public what these tacit expectations are and gives tips for detecting when the trick is being played on them. The book’s title, while compelling, has wrongly tarred the field of statistics. The “statistics” of the title are really just “numbers.” The misleading graphical techniques are employed by politicians, journalists, and businessmen, not statisticians. More accurate titles would be “How to Lie with Numbers” or “Don’t be misled by graphics.”
Some of the graphical tricks in *How to Lie* are still in use. Consider these three recent examples.
### 8\.2\.1 Stand your ground
In 2005, the [*Florida legislature*](https://en.wikipedia.org/w/index.php?search=Florida%20legislature) passed the controversial [“Stand Your Ground” law](https://en.wikipedia.org/wiki/Stand-your-ground_law) that broadened the situations in which citizens can use lethal force to protect themselves against perceived threats. Advocates believed that the new law would ultimately reduce crime; opponents feared an increase in the use of lethal force. What was the actual outcome?
The graphic in Figure [8\.1](ch-ethics.html#fig:florida) is a reproduction of one published by the news service [Reuters](http://static5.businessinsider.com/image/53038b556da8110e5ce82be7-604-756/florida%20gun%20deaths.jpg) on February 16, 2014 showing the number of firearm murders in Florida over the years.
Upon first glance, the graphic gives the visual impression that right after the passage of the 2005 law, the number of murders decreased substantially.
However, the numbers tell a different story.
Figure 8\.1: Reproduction of a data graphic reporting the number of gun deaths in Florida over time. The original image was published by Reuters.
The convention in data graphics is that up corresponds to increasing values. This is not an obscure convention—rather, it’s a standard part of the secondary school curriculum.
Close inspection reveals that the \\(y\\)\-axis in Figure [8\.1](ch-ethics.html#fig:florida) has been flipped upside down—the number of gun deaths increased sharply after 2005\.
### 8\.2\.2 Global temperature
Figure [8\.2](ch-ethics.html#fig:climate) shows another example of misleading graphics: a tweet by the news magazine *National Review* on the subject of climate change.
The dominant visual impression of the graphic is that global temperature has hardly changed at all.
Figure 8\.2: A tweet by *National Review* on December 14, 2015 showing the change in global temperature over time. The tweet was later deleted.
There is a tacit graphical convention that the coordinate scales on which the data are plotted are relevant to an informed interpretation of the data. The \\(x\\)\-axis follows the convention—1880 to 2015 is a reasonable choice when considering the relationship between human industrial activity and climate. The \\(y\\)\-axis, however, is utterly misleading. The scale goes from \\(\-10\\) to 110 degrees [*Fahrenheit*](https://en.wikipedia.org/w/index.php?search=Fahrenheit).
While this is a relevant scale for showing *season\-to\-season* variation in temperature, that is not the salient issue with respect to climate change.
The concern with climate change is about rising ocean levels, intensification of storms, ecological and agricultural disruption, etc.
These are the anticipated results of a change in global *average* temperature on the order of 5 degrees Fahrenheit.
The *National Review* graphic has obscured the data by showing them on an irrelevant scale where the actual changes in temperature are practically invisible.
By graying out the numbers on the \\(y\\)\-axis, the *National Review* makes it even harder to see the trick that’s being played.
The tweet was subsequently deleted.
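It is easy to reproduce the effect of an irrelevant scale. The sketch below uses simulated temperatures with a gradual upward drift, not the actual global temperature record:

```
library(tidyverse)
set.seed(1880)

# Simulated annual average temperatures rising by about 1.5 degrees F
# over 136 years, plus a little year-to-year noise
temps <- tibble(
  year = 1880:2015,
  temp_f = 57 + 1.5 * (year - 1880) / 135 + rnorm(length(year), sd = 0.2)
)

p <- ggplot(temps, aes(x = year, y = temp_f)) +
  geom_line()

p                    # default scale: the upward trend is plainly visible
p + ylim(-10, 110)   # the National Review scale: the trend all but vanishes
```

Nothing about the data changes between the two plots; only the choice of \\(y\\)\-axis limits does.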
### 8\.2\.3 COVID\-19 reporting
In May 2020, the state of [*Georgia*](https://en.wikipedia.org/w/index.php?search=Georgia) published [a highly misleading graphical display of COVID\-19 cases](https://www.vox.com/covid-19-coronavirus-us-response-trump/2020/5/18/21262265/georgia-covid-19-cases-declining-reopening) (see Figure [8\.3](ch-ethics.html#fig:covidga)).
Note that the results for April 17th appear to the right of April 19th, and that the counties are ordered such that all of the results are monotonically decreasing for each reporting period.
The net effect of the graph is to demonstrate that confirmed COVID cases are decreasing, but it does so in a misleading fashion.
Public outcry led to a statement from the governor’s office that moving forward, chronological order would be used to display time.
Figure 8\.3: A recreation of a misleading display of confirmed COVID\-19 cases in Georgia.
8\.3 Role of data science in society
------------------------------------
The examples in Figures [8\.1](ch-ethics.html#fig:florida), [8\.2](ch-ethics.html#fig:climate), and [8\.3](ch-ethics.html#fig:covidga) are not about lying with statistics.
Statistical methodology doesn’t enter into them.
It’s the professional ethics of journalism that the graphics violate, aided and abetted by an irresponsible ignorance of statistical methodology.
Insofar as the graphics concern matters of political controversy, they can be seen as part of the political process.
While politics is a profession, it’s a profession without any comprehensive standard of professional ethics.
As data scientists, what role do we play in shaping public discourse? What responsibilities do we have?
The stakes are high, and context matters.
The misleading data graphic about the “Stand Your Ground” law was published about six months after [George Zimmerman](https://en.wikipedia.org/w/index.php?search=George%20Zimmerman) was acquitted for [killing](https://en.wikipedia.org/wiki/Shooting_of_Trayvon_Martin) [Trayvon Martin](https://en.wikipedia.org/w/index.php?search=Trayvon%20Martin).
Did the data graphic affect public perception in the wake of this tragedy?
The *National Review* tweet was published during the thick of the presidential primaries leading up to the 2016 election, and the publication is a leading voice in conservative political thought.
[*Pew Research*](https://en.wikipedia.org/w/index.php?search=Pew%20Research) reports that while concern about climate change has increased steadily among those who lean Democratic since 2013 (88% said climate change is “a major threat to the United States” in 2020, up from 58% in 2013\), [it did not increase at all among those who lean Republican from 2010 to mid\-2019](https://www.pewresearch.org/fact-tank/2020/04/16/u-s-concern-about-climate-change-is-rising-but-mainly-among-democrats/), holding steady at 25%.
Did the *National Review* persuade their readers to dismiss the [*scientific consensus on climate change*](https://en.wikipedia.org/w/index.php?search=scientific%20consensus%20on%20climate%20change)?
The misleading data graphic about COVID\-19 cases in Georgia was published during a time that Governor [Brian Kemp](https://en.wikipedia.org/w/index.php?search=Brian%20Kemp)’s reopening plan was facing stiff criticism from Atlanta mayor [Keisha Lance Bottoms](https://en.wikipedia.org/w/index.php?search=Keisha%20Lance%20Bottoms), his former opponent in the governor’s race [Stacey Abrams](https://en.wikipedia.org/w/index.php?search=Stacey%20Abrams), and even President [Donald Trump](https://en.wikipedia.org/w/index.php?search=Donald%20Trump).
[Journalists called attention to the data graphic on May 10th](https://www.businessinsider.com/graph-shows-georgia-bungling-coronavirus-data-2020-5).
The [Georgia Department of Health itself reports](https://dph.georgia.gov/covid-19-daily-status-report) that the 7\-day moving average for COVID\-19 cases increased by more than 125 cases per day during the two weeks following May 10th.
Did the Georgia’s governor’s office convince people to ignore the risk of COVID\-19?
These unanswered (and intentionally provocative) questions are meant to encourage you to see the deep and not always obvious ways in which data science work connects to society at\-large.
8\.4 Some settings for professional ethics
------------------------------------------
Common sense is a good starting point for evaluating the ethics of a situation. Tell the truth. Don’t steal. Don’t harm innocent people.
But professional ethics also require an informed assessment.
A dramatic illustration of this comes from legal ethics: a situation where the lawyers for an accused murderer found the bodies of two victims whose deaths were unknown to authorities and to the victims’ families.
The lawyers’ duty of confidentiality to their client precluded them from following their hearts and reporting the discovery.
The lawyers’ careers were destroyed by the public and political recriminations that followed, yet courts and legal scholars have confirmed that the lawyers were right to do what they did, and have even held them up as heroes for their ethical behavior.
Such extreme drama is rare. This section describes in brief six situations that raise questions of the ethical course of action.
Some are drawn from the authors’ personal experience, others from court cases and other reports.
The purpose of these short case reports is to raise questions.
Principles for addressing those questions are the subject of the next section.
### 8\.4\.1 The chief executive officer
One of us once worked as a statistical consultant for a client who wanted a proprietary model to predict commercial outcomes.
After reviewing the literature, an existing multiple linear regression model was found that matched the scenario well, and available public data were used to fit the parameters of the model.
The client’s staff were pleased with the result, but the CEO wanted a model that would give a competitive advantage.
After all, their competitors could easily follow the same process to the same model, so what advantage would the client’s company have?
The CEO asked the statistical consultant whether the coefficients in the model could be “tweaked” to reflect the specific values of his company.
The consultant suggested that this would not be appropriate, that the fitted coefficients best match the data and to change them arbitrarily would be “playing God.”
In response, the CEO rose from his chair and asserted, “I want to play God.”
How should the consultant respond?
### 8\.4\.2 Employment discrimination
One of us works on legal cases arising from audits of employers, conducted by the [*United States Office of Federal Contract Compliance Programs*](https://en.wikipedia.org/w/index.php?search=United%20States%20Office%20of%20Federal%20Contract%20Compliance%20Programs) (OFCCP).
In a typical case, the OFCCP asks for hiring and salary data from a company that has a contract with the United States government.
The company usually complies, sometimes unaware that the OFCCP applies a method to identify “discrimination” through a two\-standard\-deviation test outlined in the Uniform Guidelines on Employee Selection Procedures (UGESP).
A company that does not discriminate has some risk of being labeled as discriminating by the OFCCP method (Bridgeford 2014\).
By using a questionable statistical method, is the OFCCP acting unethically?
### 8\.4\.3 “Gaydar”
Y. Wang and Kosinski (2018\) used a deep neural network (see Section [11\.1\.5](ch-learningI.html#sec:neuralnet)) and logistic regression to build a classifier (see Chapter [10](ch-modeling.html#ch:modeling)) for sexual orientation based on pictures of people’s faces. The authors claim that if given five images of a person’s face, their model would correctly predict the sexual orientation of 91% of men and 83% of women. The authors highlight the potential harm that their work could do in their abstract:
> “Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.”
[A subsequent article in *The New Yorker*](https://www.newyorker.com/news/daily-comment/the-ai-gaydar-study-and-the-real-dangers-of-big-data) also notes that:
> “the study consisted entirely of white faces, but only because the dating site had served up too few faces of color to provide for meaningful analysis.”
Was this research ethical? Were the authors justified in creating and publishing this model?
### 8\.4\.4 Race prediction
Imai and Khanna (2016\) built a racial prediction algorithm using a Bayes classifier (see Section [11\.1\.4](ch-learningI.html#sec:bayes)) trained on voter registration records from Florida and the U.S. Census Bureau’s name list.
In addition to publishing the paper detailing the methodology, the authors published the software for the classifier on [*GitHub*](https://en.wikipedia.org/w/index.php?search=GitHub) under an open\-source license. The **wru** package is available on CRAN and will return predicted probabilities for a person’s race based on either their last name alone, or their last name and their address.
```
library(tidyverse)
library(wru)
predict_race(voter.file = voters, surname.only = TRUE) %>%
select(surname, pred.whi, pred.bla, pred.his, pred.asi, pred.oth)
```
```
[1] "Proceeding with surname-only predictions..."
```
```
surname pred.whi pred.bla pred.his pred.asi pred.oth
4 Khanna 0.0676 0.0043 0.00820 0.8668 0.05310
2 Imai 0.0812 0.0024 0.06890 0.7375 0.11000
8 Velasco 0.0594 0.0026 0.82270 0.1051 0.01020
1 Fifield 0.9356 0.0022 0.02850 0.0078 0.02590
10 Zhou 0.0098 0.0018 0.00065 0.9820 0.00575
7 Ratkovic 0.9187 0.0108 0.01083 0.0108 0.04880
3 Johnson 0.5897 0.3463 0.02360 0.0054 0.03500
5 Lopez 0.0486 0.0057 0.92920 0.0102 0.00630
11 Wantchekon 0.6665 0.0853 0.13670 0.0797 0.03180
6 Morse 0.9054 0.0431 0.02060 0.0072 0.02370
```
Given the long history of [*systemic racism*](https://en.wikipedia.org/w/index.php?search=systemic%20racism) [in the United States](https://en.wikipedia.org/wiki/Institutional_racism#United_States), it is clear how this software could be used to discriminate against people of color.
One of us once partnered with a progressive voting rights organization that wanted to use racial prediction to target members of an ethnic group to *help them register to vote*.
Was the publication of this model ethical?
Does the open\-source nature of the code affect your answer?
Is it ethical to use this software?
Does your answer change depending on the intended use?
### 8\.4\.5 Data scraping
In May 2016, the online OpenPsych Forum published
[a paper](http://openpsych.net/forum/showthread.php?tid=279) by Kirkegaard and Bjerrekær (2016\)
titled “The OkCupid data set: A very large public data set of dating site users.”
The resulting data set contained 2,620 variables—including usernames, gender, and dating preferences—from 68,371 people scraped from the [OkCupid](https://www.okcupid.com) dating website.
The ostensible purpose of the data dump was to provide an interesting open public data set to fellow researchers.
These data might be used to answer questions such as this one suggested in the abstract of the paper: whether the [*zodiac sign*](https://en.wikipedia.org/w/index.php?search=zodiac%20sign) of each user was associated with any of the other variables (spoiler alert: it wasn’t).
The data scraping did not involve any illicit technology such as breaking passwords. Nonetheless, the author received many comments on the OpenPsych Forum challenging the work as an ethical breach and accusing him of [*doxing*](https://en.wikipedia.org/w/index.php?search=doxing) people by releasing personal data. Does the work raise ethical issues?
### 8\.4\.6 Reproducible spreadsheet analysis
In 2010, [*Harvard University*](https://en.wikipedia.org/w/index.php?search=Harvard%20University) economists [Carmen Reinhart](https://en.wikipedia.org/w/index.php?search=Carmen%20Reinhart) and [Kenneth Rogoff](https://en.wikipedia.org/w/index.php?search=Kenneth%20Rogoff) published a report entitled “Growth in a Time of Debt” (Rogoff and Reinhart 2010\), which argued that countries which pursued austerity measures did not necessarily suffer from slow economic growth.
These ideas influenced the thinking of policymakers—notably United States Congressman [Paul Ryan](https://en.wikipedia.org/w/index.php?search=Paul%20Ryan)—during the time of the [*European debt crisis*](https://en.wikipedia.org/w/index.php?search=European%20debt%20crisis).
[*University of Massachusetts*](https://en.wikipedia.org/w/index.php?search=University%20of%20Massachusetts) graduate student [Thomas Herndon](https://en.wikipedia.org/w/index.php?search=Thomas%20Herndon) requested access to the data and analysis contained in the paper. After receiving the original spreadsheet from Reinhart, Herndon found several errors.
> “I clicked on cell L51, and saw that they had only averaged rows 30 through 44, instead of rows 30 through 49\.” —Thomas Herndon (Roose 2013\)
In a critique of the paper, Herndon, Ash, and Pollin (2014\) point out coding errors, selective inclusion of data, and odd weighting of summary statistics that shaped the conclusions of the Reinhart/Rogoff paper.
What ethical questions does publishing a flawed analysis raise?
### 8\.4\.7 Drug dangers
In September 2004, the drug company [*Merck*](https://en.wikipedia.org/w/index.php?search=Merck) withdrew the popular product [*Vioxx*](https://en.wikipedia.org/w/index.php?search=Vioxx) from the market because of evidence that the drug increases the risk of [*myocardial infarction*](https://en.wikipedia.org/w/index.php?search=myocardial%20infarction) (MI), a major type of heart attack.
Approximately 20 million Americans had taken Vioxx up to that point. The leading medical journal *Lancet* later reported an estimate that Vioxx use resulted in 88,000 Americans having heart attacks, of whom 38,000 died.
Vioxx had been approved in May 1999 by the [*United States Food and Drug Administration*](https://en.wikipedia.org/w/index.php?search=United%20States%20Food%20and%20Drug%20Administration) based on tests involving 5,400 subjects.
Slightly more than a year after the FDA approval, a study (Bombardier et al. 2000\) of 8,076 patients published in another leading medical journal, *The New England Journal of Medicine*, established that Vioxx reduced the incidence of severe gastrointestinal events substantially compared to the standard treatment, [*naproxen*](https://en.wikipedia.org/w/index.php?search=naproxen).
That’s good for Vioxx. In addition, the abstract reports these findings regarding heart attacks:
> “The incidence of myocardial infarction was lower among patients in the naproxen group than among those in the \[Vioxx] group (0\.1 percent vs. 0\.4 percent; relative risk, 0\.2; 95% confidence interval, 0\.1 to 0\.7\); the overall mortality rate and the rate of death from cardiovascular causes were similar in the two groups.”
Read the abstract again carefully. The Vioxx group had a much *higher* rate of MI than the group taking the standard treatment.
This influential report identified the high risk soon after the drug was approved for use.
Yet Vioxx was not withdrawn for another three years. Something clearly went wrong here. Did it involve an ethical lapse?
### 8\.4\.8 Legal negotiations
Lawyers sometimes retain statistical experts to help plan negotiations. In a common scenario, the defense lawyer will be negotiating the amount of damages in a case with the plaintiff’s attorney.
Plaintiffs will ask the statistician to estimate the amount of damages, with a clear but implicit directive that the estimate should reflect the plaintiff’s interests.
Similarly, the defense will ask their own expert to construct a framework that produces an estimate at a lower level.
Is this a game statisticians should play?
8\.5 Some principles to guide ethical action
--------------------------------------------
In Section [8\.1](ch-ethics.html#ethics-intro), we listed three principles from the Hippocratic Oath that has been administered to doctors for hundreds of years. Below, we reprint the three corresponding principles as outlined in the Data Science Oath published by the [*National Academy of Sciences*](https://en.wikipedia.org/w/index.php?search=National%20Academy%20of%20Sciences) (National Academies of Science, Engineering, and Medicine 2018\).
1. I will not be ashamed to say, “I know not,” nor will I fail to call in my colleagues when the skills of another are needed for solving a problem.
2. I will respect the privacy of my data subjects, for their data are not disclosed to me that the world may know, so I will tread with care in matters of privacy and security.
3. I will remember that my data are not just numbers without meaning or context, but represent real people and situations, and that my work may lead to unintended societal consequences, such as inequality, poverty, and disparities due to algorithmic bias.
To date, the Data Science Oath has not achieved the widespread adoption or formal acceptance of its inspiration.
We hope this will change in the coming years.
Another set of ethical guidelines for data science is the [Data Values and Principles](https://datapractices.org/manifesto/) manifesto published by DataPractices.org.
This document espouses four values (inclusion, experimentation, accountability, and impact) and 12 principles that provide a guide for the ethical practice of data science:
1. Use data to improve life for our users, customers, organizations, and communities.
2. Create reproducible and extensible work.
3. Build teams with diverse ideas, backgrounds, and strengths.
4. Prioritize the continuous collection and availability of discussions and metadata.
5. Clearly identify the questions and objectives that drive each project and use to guide both planning and refinement.
6. Be open to changing our methods and conclusions in response to new knowledge.
7. Recognize and mitigate bias in ourselves and in the data we use.
8. Present our work in ways that empower others to make better\-informed decisions.
9. Consider carefully the ethical implications of choices we make when using data, and the impacts of our work on individuals and society.
10. Respect and invite fair criticism while promoting the identification and open discussion of errors, risks, and unintended consequences of our work.
11. Protect the privacy and security of individuals represented in our data.
12. Help others to understand the most useful and appropriate applications of data to solve real\-world problems.
In October 2020, this document had over 2,000 signatories (including two of the authors of this book).
In what follows we explore how these principles can be applied to guide ethical thinking in the several scenarios outlined in the previous section.
### 8\.5\.1 The CEO
You’ve been asked by a company CEO to modify model coefficients from the correct values, that is, from the values found by a generally accepted method.
The stakeholder in this setting is the company.
If your work will involve a method that’s not generally accepted by the professional community, you’re obliged to point this out to the company.
Principles 8 and 12 are germane.
Have you presented your work in a way that empowers others to make better\-informed decisions (principle 8\)?
Certainly your client also has substantial knowledge of how their business works.
It’s important to realize that your client’s needs may not map well onto a particular statistical methodology.
The consultant should work genuinely to understand the client’s whole set of interests (principle 12\).
Often the problem that clients identify is not really the problem that needs to be solved when seen from an expert statistical perspective.
### 8\.5\.2 Employment discrimination
The procedures adopted by the OFCCP are stated using statistical terms like “standard deviation” that themselves suggest that they are part of a legitimate statistical method.
Yet the methods raise significant questions, since by construction they will sometimes label a company that is not discriminating as a discriminator.
Principle 10 suggests the OFCCP should “invite fair criticism” of their methodology.
OFCCP and others might argue that they are not a statistical organization. They are enforcing a law, not participating in research. The OFCCP has a responsibility to the courts.
The courts themselves, including the United States Supreme Court, have not developed or even called for a coherent approach to the use of statistics (although in 1977 the [Supreme Court labeled](http://peopleclick.com/resources/pri/10-09/Statistical_Significance.asp) differences greater than two or three standard deviations as too large to attribute solely to chance).
### 8\.5\.3 “Gaydar”
Principles 1, 3, 7, 9, and 11 are relevant here.
Does the prediction of sexual orientation based on facial recognition improve life for communities (principle 1\)?
As noted in the abstract, the researchers *did* consider the ethical implications of their work (principle 9\), but did they protect the privacy and security of the individuals presented in their data (principle 11\)?
The exclusion of non\-white faces from the study casts doubt on whether the standard outlined in principle 7 was met.
### 8\.5\.4 Race prediction
Clearly, using this software to discriminate against historically marginalized people would violate some combination of principles 3, 7, and 9\.
On the other hand, is it ethical to use this software to try and help underrepresented groups if those same principles are not violated?
The authors of the **wru** package admirably met principle 2, but they may not have fully adhered to principle 9\.
### 8\.5\.5 Data scraping
OkCupid provides public access to data. A researcher uses legitimate means to acquire those data. What could be wrong?
There is the matter of the stakeholders. The collection of data was intended to support psychological research.
The ethics of research involving humans requires that the human not be exposed to any risk for which consent has not been explicitly given.
The OkCupid members did not provide such consent.
Since the data contain information that makes it possible to identify individual humans, there is a realistic risk of the release of potentially embarrassing information, or worse, information that jeopardizes the physical safety of certain users.
Principles 1 and 11 were clearly violated by the authors.
Ultimately, the Danish Data Protection Agency [decided not to file any charges against the authors](https://emilkirkegaard.dk/en/wp-content/uploads/1.1-2016-631-0148-Sagen-afsluttes.pdf).
Another stakeholder is OkCupid itself. Many information providers, like OkCupid, have [*terms of use*](https://en.wikipedia.org/w/index.php?search=terms%20of%20use) that restrict how the data may be legitimately used. Such terms of use (see Section [8\.7\.3](ch-ethics.html#sec:terms-of-use)) form an explicit agreement between the service and the users of that service. They cannot ethically be disregarded.
### 8\.5\.6 Reproducible spreadsheet analysis
The scientific community as a whole is a stakeholder in public research.
Insofar as the research is used to inform public policy, the public as a whole is a stakeholder.
Researchers have an obligation to be truthful in their reporting of research. This is not just a matter of being honest but also of participating in the process by which scientific work is challenged or confirmed.
Reinhart and Rogoff honored this professional obligation by providing reasonable access to their software and data.
In this regard, they complied with principle 10\.
Seen from the perspective of data science, Microsoft Excel, the tool used by Reinhart and Rogoff, is an unfortunate choice.
It mixes the data with the analysis. It works at a low level of abstraction, so it’s difficult to program in a concise and readable way. Commands are customized to a particular size and organization of data, so it’s hard to apply to a new or modified data set.
One of the major strategies in debugging is to work on a data set where the answer is known; this is impractical in Excel.
Programming and revision in Excel generally involves lots of click\-and\-drag copying, which is itself an error\-prone operation.
Data science professionals have an ethical obligation to use tools that are reliable, verifiable, and conducive to reproducible data analysis (see Appendix [D](ch-reproduce.html#ch:reproduce)).
Reinhart and Rogoff did not meet the standard implied by principle 2\.
### 8\.5\.7 Drug dangers
When something goes wrong on a large scale, it’s tempting to look for a breach of ethics. This may indeed identify an offender, but we must also beware of creating scapegoats. With Vioxx, there were many claims, counterclaims, and lawsuits. The researchers failed to incorporate some data that were available and provided a misleading summary of results.
The journal editors also failed to highlight the very substantial problem of the increased rate of myocardial infarction with Vioxx.
To be sure, it’s unethical not to include data that undermines the conclusion presented in a paper. The Vioxx researchers were acting according to their original research protocol—a solid professional practice.
What seems to have happened with Vioxx is that the researchers had a theory that the higher rate of infarction was not due to Vioxx, *per se*, but to an aspect of the study protocol that excluded subjects who were being treated with aspirin to reduce the risk of heart attacks.
The researchers believed with some justification that the drug to which Vioxx was being compared, naproxen, was acting as a substitute for aspirin. They were wrong, as subsequent research showed.
Their failure was in not honoring principle 6 and in publishing their results in a misleading way.
Professional ethics dictate that professional standards be applied in work.
Incidents like Vioxx should remind us to work with appropriate humility and to be vigilant to the possibility that our own explanations are misleading us.
### 8\.5\.8 Legal negotiations
In legal cases such as the one described earlier in the chapter, the data scientist has ethical obligations to their client. Depending on the circumstances, they may also have obligations to the court.
As always, you should be forthright with your client. Usually you will be using methods that you deem appropriate, but on occasion you will be directed to use a method that you think is inappropriate.
For instance, we’ve seen occasions when the client requested that the time period of data included in the analysis be limited in some way to produce a “better” result. We’ve had clients ask us to subdivide the data (in employment discrimination cases, say, by job title) in order to change p\-values.
Although such subdivision may be entirely legitimate, the decision about subdividing—seen from a purely statistical point of view—ought to be based on the situation, not the desired outcome (see the discussion of the “garden of forking paths” in Section [9\.7](ch-foundations.html#sec:p-perils)).
Your client is entitled to make such requests. Whether or not you think the method being asked for is the right one doesn’t enter into it. Your professional obligation is to inform the client what the flaws in the proposed method are and how and why you think another method would be better (principle 8\). (See the major exception that follows.)
The legal system in countries such as the U.S. is an *adversarial* system. Lawyers are allowed to frame legal arguments that may be dismissed: They are entitled to enter some facts and not others into evidence. Of course, the opposing legal team is entitled to create their own legal arguments and to cross\-examine the evidence to show how it is incomplete and misleading.
When you are working with a legal team as a data scientist, you are part of the team. The lawyers on the team are the experts about what negotiation strategies and legal theories to use, how to define the limits of the case (such as damages), and how to present their case or negotiate with the other party.
It is a different matter when you are presenting to the court. This might take the form of filing an expert report to the court, testifying as an expert witness, or being deposed.
A deposition is when you are questioned, under oath, outside of the courtroom. You are obliged to answer all questions honestly. (Your lawyer may, however, direct you not to answer a question about privileged communications.)
If you are an expert witness or filing an expert report, the word “expert” is significant. A court will certify you as an expert in a case giving you permission to express your opinions. Now you have professional ethical obligations to apply your expertise honestly and openly in forming those opinions.
When working on a legal case, you should get advice from a legal authority, which might be your client.
Remember that if you do shoddy work, or fail to reply honestly to the other side’s criticisms of your work, your credibility as an expert will be imperiled.
8\.6 Algorithmic bias
---------------------
Algorithms are at the core of many data science models (see Chapter [11](ch-learningI.html#ch:learningI) for a comprehensive introduction).
These models are being used to automate decision\-making in settings as diverse as navigation for self\-driving cars and determinations of risk for recidivism (return to criminal behavior) in the criminal justice system.
The potential for bias to be reinforced when these models are implemented is dramatic.
Biased data may lead to algorithmic bias.
As an example, some groups may be underrepresented or systematically excluded from data collection efforts.
D’Ignazio and Klein (2020\) highlight issues with data collection related to undocumented immigrants.
O’Neil (2016\) details several settings in which algorithmic bias has harmful consequences, whether intended or not.
Consider a criminal recidivism algorithm used in several states and detailed in [a *ProPublica* story](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing) titled “Machine Bias” (Angwin et al. 2016\).
The algorithm returns predictions about how likely a criminal is to commit another crime based on a survey of 137 questions.
*ProPublica* claims that the algorithm is biased:
> “Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.”
How could the predictions be biased, when the race of the defendants is not included in the model?
Consider that one of the survey questions is “was one of your parents ever sent to jail or prison?”
Because of the longstanding relationship between [*race and crime in the United States*](https://en.wikipedia.org/w/index.php?search=race%20and%20crime%20in%20the%20United%20States), Black people are much more likely to have a parent who was sent to prison.
In this manner, the question about the defendant’s parents acts as a [*proxy*](https://en.wikipedia.org/w/index.php?search=proxy) for race.
Thus, even though the recidivism algorithm doesn’t take race into account directly, it learns about race from the data that reflects the centuries\-old inequities in the criminal justice system.
For another example, suppose that this model for recidivism included interactions with the police as an important feature.
It may seem logical to assume that people who have had more interactions with the police are more likely to commit crimes in the future.
However, including this variable would likely lead to bias, since Black people are more likely to have interactions with police, even among those whose underlying probability of criminal behavior is the same (Andrew Gelman, Fagan, and Kiss 2007\).
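To make the proxy effect concrete, here is a minimal simulation sketch. The variable names, effect sizes, and the logistic model are all hypothetical; the point is only that a model fit without race can assign systematically different risk to groups whose underlying behavior is identical, once a race\-correlated proxy is included.

```
library(tidyverse)

set.seed(8)
n <- 10000

sim <- tibble(
  race = sample(c("Black", "white"), n, replace = TRUE),
  # underlying behavior is identical across groups by construction
  behavior = rbinom(n, 1, 0.2),
  # hypothetical proxy: police contacts depend on race as well as behavior
  police_contacts = rpois(n, lambda = 1 + 2 * behavior + 2 * (race == "Black"))
)

# fit a risk model that never sees race
mod <- glm(behavior ~ police_contacts, data = sim, family = binomial)

# yet the average predicted risk differs sharply by race
sim %>%
  mutate(pred_risk = predict(mod, type = "response")) %>%
  group_by(race) %>%
  summarize(true_rate = mean(behavior), mean_pred_risk = mean(pred_risk))
```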
Data scientists need to ensure that model assessment, testing, accountability, and transparency are integrated into their analysis to identify and counteract bias and maximize fairness.
8\.7 Data and disclosure
------------------------
### 8\.7\.1 Reidentification and disclosure avoidance
The ability to link multiple data sets and to use public information to identify individuals is a growing problem. A glaring example of this occurred in 1996 when then\-Governor of Massachusetts [William Weld](https://en.wikipedia.org/w/index.php?search=William%20Weld) collapsed while attending a graduation ceremony at [*Bentley College*](https://en.wikipedia.org/w/index.php?search=Bentley%20College).
An [*MIT*](https://en.wikipedia.org/w/index.php?search=MIT) graduate student used information from a public data release by the [*Massachusetts Group Insurance Commission*](https://en.wikipedia.org/w/index.php?search=Massachusetts%20Group%20Insurance%20Commission) to identify Weld’s subsequent hospitalization records.
The disclosure of this information was [highly publicized](http://healthaffairs.org/blog/2012/08/10/the-debate-over-re-identification-of-health-information-what-do-we-risk/) and led to many changes in data releases.
This was a situation where the right balance was not struck between disclosure (to help improve health care and control costs) and nondisclosure (to help ensure private information is not made public).
There are many challenges to ensure disclosure avoidance (Zaslavsky and Horton 1998; Ohm 2010\). This remains an
active and important area of research.
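Much of this risk comes from quasi\-identifiers: attributes such as ZIP code, birth year, and sex that are individually common but jointly close to unique. A minimal sketch of auditing that risk, using a synthetic data frame with hypothetical columns, might look like this:

```
library(tidyverse)

# synthetic stand-in for a released microdata file
set.seed(1)
n <- 1000
release <- tibble(
  zip = sample(sprintf("010%02d", 1:40), n, replace = TRUE),
  birth_year = sample(1940:2000, n, replace = TRUE),
  sex = sample(c("F", "M"), n, replace = TRUE)
)

# how many records are unique (or nearly unique) on the quasi-identifiers?
release %>%
  add_count(zip, birth_year, sex, name = "k") %>%
  summarize(
    prop_unique = mean(k == 1),      # identifiable on these fields alone
    prop_small_group = mean(k < 5)   # in groups of fewer than five records
  )
```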
The [*Health Insurance Portability and Accountability Act*](https://en.wikipedia.org/w/index.php?search=Health%20Insurance%20Portability%20and%20Accountability%20Act) (HIPAA) was passed by the United States Congress in 1996—the same year as Weld’s illness.
The law augmented and clarified the role that researchers and medical care providers had in maintaining protected health information (PHI).
The HIPAA regulations developed since then specify procedures to ensure that individually identifiable PHI is protected when it is transferred, received, handled, analyzed, or shared.
As an example, detailed geographic information (e.g., home or office location) is not allowed to be shared unless there is an overriding need.
For research purposes, geographic information might be limited to state or territory, though for certain rare diseases or characteristics even this level of detail may lead to disclosure.
Those whose PHI is not protected can file a complaint with the Office for Civil Rights.
The HIPAA structure, while limited to medical information, provides a useful model for disclosure avoidance that is relevant to other data scientists.
Parties accessing PHI need to have privacy policies and procedures.
They must identify a privacy official and undertake training of their employees. If there is a disclosure they must
mitigate the effects to the extent practical.
There must be reasonable data safeguards to prevent intentional or unintentional use.
Covered entities may not retaliate against someone for assisting in investigations of disclosures.
Organizations must maintain records and documentation for six years after their last use of the data.
Similar regulations protect information collected by the statistical agencies of the United States.
### 8\.7\.2 Safe data storage
Inadvertent disclosures of data can be even more damaging than planned disclosures.
Stories abound of protected data being made available on the internet with subsequent harm to those whose information is made accessible.
Such releases may be due to misconfigured databases, malware, theft, or by posting on a public forum.
Each individual and organization needs to practice safe computing, to regularly audit their systems, and to implement plans to address computer and data security.
Such policies need to ensure that protections remain even when equipment is transferred or disposed of.
### 8\.7\.3 Data scraping and terms of use
A different issue arises relating to legal status of material on the Web.
Consider [*Zillow.com*](https://en.wikipedia.org/w/index.php?search=Zillow.com), an online real\-estate database company that combines data from a number of public and private sources to generate house price and rental information on more than 100 million homes across the United States.
Zillow has made access to their database
available through an API (see Section [6\.4\.2](ch-dataII.html#sec:apis)) under certain restrictions.
The terms of use for Zillow are provided in a [legal document](http://www.zillow.com/howto/api/APITerms.htm).
They require that users of the API consider the data on an “as is” basis, not replicate functionality of the Zillow website or mobile app, not retain any copies of the Zillow data, not separately extract data elements to enhance other data files, and not use the data for direct marketing.
Another common form for terms of use is a limit to the amount or frequency of access. Zillow’s API is limited to 1,000 calls per day to the home valuations or property details. Another example: [*The Weather Underground*](https://en.wikipedia.org/w/index.php?search=The%20Weather%20Underground) maintains an API focused on weather information.
They provide no\-cost access limited to 500 calls per day and 10 calls per minute and with no access to historical information.
They have a for\-pay system with multiple tiers for accessing more extensive data.
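Respecting such limits can be built directly into data\-collection code. The sketch below throttles requests to stay under a hypothetical limit of 10 calls per minute; the endpoint, query parameters, and pause length are illustrative assumptions, not any particular provider’s actual API.

```
library(tidyverse)
library(httr)

# hypothetical endpoint and query values
base_url <- "https://api.example.com/conditions"
stations <- c("KBOS", "KSEA", "KORD")

results <- map(stations, function(station) {
  resp <- GET(base_url, query = list(station = station))
  Sys.sleep(6)  # a 6-second pause keeps us at or under 10 calls per minute
  content(resp)
})
```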
Data points are not just content in tabular form. Text is also data.
Many websites have restrictions on text mining.
[*Slate.com*](https://en.wikipedia.org/w/index.php?search=Slate.com), for example, states that users may not:
> “Engage in unauthorized spidering, scraping, or harvesting of content or information, or use any other unauthorized automated means to compile information.”
Apparently, it violates the Slate.com terms of use to compile a compendium of Slate articles (even for personal use) without their authorization.
To get authorization, you need to ask for it.
[Albert Y. Kim](https://en.wikipedia.org/w/index.php?search=Albert%20Y.%20Kim) of [*Smith College*](https://en.wikipedia.org/w/index.php?search=Smith%20College) published data with information for 59,946 [*San Francisco*](https://en.wikipedia.org/w/index.php?search=San%20Francisco) OkCupid users (a free online dating website) with the permission of the president of OkCupid (Kim and Escobedo\-Land 2015\).
To help minimize possible damage, he also removed certain variables (e.g., username) that would make it more straightforward to reidentify the profiles.
Contrast the concern for privacy taken here to the careless doxing of OkCupid users mentioned above.
8\.8 Reproducibility
--------------------
Disappointingly often, even the original researchers are unable to reproduce their own results upon revisitation.
This failure arises naturally enough when researchers use menu\-driven software that does not keep an audit trail of each step in the process.
For instance, in [*Excel*](https://en.wikipedia.org/w/index.php?search=Excel), the process of sorting data is not recorded. You can’t look at a spreadsheet and determine what range of data was sorted, so mistakes in selecting cases or variables for a sort are propagated untraceably through the subsequent analysis.
Researchers commonly use tools like word processors that do not mandate an explicit tie between the result presented in a publication and the analysis that produced the result. These seemingly innocuous practices contribute to the loss of reproducibility: numbers may be copied by hand into a document and graphics are cut\-and\-pasted into the report. (Imagine that you have inserted a graphic into a report in this way. How could you, or anyone else, easily demonstrate that the correct graphic was selected for inclusion?)
We describe [*reproducible analysis*](https://en.wikipedia.org/w/index.php?search=reproducible%20analysis) as the practice of recording each and every step, no matter how trivial seeming, in a data analysis. The main elements of a reproducible analysis plan (as described by [Project TIER](https://www.haverford.edu/project-tier)) include:
* **Data**: all original data files in the form in which they originated,
* **Metadata**: codebooks and other information needed to understand the data,
* **Commands**: the computer code needed to extract, transform, and load the data—then run analyses, fit models,
generate graphical displays, and
* **Map**: a file that maps between the output and the results in the report.
The [*American Statistical Association*](https://en.wikipedia.org/w/index.php?search=American%20Statistical%20Association) (ASA) notes the importance of reproducible analysis in its curricular guidelines.
The development of new tools such as **R** Markdown and **knitr** has dramatically improved the usability of these methods in practice.
See Appendix [D](ch-reproduce.html#ch:reproduce) for an introduction to these tools.
Individuals and organizations have been working to develop
protocols to facilitate making the data analysis process more transparent and to integrate this into the workflow of practitioners and students.
One of us has worked as part of a research project team at the [*Channing Laboratory*](https://en.wikipedia.org/w/index.php?search=Channing%20Laboratory) at [*Harvard University*](https://en.wikipedia.org/w/index.php?search=Harvard%20University).
As part of the vetting process for all manuscripts, an analyst outside of the research team is required to review all programs used to generate results.
In addition, another individual is responsible for checking each number in the paper to ensure that it was correctly transcribed from the results.
Similar practice is underway at [The Odum Institute for Research in Social Science](http://www.irss.unc.edu/odum/home2.jsp) at the [*University of North Carolina*](https://en.wikipedia.org/w/index.php?search=University%20of%20North%20Carolina).
This organization performs third\-party code and data verification for several political science journals.
### 8\.8\.1 Example: Erroneous data merging
In Chapter [5](ch-join.html#ch:join), we discuss how the [*join*](https://en.wikipedia.org/w/index.php?search=join) operation can be used to merge two data tables together.
Incorrect merges can be very difficult to unravel unless the exact details of the merge have been recorded.
The **dplyr** `inner_join()` function simplifies this process.
In a 2013 paper published in the journal *Brain, Behavior, and Immunity*, Kern et al. reported a link between immune response and depression. To their credit, the authors later noticed that the results were the artifact of a faulty data merge between the lab results and other survey data. A retraction (Kern et al. 2013\), as well as a corrected paper reporting negative results (Kern et al. 2014\), was published in the same journal.
In some ways, this is science done well—ultimately the correct negative result was published, and the authors acted ethically by alerting the journal editor to their mistake.
However, the error likely would have been caught earlier had the authors adhered to stricter standards of reproducibility (see Appendix [D](ch-reproduce.html#ch:reproduce)) in the first place.
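To make the point concrete, here is a minimal sketch of a scripted merge, using hypothetical tables that stand in for lab results and survey data (the table and column names are our own, not those from the study). Because the exact join, including the key mapping, is recorded in code, anyone auditing the analysis can rerun and inspect it.

```
library(dplyr)
# Hypothetical stand-ins for the lab results and the survey data
lab <- tibble(subject_id = c(1, 2, 3), crp_level = c(0.4, 2.1, 1.3))
survey <- tibble(id = c(1, 2, 3), depression_score = c(9, 22, 14))
# The join key is stated explicitly, so a faulty merge is easy to spot and fix
merged <- lab %>%
  inner_join(survey, by = c("subject_id" = "id"))
merged
```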
8\.9 Ethics, collectively
-------------------------
Although science is carried out by individuals and teams, the scientific community as a whole is a stakeholder.
Some of the ethical responsibilities faced by data scientists are created by the collective nature of the enterprise.
A team of [*Columbia University*](https://en.wikipedia.org/w/index.php?search=Columbia%20University) scientists discovered that a former post\-doc in the group, unbeknownst to the others, had fabricated and falsified research reported in articles in the journals *Cell* and *Nature*.
Needless to say, the post\-doc had violated his ethical obligations both with respect to his colleagues and to the scientific enterprise as a whole.
When the misconduct was discovered, the other members of the team incurred an ethical obligation to the scientific community.
In fulfillment of this obligation, they notified the journals and [retracted](http://retractionwatch.com/2015/06/17/columbia-biologists-deeply-regret-nature-retraction-after-postdoc-faked-74-panels-in-3-papers/) the papers, which had been highly cited.
To be sure, such episodes can tarnish the reputation of even the innocent team members, but the ethical obligation outweighs the desire to protect one’s reputation.
Perhaps surprisingly, there are situations where it is not ethical *not* to publish one’s work. [*Publication bias*](https://en.wikipedia.org/w/index.php?search=Publication%20bias) (or the “file\-drawer problem”) refers to the situation where reports of statistically significant (i.e., \\(p\<0\.05\\)) results are much more likely to be published than reports where the results are not statistically significant.
In many settings, this bias is for the good; a lot of scientific work is in the pursuit of hypotheses that turn out to be wrong or ideas that turn out not to be productive.
But with many research teams investigating similar ideas, or even with a single research team that goes down many parallel paths, the meaning of “statistically significant” becomes clouded and corrupt.
Imagine 100 parallel research efforts to investigate the effect of a drug that in reality has no effect at all. Roughly five of those efforts are expected to culminate in a misleadingly “statistically significant” (\\(p \< 0\.05\\)) result.
Combine this with publication bias and the scientific literature might consist of reports on just the five projects that happened to be significant.
In isolation, five such reports would be considered substantial evidence about the (non\-null) effect of the drug.
It might seem unlikely that there would be 100 parallel research efforts on the same drug, but at any given time there are tens of thousands of research efforts, any one of which has a 5% chance of producing a significant result even if there were no genuine effect.
The [*American Statistical Association*](https://en.wikipedia.org/w/index.php?search=American%20Statistical%20Association)’s ethical guidelines state, “Selecting the one ‘significant’ result from a multiplicity of parallel tests poses a grave risk of an incorrect conclusion.
Failure to disclose the full extent of tests and their results in such a case would be highly misleading.” So, if you’re examining the effect on five different measures of health by five different foods, and you find that broccoli consumption has a statistically significant relationship with the development of colon cancer, not only should you be skeptical but you should include in your report the null result for the other 24 tests or perform an appropriate statistical correction to account for the multiple tests.
Often, there may be several different outcome measures, several different food types, and several potential covariates (age, sex, whether breastfed as an infant, smoking, the geographical area of residence or upbringing, etc.), so it’s easy to be performing dozens or hundreds of different tests without realizing it.
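The arithmetic behind this concern is easy to simulate. The short sketch below is purely illustrative (it is not tied to any particular study): it runs 25 tests in which every null hypothesis is true, counts how many appear “significant” at the 0.05 level, and then applies a Bonferroni correction with `p.adjust()`.

```
set.seed(2022)
# 25 comparisons of two groups drawn from the *same* distribution,
# so every null hypothesis is true
p_values <- replicate(25, t.test(rnorm(100), rnorm(100))$p.value)
sum(p_values < 0.05)                                   # spurious "significant" results
sum(p.adjust(p_values, method = "bonferroni") < 0.05)  # after correction
```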
For clinical health trials, there are efforts to address this problem through trial registries.
In such registries (e.g., <https://clinicaltrials.gov>), researchers provide their study design and analysis protocol in advance and post results.
8\.10 Professional guidelines for ethical conduct
-------------------------------------------------
This chapter has outlined basic principles of professional ethics.
Usefully, several organizations have developed detailed statements on topics such as professionalism, integrity of data and methods, responsibilities to stakeholders, conflicts of interest, and the response to allegations of misconduct. One good source is the framework for professional ethics endorsed by the [*American Statistical Association*](https://en.wikipedia.org/w/index.php?search=American%20Statistical%20Association) (ASA) (Committee on Professional Ethics 1999\).
The Committee on Science, Engineering, and Public Policy of the National Academy of Sciences, National Academy of Engineering, and Institute of Medicine has published the third edition of *On Being a Scientist: A Guide to Responsible Conduct in Research*. The guide is structured into a number of chapters, many of which are highly relevant for data scientists (including “the Treatment of Data,” “Mistakes and Negligence,” “Sharing of Results,” “Competing Interests, Commitment, and Values,” and “The Researcher in Society”).
The [*Association for Computing Machinery*](https://en.wikipedia.org/w/index.php?search=Association%20for%20Computing%20Machinery) (ACM)—the world’s largest computing society, with more than 100,000 members—adopted a code of ethics in 1992 that was revised in 2018 (see [https://www.acm.org/about/code\-of\-ethics](https://www.acm.org/about/code-of-ethics)).
Other relevant statements and codes of conduct have been promulgated by the [Data Science Association](http://www.datascienceassn.org/code-of-conduct.html), the [International Statistical Institute](http://www.isi-web.org/about-isi/professional-ethics), and the [United Nations Statistics Division](http://unstats.un.org/unsd/dnss/gp/fundprinciples.aspx).
The [Belmont Report](http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html) outlines ethical
principles and guidelines for the protection of human research subjects.
8\.11 Further resources
-----------------------
For a book\-length treatment of ethical issues in statistics, see Hubert and Wainer (2012\).
The National Academies report on data science for undergraduates (National Academies of Science, Engineering, and Medicine 2018\) included data ethics as a key component of data acumen.
The report also included a draft oath for data scientists.
A historical perspective on the ASA’s Ethical Guidelines for Statistical Practice can be found in Ellenberg (1983\).
The University of Michigan provides an EdX course on “[Data Science Ethics](https://www.edx.org/course/data-science-ethics-michiganx-ds101x).”
[Carl Bergstrom](https://en.wikipedia.org/w/index.php?search=Carl%20Bergstrom) and [Jevin West](https://en.wikipedia.org/w/index.php?search=Jevin%20West) developed a course, “Calling Bullshit: Data Reasoning in a Digital World.”
Course materials and related resources can be found at <https://callingbullshit.org>.
[Andrew Gelman](https://en.wikipedia.org/w/index.php?search=Andrew%20Gelman) has written a column on ethics in statistics in *CHANCE* for the past several years (see, for example Andrew Gelman (2011\); Andrew Gelman and Loken (2012\); Andrew Gelman (2012\); Andrew Gelman (2020\)).
*Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy* describes a number of frightening misuses of big data and algorithms (O’Neil 2016\).
The *Teach Data Science* blog has a series of entries focused on data ethics (<https://teachdatascience.com>).
D’Ignazio and Klein (2020\) provide a comprehensive introduction to data feminism (in contrast to data ethics).
The ACM Conference on Fairness, Accountability, and Transparency (FAccT) provides a cross\-disciplinary focus on data ethics issues (<https://facctconference.org/2020>).
The [Center for Open Science](https://cos.io/)—which develops the [Open Science Framework](https://osf.io/) (OSF)—is an organization that promotes openness, integrity, and reproducibility in scientific research.
The OSF provides an online platform for researchers to publish their scientific projects.
[Emil Kirkegaard](https://en.wikipedia.org/w/index.php?search=Emil%20Kirkegaard) used OSF to publish his OkCupid data set.
The [Institute for Quantitative Social Science](http://www.iq.harvard.edu/) at Harvard and the [Berkeley Initiative for Transparency in the Social Sciences](http://www.bitss.org/) are two other organizations working to promote reproducibility in social science research.
The [*American Political Science Association*](https://en.wikipedia.org/w/index.php?search=American%20Political%20Association) has incorporated the [Data Access and Research Transparency](http://www.dartstatement.org/) (DA\-RT) principles into its ethics guide.
The Consolidated Standards of Reporting Trials (CONSORT) statement at ([http://www.consort\-statement.org](http://www.consort-statement.org)) provides detailed guidance on the analysis and reporting of clinical trials.
Many more examples of how irreproducibility has led to scientific errors are available at <http://retractionwatch.com/>.
For example, [a study linking severe illness and divorce rates](http://retractionwatch.com/2015/09/10/divorce-study-felled-by-a-coding-error-gets-a-second-chance/#more-32151) was retracted due to a coding mistake.
8\.12 Exercises
---------------
**Problem 1 (Easy)**: A researcher is interested in the relationship of weather to sentiment (positivity or negativity of posts) on Twitter. They want to scrape data from <https://www.wunderground.com> and join that to Tweets in that geographic area at a particular time. One complication is that Weather Underground limits the number of data points that can be downloaded for free using their API (application program interface). The researcher sets up six free accounts to allow them to collect the data they want in a shorter time\-frame. What ethical guidelines are violated by this approach to data scraping?
**Problem 2 (Medium)**: A data scientist compiled data from several public sources (voter registration, political contributions, tax records) that were used to predict sexual orientation of individuals in a community. What ethical considerations arise that should guide use of such data sets?
**Problem 3 (Medium)**: A statistical analyst carried out an investigation of the association of gender and teaching evaluations at a university. They undertook exploratory analysis of the data and carried out a number of bivariate comparisons. The multiple items on the teaching evaluation were consolidated to a single measure based on these exploratory analyses. They used this information to construct a multivariable regression model that found evidence for biases. What issues might arise based on such an analytic approach?
**Problem 4 (Medium)**: In 2006, AOL released a database of search terms that users had used in the prior month (see <http://www.nytimes.com/2006/08/09/technology/09aol.html>). Research this disclosure and the reaction that ensued. What ethical issues are involved? What potential impact has this disclosure had?
**Problem 5 (Medium)**: A reporter carried out a clinical trial of chocolate where a small number of overweight subjects who had received medical clearance were randomized to either eat dark chocolate or not to eat dark chocolate. They were followed for a period and their change in weight was recorded from baseline until the end of the study. More than a dozen outcomes were recorded, and one proved to be significantly different in the treatment group than in the control group. This study was publicized and received coverage from a number of magazines and television programs. Outline the ethical considerations that arise in this situation.
**Problem 6 (Medium)**: A *Slate* article ([http://tinyurl.com/slate\-ethics](http://tinyurl.com/slate-ethics)) discussed whether race/ethnicity should be included in a predictive model for how long a homeless family would stay in
homeless services. Discuss the ethical considerations involved in whether race/ethnicity should be included as a predictor in the model.
**Problem 7 (Medium)**: In the United States, the Confidential Information Protection and Statistical Efficiency Act (CIPSEA) governs the confidentiality of data collected by agencies such as the Bureau of Labor Statistics and the Census Bureau. What are the penalties for willful and knowing disclosure of protected information to
unauthorized persons?
**Problem 8 (Medium)**: A data analyst received permission to post a data set that was scraped from a social media site. The full data set included name, screen name, email address, geographic location, IP (internet protocol) address, demographic profiles, and preferences for relationships. Why might it be problematic to post a deidentified form of this data set where name and email address were removed?
**Problem 9 (Medium)**: A company uses a machine\-learning algorithm to determine which job advertisement to display for users searching for technology jobs. Based on past results, the algorithm tends to display lower\-paying jobs for women than for men (after controlling for other characteristics than gender). What ethical considerations might be considered when reviewing this algorithm?
**Problem 10 (Hard)**: An investigative team wants to winnow the set of variables to include in their final multiple regression model. They have 100 variables and one outcome measured for \\(n\=250\\) observations.
They use the following procedure:
1. Fit each of the 100 bivariate models for the outcome as a function of a single predictor, then
2. Include all of the significant predictors in the overall model.
What does the distribution of the p\-value for the overall test look like, assuming that there are no associations between any of the predictors and the outcome (all are assumed to be multivariate normal and independent)?
Carry out a simulation to check your answer.
8\.13 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-ethics.html\#ethics\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-ethics.html#ethics-online-exercises)
**Problem 1 (Medium)**: In the United States, most students apply for grants or subsidized loans to finance their college education. Part of this process involves filling in a federal government form called the Free Application for Federal Student Aid (FAFSA). The form asks for information about family income and assets. The form also includes a place for listing the universities to which the information is to be sent. The data collected by FAFSA includes confidential financial information (listing the schools eligible to receive the information is effectively giving permission to share the data with them).
It turns out that the order in which the schools are listed carries important information. Students typically apply to several schools, but can attend only one of them. Until recently, admissions offices at some universities used the information as an important part of their models of whether an admitted student will accept admissions. The earlier in a list a school appears, the more likely the student is to attend that school.
Here’s the catch from the student’s point of view. Some institutions use statistical models to allocate grant aid (a scarce resource) where it is most likely to help ensure that a student enrolls. For these schools, the more likely a student is deemed to accept admissions, the lower the amount of grant aid they are likely to receive.
Is this ethical? Discuss.
---
| Data Science |
mdsr-book.github.io | https://mdsr-book.github.io/mdsr2e/ch-foundations.html |
Chapter 9 Statistical foundations
=================================
The ultimate objective in data science is to extract meaning from data.
Data wrangling and visualization are tools to this end.
Wrangling re\-organizes cases and variables to make data easier to interpret.
Visualization is a primary tool for connecting our minds with the data, so that we humans can search for meaning.
Visualizations are powerful because human visual cognitive skills are strong.
We are very good at seeing patterns even when partially obscured by random noise.
On the other hand, we are also very good at seeing patterns even when they are not there.
People can easily be misled by the accidental, evanescent patterns that appear in random noise.
It’s important, therefore, to be able to discern when the patterns we see are so strong and robust that we can be confident they are not mere accidents.
Statistical methods quantify patterns and their strength.
They are essential tools for interpreting data.
As we’ll see later in this book, the methods are also crucial for finding patterns that are too complex or multi\-faceted to be seen visually.
Some people think that [*big data*](https://en.wikipedia.org/w/index.php?search=big%20data) has made statistics obsolete. The argument is that with lots of data, the data can speak clearly for themselves.
This is wrong, as we shall see.
The discipline for making efficient use of data that is a core of statistical methodology leads to deeper thinking about how to make use of data—that thinking applies to large data sets as well.
In this chapter, we will introduce key ideas from statistics that permeate data science and that will be reinforced later in the book.
At the same time, the extended example used in this chapter will illustrate a data science [*workflow*](https://en.wikipedia.org/w/index.php?search=workflow) that uses a cycle of wrangling, exploring, visualizing, and modeling.
9\.1 Samples and populations
----------------------------
In previous chapters, we’ve considered data as being fixed.
Indeed, the word “data” stems from the Latin word for “given”—any set of data is treated as given.
Statistical methodology is governed by a broader point of view.
Yes, the data we have in hand are fixed, but the methodology assumes that the cases are drawn from a much larger set of potential cases.
The given data are a [*sample*](https://en.wikipedia.org/w/index.php?search=sample) of a larger [*population*](https://en.wikipedia.org/w/index.php?search=population) of potential cases.
In statistical methodology, we view our sample of cases in the context of this population.
We imagine other samples that might have been drawn from the population.
At the same time, we imagine that there might have been additional variables that could have been measured from the population.
We permit ourselves to construct new variables that have a special feature: any patterns that appear involving the new variables are guaranteed to be random and accidental.
The tools we will use to gain access to the imagined cases from the population and the contrived no\-pattern variables involve the mathematics of probability or (more simply) random selection from a set.
In the next section, we’ll elucidate some of the connections between the sample—the data we’ve got—and the population.
To do this, we’ll use an artifice: constructing a playground that contains the entire population.
Then, we can work with data consisting of a smaller set of cases selected at random from this population. This lets us demonstrate and justify the statistical methods in a setting where we know the “correct” answer.
That way, we can develop ideas about how much confidence statistical methods can give us about the patterns we see.
### 9\.1\.1 Example: Setting travel policy by sampling from the population
Suppose you were asked to help develop a travel policy for business travelers based in [*New York City*](https://en.wikipedia.org/w/index.php?search=New%20York%20City).
Imagine that the traveler has a meeting in [*San Francisco*](https://en.wikipedia.org/w/index.php?search=San%20Francisco) (airport code SFO) at a specified time \\(t\\).
The policy to be formulated will say how much earlier than \\(t\\) an acceptable flight should arrive in order to avoid being late to the meeting due to a flight delay.
For the purpose of this example, recall from the previous section that we are going to pretend that we already have on hand the complete *population* of flights.
For this purpose, we’re going to use the subset of 336,776 flights in 2013 in the **nycflights13** package, which gives airline delays from New York City airports in 2013\.
The policy we develop will be for 2013\.
Of course this is unrealistic in practice.
If we had the complete population we could simply look up the best flight that arrived in time for the meeting!
More realistically, the problem would be to develop a policy for this year based on the sample of data that have already been collected.
We’re going to simulate this situation by drawing a sample from the population of flights into SFO.
Playing the role of the population in our little drama, `SF` comprises the complete collection of such flights.
```
library(tidyverse)
library(mdsr)
library(nycflights13)
SF <- flights %>%
filter(dest == "SFO", !is.na(arr_delay))
```
We’re going to work with just a sample from this population.
For now, we’ll set the sample size to be \\(n \= 25\\) cases.
```
set.seed(101)
sf_25 <- SF %>%
slice_sample(n = 25)
```
A simple (but naïve) way to set the policy is to look for the longest flight delay and insist that travel be arranged to deal with this delay.
```
sf_25 %>%
skim(arr_delay)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 arr_delay 25 0 0.12 29.8 -38 -23 -5 14 103
```
The maximum delay is 103 minutes, about 2 hours.
So, should our travel policy be that the traveler should plan on arriving in SFO about 2 hours ahead?
In our example world, we can look at the complete set of flights to see what was the actual worst delay in 2013\.
```
SF %>%
skim(arr_delay)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 arr_delay 13173 0 2.67 47.7 -86 -23 -8 12 1007
```
Notice that the results from the sample are different from the results for the population. In the population, the longest delay was 1,007 minutes—almost 17 hours.
This suggests that to avoid missing a meeting, you should travel the day before the meeting.
Safe enough, but then:
* an extra travel day is expensive in terms of lodging, meals, and the traveler’s time;
* even at that, there’s no guarantee that there will never be a delay of more than 1,007 minutes.
A sensible travel policy will trade off small probabilities of being late against the savings in cost and traveler’s time.
For instance, you might judge it acceptable to be late just 2% of the time—a 98% chance of being on time.
Here’s the \\(98^{th}\\) percentile of the arrival delays in our data sample:
```
sf_25 %>%
summarize(q98 = quantile(arr_delay, p = 0.98))
```
```
# A tibble: 1 × 1
q98
<dbl>
1 67.5
```
A delay of 68 minutes is more than an hour.
The calculation is easy, but how good is the answer?
This is not a question about whether the \\(98^{th}\\) percentile was calculated properly—that will always be the case for any competent data scientist.
The question is really along these lines: Suppose we used a 90\-minute travel policy.
How well would that have worked in achieving our intention to be late for meetings only 2% of the time?
With the population data in hand, it’s easy to answer this question.
```
SF %>%
group_by(arr_delay < 90) %>%
count() %>%
mutate(pct = n / nrow(SF))
```
```
# A tibble: 2 × 3
# Groups: arr_delay < 90 [2]
`arr_delay < 90` n pct
<lgl> <int> <dbl>
1 FALSE 640 0.0486
2 TRUE 12533 0.951
```
The 90\-minute policy would miss its mark 5% of the time, much worse than we intended.
To correctly hit the mark 2% of the time, we will want to increase the policy from 90 minutes to what value?
With the population, it’s easy to calculate the \\(98^{th}\\) percentile of the arrival delays:
```
SF %>%
summarize(q98 = quantile(arr_delay, p = 0.98))
```
```
# A tibble: 1 × 1
q98
<dbl>
1 153
```
It should have been about 150 minutes.
But in most real\-world settings, we do not have access to the population data.
We have only our sample.
How can we use our sample to judge whether the result we get from the sample is going to be good enough to meet the 98% goal?
And if it’s not good enough, how large should a sample be to give a result that is likely to be good enough?
This is where the concepts and methods from statistics come in.
We will continue exploring this example throughout the chapter.
In addition to addressing our initial question, we’ll examine the extent to which the policy should depend on the airline carrier, the time of year, hour of day, and day of the week.
The basic concepts we’ll build on are sample statistics such as the [*mean*](https://en.wikipedia.org/w/index.php?search=mean) and [*standard deviation*](https://en.wikipedia.org/w/index.php?search=standard%20deviation).
These topics are covered in introductory statistics books.
Readers who have not yet encountered these topics should review an introductory statistics text such as the [OpenIntro Statistics](http://openintro.org) books, Appendix [E](ch-regression.html#ch:regression), or the materials in Section [9\.8](ch-foundations.html#foundations-further) (Further resources).
9\.2 Sample statistics
----------------------
Statistics (plural) is a field that overlaps with and contributes to data science. A [*statistic*](https://en.wikipedia.org/w/index.php?search=statistic) (singular) is a number that summarizes data.
Ideally, a statistic captures all of the useful information from the individual observations.
When we calculate the \\(98^{th}\\) percentile of a sample, we are calculating one of many possible sample statistics.
Among the many sample statistics are the mean of a variable, the standard deviation, the [*median*](https://en.wikipedia.org/w/index.php?search=median), the maximum, and the minimum.
It turns out that sample statistics such as the maximum and minimum are not very useful.
The reason is that there is not a reliable (or [*robust*](https://en.wikipedia.org/w/index.php?search=robust)) way to figure out how well the sample statistic reflects what is going on in the population.
Similarly, the \\(98^{th}\\) percentile is not a reliable sample statistic for small samples (such as our 25 flights into SFO), in the sense that it will vary considerably in small samples.
On the other hand, a median is a more reliable sample statistic.
Under certain conditions, the mean and standard deviation are reliable as well.
In other words, there are established techniques for figuring out—from the sample itself—how well the sample statistic reflects the population.
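As a quick illustration, several of these sample statistics can be computed on the \\(n \= 25\\) sample of flights drawn earlier (this sketch assumes the `sf_25` data frame is still in memory).

```
sf_25 %>%
  summarize(
    mean = mean(arr_delay),
    sd = sd(arr_delay),
    median = median(arr_delay),
    min = min(arr_delay),
    max = max(arr_delay)
  )
```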
### 9\.2\.1 The sampling distribution
Ultimately we need to figure out the reliability of a sample statistic from the sample itself.
For now, though, we are going to use the population to develop some ideas about how to define reliability.
So we will still be in the playground world where we have the population in hand.
If we were to collect a new sample from the population, how similar would the sample statistic on that new sample be to the same statistic calculated on the original sample?
Or, stated somewhat differently, if we draw many different samples from the population, each of size \\(n\\), and calculated the sample statistic on each of those samples, how similar would the sample statistic be across all the samples?
With the population in hand, it’s easy to figure this out; use `slice_sample()` many times and calculate the sample statistic on each trial.
For instance, here are two trials in which we sample and calculate the mean arrival delay.
(We’ll explain the `replace = FALSE` in the next section.
Briefly, it means to draw the sample as one would deal from a set of cards: None of the cards can appear twice in one hand.)
```
n <- 25
SF %>%
slice_sample(n = n) %>%
summarize(mean_arr_delay = mean(arr_delay))
```
```
# A tibble: 1 × 1
mean_arr_delay
<dbl>
1 8.32
```
```
SF %>%
slice_sample(n = n) %>%
summarize(mean_arr_delay = mean(arr_delay))
```
```
# A tibble: 1 × 1
mean_arr_delay
<dbl>
1 19.8
```
Perhaps it would be better to run many trials (though each one would require considerable effort in the real world).
The `map_dfr()` function from the **purrr** package (see Chapter [7](ch-iteration.html#ch:iteration)) lets us automate the process.
Here are the results from 500 trials.
```
num_trials <- 500
sf_25_means <- 1:num_trials %>%
map_dfr(
~ SF %>%
slice_sample(n = n) %>%
summarize(mean_arr_delay = mean(arr_delay))
) %>%
mutate(n = n)
head(sf_25_means)
```
```
# A tibble: 6 × 2
mean_arr_delay n
<dbl> <dbl>
1 -3.64 25
2 1.08 25
3 16.2 25
4 -2.64 25
5 0.4 25
6 8.04 25
```
We now have 500 trials, for each of which we calculated the mean arrival delay.
Let’s examine how spread out the results are.
```
sf_25_means %>%
skim(mean_arr_delay)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 mean_arr_delay 500 0 1.78 9.22 -17.2 -4.37 0.76 7.36 57.3
```
To discuss reliability, it helps to have some standardized vocabulary.
* The [*sample size*](https://en.wikipedia.org/w/index.php?search=sample%20size) is the number of cases in the sample, usually denoted with \\(n\\). In the above, the sample size is \\(n \= 25\\).
* The [*sampling distribution*](https://en.wikipedia.org/w/index.php?search=sampling%20distribution) is the collection of the sample statistic from all of the trials.
We carried out 500 trials here, but the exact number of trials is not important so long as it is large.
* The [*shape*](https://en.wikipedia.org/w/index.php?search=shape) of the sampling distribution is worth noting.
Here it is a little skewed to the right. We can tell because in this case the mean is more than twice the median.
* The [*standard error*](https://en.wikipedia.org/w/index.php?search=standard%20error) is the standard deviation of the sampling distribution. It describes the width of the sampling distribution.
For the trials calculating the sample mean in samples with \\(n \= 25\\), the standard error is 9\.22 minutes.
(You can see this value in the output of `skim()` above, as the standard deviation of the sample means that we generated.)
* The 95% [*confidence interval*](https://en.wikipedia.org/w/index.php?search=confidence%20interval) is another way of summarizing the sampling distribution.
From Figure [9\.1](ch-foundations.html#fig:sampdist25) (left panel) you can see it is about \\(\-16\\) to \+20 minutes.
The interval can be used to identify plausible values for the true mean arrival delay. It is calculated from the mean and standard error of the sampling distribution.
```
sf_25_means %>%
summarize(
x_bar = mean(mean_arr_delay),
se = sd(mean_arr_delay)
) %>%
mutate(
ci_lower = x_bar - 2 * se, # approximately 95% of observations
ci_upper = x_bar + 2 * se # are within two standard errors
)
```
```
# A tibble: 1 × 4
x_bar se ci_lower ci_upper
<dbl> <dbl> <dbl> <dbl>
1 1.78 9.22 -16.7 20.2
```
Alternatively, it can be calculated directly using a [*t\-test*](https://en.wikipedia.org/w/index.php?search=t-test).
```
sf_25_means %>%
pull(mean_arr_delay) %>%
t.test()
```
```
One Sample t-test
data: .
t = 4, df = 499, p-value = 2e-05
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
0.969 2.590
sample estimates:
mean of x
1.78
```
This vocabulary can be very confusing at first. Remember that “standard error” and “confidence interval” always refer to the sampling distribution, not to the population and not to a single sample.
The standard error and confidence intervals are two different, but closely related, forms for describing the reliability of the calculated sample statistic.
An important question that statistical methods allow you to address is what size of sample \\(n\\) is needed to get a result with an acceptable reliability.
What constitutes “acceptable” depends on the goal you are trying to accomplish.
But measuring the reliability is a straightforward matter of finding the standard error and/or confidence interval.
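As a rough sketch of that kind of calculation (our own illustration, using the \\(1 / \\sqrt{n}\\) scaling discussed at the end of this section), we can back out how large a sample would be needed to reach a target standard error.

```
# If a sample of n = 25 gives a standard error of about 9.2 minutes,
# how large a sample would give a standard error of about 2 minutes?
# The standard error scales as 1 / sqrt(n), so n scales with the squared ratio.
se_25 <- 9.2
target_se <- 2
n_needed <- 25 * (se_25 / target_se)^2
n_needed
```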
Notice that the sample statistic varies considerably.
For samples of size \\(n\=25\\) they range from \\(\-17\\) to 57 minutes. This is important information.
It illustrates the reliability of the sample mean for samples of arrival delays of size \\(n \= 25\\).
Figure [9\.1](ch-foundations.html#fig:sampdist25) (left) shows the distribution of the trials with a histogram.
In this example, we used a sample size of \\(n \= 25\\) and found a standard error of 9\.2 minutes.
What would happen if we used an even larger sample, say \\(n \= 100\\)?
The calculation is the same as before but with a different \\(n\\).
```
n <- 100
sf_100_means <- 1:500 %>%
map_dfr(
~ SF %>%
slice_sample(n = n) %>%
summarize(mean_arr_delay = mean(arr_delay))
) %>%
mutate(n = n)
```
```
sf_25_means %>%
bind_rows(sf_100_means) %>%
ggplot(aes(x = mean_arr_delay)) +
geom_histogram(bins = 30) +
facet_grid( ~ n) +
xlab("Sample mean")
```
Figure 9\.1: The sampling distribution of the mean arrival delay with a sample size of \\(n\=25\\) (left) and also for a larger sample size of \\(n \= 100\\) (right). Note that the sampling distribution is less variable for a larger sample size.
Figure [9\.1](ch-foundations.html#fig:sampdist25) (right panel) displays the shape of the sampling distribution for samples of size \\(n\=25\\) and \\(n \= 100\\).
Comparing the two sampling distributions, one with \\(n \= 25\\) and the other with \\(n \= 100\\) shows some patterns that are generally true for statistics such as the mean:
* Both sampling distributions are centered at the same value.
* A larger sample size produces a standard error that is smaller. That is, a larger sample size is more reliable than a smaller sample size. You can see that the standard deviation for \\(n \= 100\\) is one\-half that for \\(n \= 25\\). As a rule, the standard error of a sampling distribution scales as \\(1 / \\sqrt{n}\\) (see the quick check after this list).
* For large sample sizes, the shape of the sampling distribution tends to be bell\-shaped. In a bit of archaic terminology, this shape is often called the [*normal distribution*](https://en.wikipedia.org/w/index.php?search=normal%20distribution). Indeed, the distribution arises very frequently in statistics, but there is nothing abnormal about any other distribution shape.
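Here is a quick check of the \\(1 / \\sqrt{n}\\) rule using the two sets of trials generated above (both `sf_25_means` and `sf_100_means` are assumed to still be in memory); the ratio of the two standard errors should be close to \\(\\sqrt{100 / 25} \= 2\\).

```
# Ratio of the standard error for n = 25 to the standard error for n = 100
sd(sf_25_means$mean_arr_delay) / sd(sf_100_means$mean_arr_delay)
```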
9\.3 The bootstrap
------------------
In the previous examples, we had access to the population data and so we could find the sampling distribution by repeatedly sampling from the population.
In practice, however, we have only one sample and not the entire population. The [*bootstrap*](https://en.wikipedia.org/w/index.php?search=bootstrap) is a statistical method that allows us to approximate the sampling distribution even without access to the population.
The logical leap involved in the bootstrap is to think of our sample itself as if it were the population.
Just as in the previous examples we drew many samples from the population, now we will draw many new samples from our original sample.
This process is called [*resampling*](https://en.wikipedia.org/w/index.php?search=resampling): drawing a new sample from an existing sample.
When sampling from a population, we would of course make sure not to duplicate any of the cases, just as we would never deal the same playing card twice in one hand.
When resampling, however, we do allow such duplication (in fact, this is what allows us to estimate the variability of the sample).
Therefore, we [*sample with replacement*](https://en.wikipedia.org/w/index.php?search=sample%20with%20replacement).
To illustrate, consider `three_flights`, a very small sample (\\(n \= 3\\)) from the flights data.
Notice that each of the cases in `three_flights` is unique.
There are no duplicates.
```
three_flights <- SF %>%
slice_sample(n = 3, replace = FALSE) %>%
select(year, month, day, dep_time)
three_flights
```
```
# A tibble: 3 × 4
year month day dep_time
<int> <int> <int> <int>
1 2013 11 4 726
2 2013 3 12 734
3 2013 3 25 1702
```
Resampling from `three_flights` is done by setting the `replace` argument to `TRUE`, which allows the sample to include duplicates.
```
three_flights %>% slice_sample(n = 3, replace = TRUE)
```
```
# A tibble: 3 × 4
year month day dep_time
<int> <int> <int> <int>
1 2013 3 25 1702
2 2013 11 4 726
3 2013 3 12 734
```
In this particular resample, each of the individual cases appears once (but in a different order).
That’s a matter of luck. Let’s try again.
```
three_flights %>% slice_sample(n = 3, replace = TRUE)
```
```
# A tibble: 3 × 4
year month day dep_time
<int> <int> <int> <int>
1 2013 3 12 734
2 2013 3 12 734
3 2013 3 25 1702
```
This resample has two instances of one case and a single instance of another.
Bootstrapping does not create new cases: It isn’t a way to collect data.
In reality, constructing a sample involves genuine data acquisition, e.g., field work or lab work or using information technology systems to consolidate data.
In this textbook example, we get to save all that effort and simply select at random from the population, `SF`.
The one and only time we use the population is to draw the original sample, which, as always with a sample, we do without replacement.
Let’s use bootstrapping to estimate the reliability of the mean arrival delay calculated on a sample of size 200\. (Ordinarily this is all we get to observe about the population.)
```
n <- 200
orig_sample <- SF %>%
slice_sample(n = n, replace = FALSE)
```
Now, with this sample in hand, we can draw a resample (of that sample size) and calculate the mean arrival delay.
```
orig_sample %>%
slice_sample(n = n, replace = TRUE) %>%
summarize(mean_arr_delay = mean(arr_delay))
```
```
# A tibble: 1 × 1
mean_arr_delay
<dbl>
1 6.80
```
By repeating this process many times, we’ll be able to see how much variation there is from sample to sample:
```
sf_200_bs <- 1:num_trials %>%
map_dfr(
~orig_sample %>%
slice_sample(n = n, replace = TRUE) %>%
summarize(mean_arr_delay = mean(arr_delay))
) %>%
mutate(n = n)
sf_200_bs %>%
skim(mean_arr_delay)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 mean_arr_delay 500 0 3.05 3.09 -5.03 1.01 3 5.14 13.1
```
From these bootstrap trials, we estimate the standard error of the mean arrival delay to be about 3\.1 minutes.
Ordinarily, we wouldn’t be able to check this result.
But because we have access to the population data in this example, we can.
Let’s compare our bootstrap estimate to a set of (hypothetical) samples of size \\(n\=200\\) from the original `SF` flights (the population).
```
sf_200_pop <- 1:num_trials %>%
map_dfr(
~SF %>%
slice_sample(n = n, replace = TRUE) %>%
summarize(mean_arr_delay = mean(arr_delay))
) %>%
mutate(n = n)
sf_200_pop %>%
skim(mean_arr_delay)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 mean_arr_delay 500 0 2.59 3.34 -5.90 0.235 2.51 4.80 14.2
```
Notice that the population was not used in the bootstrap (`sf_200_bs`), just the original sample.
What’s remarkable here is that the standard error calculated using the bootstrap (3\.1 minutes) is a reasonable approximation to the standard error of the sampling distribution calculated by taking repeated samples from the population (3\.3 minutes).
The distribution of values in the bootstrap trials is called the [*bootstrap distribution*](https://en.wikipedia.org/w/index.php?search=bootstrap%20distribution).
It’s not exactly the same as the sampling distribution, but for moderate to large sample sizes and sufficient number of bootstraps it has been proven to approximate those aspects of the sampling distribution that we care most about, such as the standard
error and quantiles (B. Efron and Tibshirani 1993\).
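One informal way to see this in our example is to compare a few quantiles of the bootstrap distribution with the corresponding quantiles of the sampling distribution (both `sf_200_bs` and `sf_200_pop` come from the chunks above); the two sets of values should be broadly similar.

```
quantile(sf_200_bs$mean_arr_delay, probs = c(0.05, 0.5, 0.95))
quantile(sf_200_pop$mean_arr_delay, probs = c(0.05, 0.5, 0.95))
```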
### 9\.3\.1 Example: Setting travel policy
Let’s return to our original example of setting a travel policy for selecting flights from New York to San Francisco.
Recall that we decided to set a goal of arriving in time for the meeting 98% of the time.
We can calculate the \\(98^{th}\\) percentile from our sample of size \\(n \= 200\\) flights, and use bootstrapping to see how reliable that sample statistic is.
The sample itself suggests a policy of scheduling a flight to arrive 141 minutes early.
```
orig_sample %>%
summarize(q98 = quantile(arr_delay, p = 0.98))
```
```
# A tibble: 1 × 1
q98
<dbl>
1 141.
```
We can check the reliability of that estimate using bootstrapping.
```
n <- nrow(orig_sample)
sf_200_bs <- 1:num_trials %>%
map_dfr(
~orig_sample %>%
slice_sample(n = n, replace = TRUE) %>%
summarize(q98 = quantile(arr_delay, p = 0.98))
)
sf_200_bs %>%
skim(q98)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 q98 500 0 140. 29.2 53.0 123. 141 154. 196.
```
The bootstrapped standard error is about 29 minutes. The corresponding 95% confidence interval is 140 \\(\\pm\\) 58 minutes.
A policy based on this would be practically a shot in the dark: unlikely to hit the target.
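As a minimal sketch, the interval quoted above can be computed directly from the bootstrap trials (here `sf_200_bs` holds the 500 bootstrapped \\(98^{th}\\) percentiles).

```
sf_200_bs %>%
  summarize(
    ci_lower = mean(q98) - 2 * sd(q98),  # approximately 95% of the bootstrap
    ci_upper = mean(q98) + 2 * sd(q98)   # values fall within two standard errors
  )
```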
One way to fix things might be to collect more data, hoping to get a more reliable estimate of the \\(98^{th}\\) percentile.
Imagine that we could do the work to generate a sample with \\(n \= 10,000\\) cases.
```
set.seed(1001)
n_large <- 10000
sf_10000_bs <- SF %>%
slice_sample(n = n_large, replace = FALSE)
sf_200_bs <- 1:num_trials %>%
map_dfr(~sf_10000_bs %>%
slice_sample(n = n_large, replace = TRUE) %>%
summarize(q98 = quantile(arr_delay, p = 0.98))
)
sf_200_bs %>%
skim(q98)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 q98 500 0 154. 4.14 139. 151. 153. 156. 169
```
The bootstrap distribution is much narrower, and the corresponding 95% confidence interval is 154 \\(\\pm\\) 8 minutes.
Having more data makes it easier to better refine estimates, particularly in the tails.
9\.4 Outliers
-------------
One place where more data is helpful is in identifying unusual or extreme events: [*outliers*](https://en.wikipedia.org/w/index.php?search=outliers).
Suppose we consider any flight delayed by 7 hours (420 minutes) or more as an extreme event (see Section [15\.5](ch-sql.html#sec:ft8-flights)).
While an arbitrary choice, 420 minutes may be valuable as a marker for seriously delayed flights.
```
SF %>%
filter(arr_delay >= 420) %>%
select(month, day, dep_delay, arr_delay, carrier)
```
```
# A tibble: 7 × 5
month day dep_delay arr_delay carrier
<int> <int> <dbl> <dbl> <chr>
1 12 7 374 422 UA
2 7 6 589 561 DL
3 7 7 629 676 VX
4 7 7 653 632 VX
5 7 10 453 445 B6
6 7 10 432 433 VX
7 9 20 1014 1007 AA
```
Most of the very long delays (five of seven) were in July, and [*Virgin America*](https://en.wikipedia.org/w/index.php?search=Virgin%20America) (`VX`) is the most frequent offender.
Immediately, this suggests one possible route for improving the outcome of the business travel policy we have been asked to develop.
We could tell people to arrive extra early in July and to avoid `VX`.
But let’s not rush into this. The outliers themselves may be misleading.
These outliers account for a tiny fraction of the flights into San Francisco from New York in 2013\.
That’s a small component of our goal of having a failure rate of 2% in getting to meetings on time. And there was an even rarer, more extreme event at SFO in July 2013: the [crash\-landing of Asiana Airlines flight 214](https://en.wikipedia.org/wiki/Asiana_Airlines_Flight_214).
We might remove these points to get a better sense of the main part of the distribution.
Outliers can often tell us interesting things.
How they should be handled depends on their cause. Outliers due to data irregularities or errors should be fixed.
Other outliers may yield important insights.
Outliers should never be dropped unless there is a clear rationale.
If outliers are dropped this should be clearly reported.
Figure [9\.2](ch-foundations.html#fig:allflights2) displays the histogram without those outliers.
```
SF %>%
filter(arr_delay < 420) %>%
ggplot(aes(arr_delay)) +
geom_histogram(binwidth = 15) +
labs(x = "Arrival delay (in minutes)")
```
Figure 9\.2: Distribution of flight arrival delays in 2013 for flights to San Francisco from NYC airports that were delayed less than 7 hours. The distribution features a long right tail (even after pruning the outliers).
Note that the large majority of flights arrive without any delay or a delay of less than 60 minutes.
Might we be able to identify patterns that can presage when the longer delays are likely to occur?
The outliers suggested that `month` or `carrier` may be linked to long delays.
Let’s see how that plays out with the large majority of data.
```
SF %>%
mutate(long_delay = arr_delay > 60) %>%
group_by(month, long_delay) %>%
count() %>%
pivot_wider(names_from = month, values_from = n) %>%
data.frame()
```
```
long_delay X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12
1 FALSE 856 741 812 993 1128 980 966 1159 1124 1177 1107 1093
2 TRUE 29 21 61 112 65 209 226 96 65 36 51 66
```
We see that June and July (months 6 and 7\) are problem months.
```
SF %>%
mutate(long_delay = arr_delay > 60) %>%
group_by(carrier, long_delay) %>%
count() %>%
pivot_wider(names_from = carrier, values_from = n) %>%
data.frame()
```
```
long_delay AA B6 DL UA VX
1 FALSE 1250 934 1757 6236 1959
2 TRUE 148 86 91 492 220
```
[*Delta Airlines*](https://en.wikipedia.org/w/index.php?search=Delta%20Airlines) (`DL`) has reasonable performance.
These two simple analyses hint at a policy that might advise travelers to plan to arrive extra early in June and July and to consider Delta as an airline for travel to `SFO` (see Section [15\.5](ch-sql.html#sec:ft8-flights) for a fuller discussion of which airlines seem to have fewer delays in general).
9\.5 Statistical models: Explaining variation
---------------------------------------------
In the previous section, we used month of the year and airline to narrow down the situations in which the risk of an unacceptable flight delay is large.
Another way to think about this is that we are *explaining* part of the variation in arrival delay from flight to flight. [*Statistical modeling*](https://en.wikipedia.org/w/index.php?search=Statistical%20modeling) provides a way to relate variables to one another.
Doing so helps us better understand the system we are studying.
To illustrate modeling, let’s consider another question from the airline delays data set: What impact, if any, does scheduled
time of departure have on expected flight delay?
Many people think that earlier flights are less likely to be delayed, since flight delays tend to cascade over the course of the day.
Is this theory supported by the data?
We first begin by considering time of day.
In the **nycflights13** package, the `flights` data frame has a variable (`hour`) that specifies the *scheduled* hour of departure.
```
SF %>%
group_by(hour) %>%
count() %>%
pivot_wider(names_from = hour, values_from = n) %>%
data.frame()
```
```
X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16 X17 X18 X19 X20 X21
1 55 663 1696 987 429 1744 413 504 476 528 946 897 1491 1091 731 465 57
```
We see that many flights are scheduled in the early to mid\-morning and from the late afternoon to early evening.
None are scheduled before 5 am or after 10 pm.
Let’s examine how the arrival delay depends on the hour.
We’ll do this in two ways: first using standard box\-and\-whisker plots to show the distribution of arrival delays; second with a kind of statistical model called a [*linear model*](https://en.wikipedia.org/w/index.php?search=linear%20model) that lets us track the mean arrival delay over the course of the day.
```
SF %>%
ggplot(aes(x = hour, y = arr_delay)) +
geom_boxplot(alpha = 0.1, aes(group = hour)) +
geom_smooth(method = "lm") +
xlab("Scheduled hour of departure") +
ylab("Arrival delay (minutes)") +
coord_cartesian(ylim = c(-30, 120))
```
Figure 9\.3: Association of flight arrival delays with scheduled departure time for flights to San Francisco from New York airports in 2013\.
Figure [9\.3](ch-foundations.html#fig:schedhour) displays the arrival delay versus schedule departure hour. The average arrival delay increases over the course of the day.
The trend line itself is created via a regression model (see Appendix [E](ch-regression.html#ch:regression)).
```
mod1 <- lm(arr_delay ~ hour, data = SF)
broom::tidy(mod1)
```
```
# A tibble: 2 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -22.9 1.23 -18.6 2.88e- 76
2 hour 2.01 0.0915 22.0 1.78e-105
```
The number under the “estimate” for `hour` indicates that the arrival delay is predicted to be about 2 minutes higher per hour.
Over the 15 hours of flights, this leads to a 30\-minute increase in arrival delay comparing flights at the end of the day to flights at the beginning of the day.
The `tidy()` function from the **broom** package also calculates the standard error: 0\.09 minutes per hour.
Stated as a 95% confidence interval, this model indicates that we are 95% confident that the true arrival delay increases by \\(2\.0 \\pm 0\.18\\) minutes per hour.
The rightmost column gives the [*p\-value*](https://en.wikipedia.org/w/index.php?search=p-value), a way of translating the estimate and standard error onto a scale from zero to one.
By convention, p\-values below 0\.05 provide a kind of certificate testifying that random, accidental patterns would be unlikely to generate an estimate as large as that observed.
The tiny p\-value given in the report (`1.78e-105`, a value vanishingly close to zero) is another way of saying that if there were no association between time of day and flight delays, we would be *very* unlikely to see a result this extreme or more extreme.
Re\-read those last three sentences.
Confusing? Despite an almost universal practice of presenting p\-values, they are mostly misunderstood even by scientists and other professionals.
The p\-value conveys much less information than usually supposed: The “certificate” might not be worth the paper it’s printed on (see Section [9\.7](ch-foundations.html#sec:p-perils)).
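Returning to the interval quoted above, it can also be obtained directly from the fitted model. A short sketch, using base **R**’s `confint()` and the `conf.int` argument to `broom::tidy()`:

```
confint(mod1, parm = "hour", level = 0.95)
broom::tidy(mod1, conf.int = TRUE)
```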
Can we do better?
What additional factors might help to explain flight delays?
Let’s look at departure airport, carrier (airline), month of the year, and day of the week.
Some wrangling will let us extract the day of the week (`dow`) from the year, month, and day of month.
We’ll also create a variable `season` that summarizes what we already know about the month: that June and July are the months with long delays.
These will be used as [*explanatory variables*](https://en.wikipedia.org/w/index.php?search=explanatory%20variables) to account for the [*response variable*](https://en.wikipedia.org/w/index.php?search=response%20variable): arrival delay.
```
library(lubridate)
SF <- SF %>%
mutate(
day = as.Date(time_hour),
dow = as.character(wday(day, label = TRUE)),
season = ifelse(month %in% 6:7, "summer", "other month")
)
```
Now we can build a model that includes variables we want to use to explain arrival delay.
```
mod2 <- lm(arr_delay ~ hour + origin + carrier + season + dow, data = SF)
broom::tidy(mod2)
```
```
# A tibble: 14 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -24.6 2.17 -11.3 1.27e- 29
2 hour 2.08 0.0898 23.2 1.44e-116
3 originJFK 4.12 1.00 4.10 4.17e- 5
4 carrierB6 -10.3 1.88 -5.49 4.07e- 8
5 carrierDL -18.4 1.62 -11.4 5.88e- 30
6 carrierUA -4.76 1.48 -3.21 1.31e- 3
7 carrierVX -5.06 1.60 -3.17 1.54e- 3
8 seasonsummer 25.3 1.03 24.5 5.20e-130
9 dowMon 1.74 1.45 1.20 2.28e- 1
10 dowSat -5.60 1.55 -3.62 2.98e- 4
11 dowSun 5.12 1.48 3.46 5.32e- 4
12 dowThu 3.16 1.45 2.18 2.90e- 2
13 dowTue -1.65 1.45 -1.14 2.53e- 1
14 dowWed -0.884 1.45 -0.610 5.42e- 1
```
The numbers in the “estimate” column tell us that we should add 4\.1 minutes to the average delay if departing from `JFK` (instead of `EWR`, also known as [*Newark*](https://en.wikipedia.org/w/index.php?search=Newark), which is the reference group).
Delta has a better average delay than the other carriers.
Delays are on average longer in June and July (by 25 minutes), and on Sundays (by 5 minutes).
Recall that the Asiana crash was in July.
The model also indicates that Saturdays are associated with about 6 minutes less delay, on average.
(Each of the days of the week is being compared to Friday, chosen as the reference group because it comes first alphabetically.)
The standard errors tell us the precision of these estimates; the p\-values describe whether the individual patterns are consistent with what might be expected to occur by accident even if there were no systemic association between the variables.
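If a comparison against a different day were of interest, the reference level could be changed before refitting. Here is a minimal sketch (the names `SF_sun` and `mod2_sun` are ours, for illustration):

```
# compare the other days to Sunday instead of Friday
SF_sun <- SF %>%
  mutate(dow = relevel(factor(dow), ref = "Sun"))
mod2_sun <- lm(arr_delay ~ hour + origin + carrier + season + dow, data = SF_sun)
broom::tidy(mod2_sun)
```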
In this example, we’ve used `lm()` to construct what are called [*linear models*](https://en.wikipedia.org/w/index.php?search=linear%20models).
Linear models describe how the mean of the response variable varies with the explanatory variables.
They are the most widely used statistical modeling technique, but there are others.
In particular, since our original motivation was to set a policy about business travel, we might want a modeling technique that lets us look at another question: What is the probability that a flight will be, say, greater than 100 minutes late?
Without going into detail, we’ll mention that a technique called [*logistic regression*](https://en.wikipedia.org/w/index.php?search=logistic%20regression) is appropriate for such [*dichotomous*](https://en.wikipedia.org/w/index.php?search=dichotomous) outcomes (see Chapter [11](ch-learningI.html#ch:learningI) and Section [E.5](ch-regression.html#sec:logistic) for more examples).
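As a rough sketch of what that might look like with these data (the model name `mod3` is our own choice; the 100\-minute cutoff comes from the question above):

```
# model the probability that a flight arrives more than 100 minutes late
mod3 <- glm(
  arr_delay > 100 ~ hour + origin + carrier + season + dow,
  family = binomial, data = SF
)
broom::tidy(mod3)
```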
9\.6 Confounding and accounting for other factors
-------------------------------------------------
We drill the mantra [*correlation does not imply causation*](https://en.wikipedia.org/w/index.php?search=correlation%20does%20not%20imply%20causation) into students whenever statistics are discussed.
While the statement is certainly true, it may not be so helpful.
There are many times when correlations *do* imply causal relationships (beyond just in carefully conducted [*randomized trials*](https://en.wikipedia.org/w/index.php?search=randomized%20trials)).
A major concern for observational data is whether the true associations are being distorted by *other factors* that may be the actual determinants of the observed relationship between two factors.
Such other factors may [*confound*](https://en.wikipedia.org/w/index.php?search=confound) the relationship being studied.
Randomized trials in scientific experiments are considered the gold standard for evidence\-based research.
Such trials, sometimes called [*A/B tests*](https://en.wikipedia.org/w/index.php?search=A/B%20tests), are commonly undertaken to compare the effect of a treatment (e.g., two different forms of a Web page).
By controlling who receives a new intervention and who receives a control (or standard
treatment), the investigator ensures that, on average, all other factors are balanced between the two groups.
This allows them to conclude that if there are differences in the outcomes
measured at the end of the trial, they can be attributed to the
application of the treatment.
(It’s worth noting that randomized trials can also have confounding if subjects don’t comply with treatments or are lost to follow\-up.)
While they are ideal, randomized trials are not practical in many settings.
It is not ethical to
randomize some children to smoke and the others not to smoke in order to determine whether cigarettes cause lung cancer.
It is not
practical to randomize adults to either drink coffee or abstain to determine whether it has
long\-term health impacts.
Observational (or “found”) data may be the only feasible way to answer important questions.
Let’s consider an example of confounding using observational data on average teacher salaries (in 2010\) and average total SAT scores
for each of the 50 United States.
The SAT ([*Scholastic Aptitude Test*](https://en.wikipedia.org/w/index.php?search=Scholastic%20Aptitude%20Test)) is a high\-stakes exam used for entry into college.
Are higher teacher salaries associated with better outcomes on the test at the state level?
If so, should we adjust salaries to improve test performance?
Figure [9\.4](ch-foundations.html#fig:sat1) displays a scatterplot of these data.
We also fit a linear regression model.
```
SAT_2010 <- SAT_2010 %>%
mutate(Salary = salary/1000)
SAT_plot <- ggplot(data = SAT_2010, aes(x = Salary, y = total)) +
geom_point() +
geom_smooth(method = "lm") +
ylab("Average total score on the SAT") +
xlab("Average teacher salary (thousands of USD)")
SAT_plot
```
Figure 9\.4: Scatterplot of average SAT scores versus average teacher salaries (in thousands of dollars) for the 50 United States in 2010\.
```
SAT_mod1 <- lm(total ~ Salary, data = SAT_2010)
broom::tidy(SAT_mod1)
```
```
# A tibble: 2 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 1871. 113. 16.5 1.86e-21
2 Salary -5.02 2.05 -2.45 1.79e- 2
```
Lurking in the background, however, is another important factor.
The percentage of students who take the SAT in each state varies dramatically (from 3% to 93% in 2010\).
We can create a variable called `SAT_grp` that divides the states into two groups.
```
SAT_2010 %>%
skim(sat_pct)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 sat_pct 50 0 38.5 32.0 3 6 27 68 93
```
```
SAT_2010 <- SAT_2010 %>%
mutate(SAT_grp = ifelse(sat_pct <= 27, "Low", "High"))
SAT_2010 %>%
group_by(SAT_grp) %>%
count()
```
```
# A tibble: 2 × 2
# Groups: SAT_grp [2]
SAT_grp n
<chr> <int>
1 High 25
2 Low 25
```
Figure [9\.5](ch-foundations.html#fig:sat2) displays a scatterplot of these data stratified by the grouping of percentage taking the SAT.
```
SAT_plot %+% SAT_2010 +
aes(color = SAT_grp) +
scale_color_brewer("% taking\nthe SAT", palette = "Set2")
```
Figure 9\.5: Scatterplot of average SAT scores versus average teacher salaries (in thousands of dollars) for the 50 United States in 2010, stratified by the percentage of students taking the SAT in each state.
Using techniques developed in Section [7\.5](ch-iteration.html#sec:group-map), we can derive the coefficients of the linear model fit to the two separate groups.
```
SAT_2010 %>%
group_by(SAT_grp) %>%
group_modify(~broom::tidy(lm(total ~ Salary, data = .x)))
```
```
# A tibble: 4 × 6
# Groups: SAT_grp [2]
SAT_grp term estimate std.error statistic p.value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 High (Intercept) 1428. 62.4 22.9 2.51e-17
2 High Salary 1.16 1.06 1.09 2.85e- 1
3 Low (Intercept) 1583. 141. 11.2 8.52e-11
4 Low Salary 2.22 2.75 0.809 4.27e- 1
```
For each of the groups, average teacher salary is positively associated with average SAT score.
But when we collapse over this variable, average teacher salary is negatively associated with average SAT score.
This form of confounding is a quantitative version of [*Simpson’s paradox*](https://en.wikipedia.org/w/index.php?search=Simpson's%20paradox) and arises in many situations.
It can be summarized in the following way:
* Among states with a low percentage taking the SAT, teacher salaries and SAT scores are positively associated.
* Among states with a high percentage taking the SAT, teacher salaries and SAT scores are positively associated.
* Among all states, salaries and SAT scores are negatively associated.
Addressing confounding is straightforward if the confounding variables are measured.
Stratification is one approach (as seen above).
Multiple regression is another technique.
Let’s add the `sat_pct` variable as an additional predictor into the regression model.
```
SAT_mod2 <- lm(total ~ Salary + sat_pct, data = SAT_2010)
broom::tidy(SAT_mod2)
```
```
# A tibble: 3 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 1589. 58.5 27.2 2.16e-30
2 Salary 2.64 1.15 2.30 2.62e- 2
3 sat_pct -3.55 0.278 -12.8 7.11e-17
```
We now see that the slope for `Salary` is positive and statistically significant when
we control for `sat_pct`.
This is consistent with the results when the model
was stratified by `SAT_grp`.
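As a quick check, confidence intervals for the adjusted model make the direction and precision of the `Salary` coefficient explicit (`conf.int = TRUE` asks `tidy()` to append interval bounds):

```
broom::tidy(SAT_mod2, conf.int = TRUE)
```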
We still can’t really conclude that teacher salaries cause improvements in SAT scores.
However, the associations that we observe after accounting for the confounding are likely more reliable than those that do not take those factors into account.
Data scientists spend most of their time working with observational data.
When seeking to find meaning from such data, it is important to look out for potential confounding factors that could distort observed associations.
9\.7 The perils of p\-values
----------------------------
We close with a reminder of the perils of [*null hypothesis statistical testing*](https://en.wikipedia.org/w/index.php?search=null%20hypothesis%20statistical%20testing).
Recall that a p\-value is defined as the probability of seeing a sample statistic as extreme as (or more extreme than) the one that was observed if it were really the case that patterns in the data are a result of random chance.
This hypothesis, that only randomness is in play, is called the [*null hypothesis*](https://en.wikipedia.org/w/index.php?search=null%20hypothesis).
For the earlier models involving the airlines data, the null hypothesis would be that there is no association between the predictors and the flight delay.
For the SAT and salary example, the null hypothesis would be that the
true (population) regression coefficient (slope) is zero.
Historically, when using [*hypothesis testing*](https://en.wikipedia.org/w/index.php?search=hypothesis%20testing), analysts have declared results with a p\-value of 0\.05 or smaller as [*statistically significant*](https://en.wikipedia.org/w/index.php?search=statistically%20significant), while values larger than 0\.05 are declared non\-significant.
The threshold for that cutoff is called the [*alpha level*](https://en.wikipedia.org/w/index.php?search=alpha%20level) of the test.
If the null hypothesis is true, hypothesis testers would incorrectly reject the null hypothesis \\(100 \\cdot \\alpha\\)% of the time.
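A small simulation makes this concrete. This sketch (ours, assuming normally distributed data with the null hypothesis true) estimates how often a one\-sample t\-test rejects at \\(\\alpha \= 0\.05\\):

```
set.seed(2013)
# 10,000 one-sample t-tests on data simulated with the null hypothesis true
null_p_values <- replicate(10000, t.test(rnorm(25))$p.value)
mean(null_p_values < 0.05) # should be close to alpha = 0.05
```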
There are a number of serious issues with this form of “all or nothing thinking.”
Recall that p\-values are computed by simulating a world in which a null hypothesis is set to be true (see Chapter [13](ch-simulation.html#ch:simulation)).
The p\-value indicates the quality of the concordance between the data and the simulation results.
A large p\-value indicates the data are concordant with the simulation.
A very small p\-value means otherwise: that the simulation is irrelevant to describing the mechanism behind the observed patterns.
Unfortunately, that in itself tells us little about what kind of hypothesis would be relevant.
Ironically, a “significant result” means that we get to reject the null hypothesis but doesn’t tell us what hypothesis to accept!
Always report the actual p\-value (or a statement that it is less than some small value such as
`p < 0.0001`) rather than just the decision (reject null vs. fail to reject the null).
In addition, confidence intervals are often more interpretable and should be reported as well.
Null hypothesis testing and p\-values are a vexing topic for many analysts.
To help clarify these issues, the American Statistical Association endorsed a statement on p\-values (Wasserstein and Lazar 2016\) that laid out six useful principles:
* p\-values can indicate how incompatible the data are
with a specified statistical model.
* p\-values do not measure the probability that the studied
hypothesis is true, or the probability that the data
were produced by random chance alone.
* Scientific conclusions and business or policy decisions
should not be based only on whether a p\-value passes
a specific threshold.
* Proper inference requires full reporting and transparency.
* A p\-value, or statistical significance, does not measure
the size of an effect or the importance of a result.
* By itself, a p\-value does not provide a good measure of
evidence regarding a model or hypothesis.
More recent guidance (Wasserstein, Schirm, and Lazar 2019\) suggested the ATOM proposal: “Accept uncertainty, be Thoughtful, Open, and Modest.”
The problem with p\-values is even more vexing in most real\-world investigations.
Analyses might involve not just a single hypothesis test but instead have dozens or more.
In such a situation, even small p\-values do not demonstrate discordance between the data and the null hypothesis, so the statistical analysis may tell us nothing at all.
In an attempt to restore meaning to p\-values, investigators are starting
to clearly delineate and pre\-specify the primary and secondary outcomes for a randomized trial.
Imagine that such a trial has five outcomes that are defined as being of primary interest.
If we follow the usual procedure of declaring a test statistically significant when its p\-value is less than 0\.05, and if the null hypotheses are all true and the tests are independent, we would expect to reject one or more of the null hypotheses more than 22% of the time (considerably more often than the 5% we intended).
```
1 - (1 - 0.05)^5
```
```
[1] 0.226
```
Clinical trialists have sometimes adapted to this problem by using more stringent statistical determinations.
A simple, albeit conservative, approach is the use of a [*Bonferroni correction*](https://en.wikipedia.org/w/index.php?search=Bonferroni%20correction).
Consider dividing our \\(\\alpha\\)\-level by the number of tests, and only rejecting the null hypothesis
when the p\-value is less than this adjusted value.
In our example, the new threshold would be 0\.01 (and the overall experiment\-wise error rate is preserved at 0\.05\).
```
1 - (1 - 0.01)^5
```
```
[1] 0.049
```
For observational analyses without pre\-specified protocols, it is much harder to determine what
(if any) Bonferroni correction is appropriate.
For analyses that involve many hypothesis tests it is appropriate to include a note of possible limitations that some of the results may be spurious due to [*multiple comparisons*](https://en.wikipedia.org/w/index.php?search=multiple%20comparisons).
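Base R’s `p.adjust()` function implements the Bonferroni correction (along with several less conservative alternatives). Here is a small sketch with made\-up p\-values:

```
raw_p <- c(0.001, 0.012, 0.030, 0.048, 0.240) # hypothetical p-values
p.adjust(raw_p, method = "bonferroni")
```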
A related problem has been called the [*garden of forking paths*](https://en.wikipedia.org/w/index.php?search=garden%20of%20forking%20paths) by [Andrew Gelman](https://en.wikipedia.org/w/index.php?search=Andrew%20Gelman) of [*Columbia University*](https://en.wikipedia.org/w/index.php?search=Columbia%20University).
Most analyses involve many decisions about how to code data, determine important factors, and formulate and then revise models before the final analyses are set.
This process involves looking at the data to construct a parsimonious representation.
For example, a continuous predictor might be cut into some arbitrary groupings to assess the relationship between that predictor and the outcome.
Or certain variables might be included or excluded from a regression model in an exploratory process.
This process tends to lead towards hypothesis tests that are biased against a null result, since decisions that yield more of a signal (or smaller p\-value) might be chosen rather than other options.
In clinical trials, the garden of forking paths problem may be less common, since analytic plans need to be prespecified and published.
For most data science problems, however, this is a vexing issue that leads to questions about [*reproducible results*](https://en.wikipedia.org/w/index.php?search=reproducible%20results).
9\.8 Further resources
----------------------
While this chapter raises many important issues related to the appropriate use of statistics in data science, it can only scratch the surface.
A number of accessible books provide background in basic statistics (Diez, Barr, and Çetinkaya\-Rundel 2019\) and statistical practice (Belle 2008; Good and Hardin 2012\).
Rice (2006\) provides a modern introduction to the foundations of statistics
as well as a detailed derivation of the sampling distribution of the median (pages 409–410\).
Other resources related to theoretical statistics can be found in D. Nolan and Speed (1999\); Horton, Brown, and Qian (2004\); Horton (2013\); Green and Blankenship (2015\).
Shalizi’s forthcoming [*Advanced Data Analysis from an Elementary Point of View*](http://www.stat.cmu.edu/~cshalizi/ADAfaEPoV) provides a technical introduction to a wide range of important topics in statistics, including causal inference.
Wasserstein and Lazar (2016\) laid out principles for the appropriate use of p\-values.
A special issue of *The American Statistician* was devoted to issues around p\-values (Wasserstein, Schirm, and Lazar 2019\).
T. C. Hesterberg et al. (2005\) and T. Hesterberg (2015\) discuss the potential and perils for resampling\-based inference. Bradley Efron and Hastie (2016\) provide an overview of modern inference techniques.
Missing data provide job security for data scientists since they arise in almost all real\-world studies.
A number of principled approaches have been developed to account for missing values, most notably multiple imputation.
Accessible
references to the extensive literature on incomplete data include Little and Rubin (2002\); Raghunathan (2004\); Horton and Kleinman (2007\).
While clinical trials are often considered a gold standard for evidence\-based decision making,
it is worth noting that they are almost always imperfect.
Subjects may not comply with the intervention that they were randomized to.
They may break the [*blinding*](https://en.wikipedia.org/w/index.php?search=blinding) and learn what treatment they have been assigned.
Some subjects may drop out
of the study.
All of these issues complicate analysis and interpretation and have led to
improvements in trial design and analysis along with the development of causal inference models.
The CONSORT (Consolidated Standards of Reporting Trials) statement
([http://www.consort\-statement.org](http://www.consort-statement.org)) was developed to alleviate problems with trial
reporting.
Reproducibility and the perils of multiple comparisons have been the subject of much discussion
in recent years.
Nuzzo (2014\) summarizes why p\-values are not as reliable as often assumed.
The STROBE ([Strengthening the Reporting of Observational Studies in Epidemiology](http://www.strobe-statement.org))
statement discusses ways to improve the use of inferential methods (see also Appendix [D](ch-reproduce.html#ch:reproduce)).
Aspects of ethics and bias are covered in detail in Chapter [8](ch-ethics.html#ch:ethics).
9\.9 Exercises
--------------
**Problem 1 (Easy)**: We saw that a 95% confidence interval for a mean was constructed by taking the estimate and adding and subtracting two standard deviations. How many standard deviations should be used if a 99% confidence interval is desired?
**Problem 2 (Easy)**: Calculate and interpret a 95% confidence interval for the mean age of mothers from the
`Gestation` data set from the `mosaicData` package.
**Problem 3 (Medium)**: Use the bootstrap to generate and interpret a 95% confidence interval for the median age of mothers
for the `Gestation` data set from the `mosaicData` package.
**Problem 4 (Medium)**: The `NHANES` data set in the `NHANES` package includes survey data collected by the U.S. National Center for Health Statistics (NCHS), which has conducted a series of health and nutrition surveys since the early 1960s.
1. An investigator is interested in fitting a model to predict the probability that a female subject will have a diagnosis of diabetes. Predictors for this model include age and BMI. Imagine that only 1/10 of the data are available but that these data are sampled randomly from the full set of observations (this mechanism is called “Missing Completely at Random,” or MCAR). What implications will this sampling have on the results?
2. Imagine that only 1/10 of the data are available but that these data are sampled from the full set of observations such that missingness depends on age, with older subjects less likely to be observed than younger subjects (this mechanism is called “Covariate Dependent Missingness,” or CDM). What implications will this sampling have on the results?
3. Imagine that only 1/10 of the data are available but that these data are sampled from the full set of observations such that missingness depends on diabetes status (this mechanism is called “Non\-Ignorable Non\-Response,” or NINR). What implications will this sampling have on the results?
**Problem 5 (Medium)**: Use the bootstrap to generate a 95% confidence interval for the regression parameters in
a model for weight as a function of age for the `Gestation` data frame from the `mosaicData` package.
**Problem 6 (Medium)**: A data scientist working for a company that sells mortgages for new home purchases might be interested in determining what factors might be predictive of defaulting on the loan. Some of the mortgagees have missing income
in their data set. Would it be reasonable for the analyst to drop these loans from their analytic data set? Explain.
**Problem 7 (Medium)**: The `Whickham` data set in the `mosaicData` package includes data on age, smoking, and mortality from a one\-in\-six survey of the electoral roll in Whickham, a mixed urban and rural district near Newcastle upon Tyne, in the United Kingdom. The survey was conducted in 1972–1974 to study heart disease and thyroid disease. A follow\-up on those in the survey was conducted 20 years later. Describe the association between smoking status and mortality in this study. Be sure to consider the role of age as a possible confounding factor.
9\.10 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-foundations.html\#datavizI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-foundations.html#datavizI-online-exercises)
**Problem 1 (Medium)**: Here is a short excerpt from an article, “Benefits of the Dinner Table Ritual,” in the *New York Times*, May 3, 2005\.
The family dinner has long been an example of family togetherness. But recently, scientists have been coming up with compelling reasons … for families to pull up a chair around the table.
Recent studies have begun to shore up the idea that family dinners \[that is, eating dinner together as a family] can have an effect.
For example, a 2004 study of 4,746 children 11 to 18 years old, published in *The Archives of Pediatrics and Adolescent Medicine*, found that frequent family meals were associated with a lower risk of smoking, drinking and using marijuana; with a lower incidence of depressive symptoms and suicidal thoughts; and with better grades.
Another study last year, a survey of 12\- to 17\-year\-olds by the National Center on Addiction and Substance Abuse at Columbia University, found that teenagers who reported eating two or fewer dinners a week with family members were more than one and a half times as likely to smoke, drink or use illegal substances than were teenagers who had five to seven family dinners. \\(\\ldots\\) A study from the University of Minnesota published last year found that adolescent girls who reported
having more frequent family meals and a positive atmosphere during those meals were less likely to have eating disorders.
Explain in what ways the studies, as reported, do and do not provide a compelling reason for families to eat together frequently.
Considering the study done by the National Center on Addiction and Substance Abuse, describe what might have been the explanatory and response variables measured, and what sort of model they would have
used.
**Problem 2 (Medium)**: In 2010, the Minnesota Twins played their first season at Target Field. However, up
through 2009, the Twins played at the Metrodome (an indoor stadium). In the Metrodome, air ventilator fans are used both to keep the roof up and to ventilate the stadium. Typically, the air is blown from all directions into the center of the stadium.
According to a retired supervisor in the Metrodome, in the late innings
of some games the fans would be modified so that the ventilation
air would blow out from home plate toward the outfield. The idea is that the
air flow might increase the length of a fly ball. To see if manipulating
the fans could possibly make any difference, a group of students at the
University of Minnesota and their professor built a “cannon” that used
compressed air to shoot baseballs. They then did the following experiment.
* Shoot balls at angles around 50 degrees with velocity of around 150 feet per second.
* Shoot balls under two different settings: headwind (air blowing from outfield toward
home plate) or tailwind (air blowing from home plate toward outfield).
* Record other variables: weight of the ball (in grams), diameter of the ball (in cm), and
distance of the ball’s flight (in feet).
Background: People who know little or nothing about baseball might find these basic facts useful. The batter stands near “home plate” and tries to hit the ball toward the outfield. A “fly ball” refers to a ball that is hit into the air. It is desirable to hit
the ball as far as possible. For reasons of basic physics, the distance is maximized when the ball is hit at an intermediate angle steeper than 45 degrees from the horizontal.
Description of variables:
* `Cond`: the wind conditions, a categorical variable with levels `Headwind`, `Tailwind`
* `Angle`: the angle of ball’s trajectory
* `Velocity`: velocity of ball in feet per second
* `BallWt`: weight of ball in grams
* `BallDia`: diameter of ball in inches
* `Dist`: distance in feet of the flight of the ball
Here is the
output of several models:
```
> lm1 <- lm(Dist ~ Cond, data = ds) # FIRST MODEL
```
```
> summary(lm1)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 350.768 2.179 160.967 <2e-16 ***
CondTail 5.865 3.281 1.788 0.0833 .
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 9.499 on 32 degrees of freedom
Multiple R-squared: 0.0908, Adjusted R-squared: 0.06239
F-statistic: 3.196 on 1 and 32 DF, p-value: 0.0833
```
```
> confint(lm1)
2.5 % 97.5 %
(Intercept) 346.32966 355.20718
CondTail -0.81784 12.54766
```
```
> # SECOND MODEL
> lm2 <- lm(Dist ~ Cond + Velocity + Angle + BallWt + BallDia, data = ds)
> summary(lm2)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 181.7443 335.6959 0.541 0.59252
CondTail 7.6705 2.4593 3.119 0.00418 **
Velocity 1.7284 0.5433 3.181 0.00357 **
Angle -1.6014 1.7995 -0.890 0.38110
BallWt -3.9862 2.6697 -1.493 0.14659
BallDia 190.3715 62.5115 3.045 0.00502 **
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 6.805 on 28 degrees of freedom
Multiple R-squared: 0.5917, Adjusted R-squared: 0.5188
F-statistic: 8.115 on 5 and 28 DF, p-value: 7.81e-05
```
```
> confint(lm2)
2.5 % 97.5 %
(Intercept) -505.8974691 869.386165
CondTail 2.6328174 12.708166
Velocity 0.6155279 2.841188
Angle -5.2874318 2.084713
BallWt -9.4549432 1.482457
BallDia 62.3224999 318.420536
```
1. Consider the results from the model of `Dist` as a function
of `Cond` (first model). Briefly summarize what this model says about the relationship between the wind conditions and the distance travelled by the ball. Make sure to say something sensible about the strength of evidence that there is any relationship at all.
2. Briefly summarize the model that has `Dist` as the response variable and includes the other variables as explanatory variables (second model) by reporting and interpreting the `CondTail` parameter. This second model suggests a somewhat different result for the relationship between `Dist` and `Cond`. Summarize the differences and explain in statistical terms why the inclusion of the other explanatory variables has affected the results.
---
9\.1 Samples and populations
----------------------------
In previous chapters, we’ve considered data as being fixed.
Indeed, the word “data” stems from the Latin word for “given”—any set of data is treated as given.
Statistical methodology is governed by a broader point of view.
Yes, the data we have in hand are fixed, but the methodology assumes that the cases are drawn from a much larger set of potential cases.
The given data are a [*sample*](https://en.wikipedia.org/w/index.php?search=sample) of a larger [*population*](https://en.wikipedia.org/w/index.php?search=population) of potential cases.
In statistical methodology, we view our sample of cases in the context of this population.
We imagine other samples that might have been drawn from the population.
At the same time, we imagine that there might have been additional variables that could have been measured from the population.
We permit ourselves to construct new variables that have a special feature: any patterns that appear involving the new variables are guaranteed to be random and accidental.
The tools we will use to gain access to the imagined cases from the population and the contrived no\-pattern variables involve the mathematics of probability or (more simply) random selection from a set.
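One way to contrive such a no\-pattern variable is to randomly shuffle an existing one. A toy sketch (ours, not from the text):

```
set.seed(2013)
toy <- tibble(x = 1:100, y = x + rnorm(100))
toy <- toy %>%
  mutate(y_shuffled = sample(y)) # shuffling breaks any real association
cor(toy$x, toy$y) # strong pattern
cor(toy$x, toy$y_shuffled) # should be close to zero, by construction
```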
In the next section, we’ll elucidate some of the connections between the sample—the data we’ve got—and the population.
To do this, we’ll use an artifice: constructing a playground that contains the entire population.
Then, we can work with data consisting of a smaller set of cases selected at random from this population. This lets us demonstrate and justify the statistical methods in a setting where we know the “correct” answer.
That way, we can develop ideas about how much confidence statistical methods can give us about the patterns we see.
### 9\.1\.1 Example: Setting travel policy by sampling from the population
Suppose you were asked to help develop a travel policy for business travelers based in [*New York City*](https://en.wikipedia.org/w/index.php?search=New%20York%20City).
Imagine that the traveler has a meeting in [*San Francisco*](https://en.wikipedia.org/w/index.php?search=San%20Francisco) (airport code SFO) at a specified time \\(t\\).
The policy to be formulated will say how much earlier than \\(t\\) an acceptable flight should arrive in order to avoid being late to the meeting due to a flight delay.
For the purpose of this example, recall from the previous section that we are going to pretend that we already have on hand the complete *population* of flights.
For this purpose, we’re going to use a subset of the 336,776 flights from 2013 in the **nycflights13** package, which gives airline delays from New York City airports in 2013\.
The policy we develop will be for 2013\.
Of course this is unrealistic in practice.
If we had the complete population we could simply look up the best flight that arrived in time for the meeting!
More realistically, the problem would be to develop a policy for this year based on the sample of data that have already been collected.
We’re going to simulate this situation by drawing a sample from the population of flights into SFO.
Playing the role of the population in our little drama, `SF` comprises the complete collection of such flights.
```
library(tidyverse)
library(mdsr)
library(nycflights13)
SF <- flights %>%
filter(dest == "SFO", !is.na(arr_delay))
```
We’re going to work with just a sample from this population.
For now, we’ll set the sample size to be \\(n \= 25\\) cases.
```
set.seed(101)
sf_25 <- SF %>%
slice_sample(n = 25)
```
A simple (but naïve) way to set the policy is to look for the longest flight delay and insist that travel be arranged to deal with this delay.
```
sf_25 %>%
skim(arr_delay)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 arr_delay 25 0 0.12 29.8 -38 -23 -5 14 103
```
The maximum delay is 103 minutes, about 2 hours.
So, should our travel policy be that the traveler should plan on arriving in SFO about 2 hours ahead?
In our example world, we can look at the complete set of flights to see what was the actual worst delay in 2013\.
```
SF %>%
skim(arr_delay)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 arr_delay 13173 0 2.67 47.7 -86 -23 -8 12 1007
```
Notice that the results from the sample are different from the results for the population. In the population, the longest delay was 1,007 minutes—almost 17 hours.
This suggests that to avoid missing a meeting, you should travel the day before the meeting.
Safe enough, but then:
* an extra travel day is expensive in terms of lodging, meals, and the traveler’s time;
* even at that, there’s no guarantee that there will never be a delay of more than 1,007 minutes.
A sensible travel policy will trade off small probabilities of being late against the savings in cost and traveler’s time.
For instance, you might judge it acceptable to be late just 2% of the time—a 98% chance of being on time.
Here’s the \\(98^{th}\\) percentile of the arrival delays in our data sample:
```
sf_25 %>%
summarize(q98 = quantile(arr_delay, p = 0.98))
```
```
# A tibble: 1 × 1
q98
<dbl>
1 67.5
```
A delay of 68 minutes is more than an hour.
The calculation is easy, but how good is the answer?
This is not a question about whether the \\(98^{th}\\) percentile was calculated properly—that will always be the case for any competent data scientist.
The question is really along these lines: Suppose we used a 90\-minute travel policy.
How well would that have worked in achieving our intention to be late for meetings only 2% of the time?
With the population data in hand, it’s easy to answer this question.
```
SF %>%
group_by(arr_delay < 90) %>%
count() %>%
mutate(pct = n / nrow(SF))
```
```
# A tibble: 2 × 3
# Groups: arr_delay < 90 [2]
`arr_delay < 90` n pct
<lgl> <int> <dbl>
1 FALSE 640 0.0486
2 TRUE 12533 0.951
```
The 90\-minute policy would miss its mark 5% of the time, much worse than we intended.
To correctly hit the mark 2% of the time, we will want to increase the policy from 90 minutes to what value?
With the population, it’s easy to calculate the \\(98^{th}\\) percentile of the arrival delays:
```
SF %>%
summarize(q98 = quantile(arr_delay, p = 0.98))
```
```
# A tibble: 1 × 1
q98
<dbl>
1 153
```
It should have been about 150 minutes.
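A quick check against the population confirms that a policy near this percentile comes in close to the 2% target:

```
SF %>%
  summarize(miss_rate = mean(arr_delay > 153))
```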
But in most real\-world settings, we do not have access to the population data.
We have only our sample.
How can we use our sample to judge whether the result we get from the sample is going to be good enough to meet the 98% goal?
And if it’s not good enough, how large should a sample be to give a result that is likely to be good enough?
This is where the concepts and methods from statistics come in.
We will continue exploring this example throughout the chapter.
In addition to addressing our initial question, we’ll examine the extent to which the policy should depend on the airline carrier, the time of year, hour of day, and day of the week.
The basic concepts we’ll build on are sample statistics such as the [*mean*](https://en.wikipedia.org/w/index.php?search=mean) and [*standard deviation*](https://en.wikipedia.org/w/index.php?search=standard%20deviation).
These topics are covered in introductory statistics books.
Readers who have not yet encountered these topics should review an introductory statistics text such as the [OpenIntro Statistics](http://openintro.org) books, Appendix [E](ch-regression.html#ch:regression), or the materials in Section [9\.8](ch-foundations.html#foundations-further) (Further resources).
9\.2 Sample statistics
----------------------
Statistics (plural) is a field that overlaps with and contributes to data science. A [*statistic*](https://en.wikipedia.org/w/index.php?search=statistic) (singular) is a number that summarizes data.
Ideally, a statistic captures all of the useful information from the individual observations.
When we calculate the \\(98^{th}\\) percentile of a sample, we are calculating one of many possible sample statistics.
Among the many sample statistics are the mean of a variable, the standard deviation, the [*median*](https://en.wikipedia.org/w/index.php?search=median), the maximum, and the minimum.
It turns out that sample statistics such as the maximum and minimum are not very useful.
The reason is that there is not a reliable (or [*robust*](https://en.wikipedia.org/w/index.php?search=robust)) way to figure out how well the sample statistic reflects what is going on in the population.
Similarly, the \\(98^{th}\\) percentile is not a reliable sample statistic for small samples (such as our 25 flights into SFO), in the sense that it will vary considerably in small samples.
On the other hand, a median is a more reliable sample statistic.
Under certain conditions, the mean and standard deviation are reliable as well.
In other words, there are established techniques for figuring out—from the sample itself—how well the sample statistic reflects the population.
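A small simulation sketch (ours, reusing the `SF` population from above) illustrates the contrast: across repeated samples of 25 flights, the sample maximum bounces around far more than the sample median.

```
set.seed(2013)
trial_stats <- map_dfr(
  1:500,
  ~ SF %>%
    slice_sample(n = 25) %>%
    summarize(max_delay = max(arr_delay), median_delay = median(arr_delay))
)
# the maximum varies far more from sample to sample than the median
trial_stats %>%
  summarize(sd_max = sd(max_delay), sd_median = sd(median_delay))
```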
### 9\.2\.1 The sampling distribution
Ultimately we need to figure out the reliability of a sample statistic from the sample itself.
For now, though, we are going to use the population to develop some ideas about how to define reliability.
So we will still be in the playground world where we have the population in hand.
If we were to collect a new sample from the population, how similar would the sample statistic on that new sample be to the same statistic calculated on the original sample?
Or, stated somewhat differently, if we draw many different samples from the population, each of size \\(n\\), and calculate the sample statistic on each of those samples, how similar would the sample statistic be across all the samples?
With the population in hand, it’s easy to figure this out; use `slice_sample()` many times and calculate the sample statistic on each trial.
For instance, here are two trials in which we sample and calculate the mean arrival delay.
(We’ll explain the `replace = FALSE` in the next section.
Briefly, it means to draw the sample as one would deal from a set of cards: None of the cards can appear twice in one hand.)
```
n <- 25
SF %>%
slice_sample(n = n) %>%
summarize(mean_arr_delay = mean(arr_delay))
```
```
# A tibble: 1 × 1
mean_arr_delay
<dbl>
1 8.32
```
```
SF %>%
slice_sample(n = n) %>%
summarize(mean_arr_delay = mean(arr_delay))
```
```
# A tibble: 1 × 1
mean_arr_delay
<dbl>
1 19.8
```
Perhaps it would be better to run many trials (though each one would require considerable effort in the real world).
The `map()` function from the **purrr** package (see Chapter [7](ch-iteration.html#ch:iteration)) lets us automate the process.
Here are the results from 500 trials.
```
num_trials <- 500
sf_25_means <- 1:num_trials %>%
map_dfr(
~ SF %>%
slice_sample(n = n) %>%
summarize(mean_arr_delay = mean(arr_delay))
) %>%
mutate(n = n)
head(sf_25_means)
```
```
# A tibble: 6 × 2
mean_arr_delay n
<dbl> <dbl>
1 -3.64 25
2 1.08 25
3 16.2 25
4 -2.64 25
5 0.4 25
6 8.04 25
```
We now have 500 trials, for each of which we calculated the mean arrival delay.
Let’s examine how spread out the results are.
```
sf_25_means %>%
skim(mean_arr_delay)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 mean_arr_delay 500 0 1.78 9.22 -17.2 -4.37 0.76 7.36 57.3
```
To discuss reliability, it helps to have some standardized vocabulary.
* The [*sample size*](https://en.wikipedia.org/w/index.php?search=sample%20size) is the number of cases in the sample, usually denoted with \\(n\\). In the above, the sample size is \\(n \= 25\\).
* The [*sampling distribution*](https://en.wikipedia.org/w/index.php?search=sampling%20distribution) is the collection of the sample statistic from all of the trials.
We carried out 500 trials here, but the exact number of trials is not important so long as it is large.
* The [*shape*](https://en.wikipedia.org/w/index.php?search=shape) of the sampling distribution is worth noting.
Here it is a little skewed to the right. We can tell because in this case the mean is more than twice the median.
* The [*standard error*](https://en.wikipedia.org/w/index.php?search=standard%20error) is the standard deviation of the sampling distribution. It describes the width of the sampling distribution.
For the trials calculating the sample mean in samples with \\(n \= 25\\), the standard error is 9\.22 minutes.
(You can see this value in the output of `skim()` above, as the standard deviation of the sample means that we generated.)
* The 95% [*confidence interval*](https://en.wikipedia.org/w/index.php?search=confidence%20interval) is another way of summarizing the sampling distribution.
From Figure [9\.1](ch-foundations.html#fig:sampdist25) (left panel) you can see it is about \\(\-16\\) to \+20 minutes.
The interval can be used to identify plausible values for the true mean arrival delay. It is calculated from the mean and standard error of the sampling distribution.
```
sf_25_means %>%
summarize(
x_bar = mean(mean_arr_delay),
se = sd(mean_arr_delay)
) %>%
mutate(
ci_lower = x_bar - 2 * se, # approximately 95% of observations
ci_upper = x_bar + 2 * se # are within two standard errors
)
```
```
# A tibble: 1 × 4
x_bar se ci_lower ci_upper
<dbl> <dbl> <dbl> <dbl>
1 1.78 9.22 -16.7 20.2
```
Alternatively, it can be calculated directly using a [*t\-test*](https://en.wikipedia.org/w/index.php?search=t-test).
```
sf_25_means %>%
pull(mean_arr_delay) %>%
t.test()
```
```
One Sample t-test
data: .
t = 4, df = 499, p-value = 2e-05
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
0.969 2.590
sample estimates:
mean of x
1.78
```
This vocabulary can be very confusing at first. Remember that “standard error” and “confidence interval” always refer to the sampling distribution, not to the population and not to a single sample.
The standard error and confidence intervals are two different, but closely related, forms for describing the reliability of the calculated sample statistic.
An important question that statistical methods allow you to address is what size of sample \\(n\\) is needed to get a result with an acceptable reliability.
What constitutes “acceptable” depends on the goal you are trying to accomplish.
But measuring the reliability is a straightforward matter of finding the standard error and/or confidence interval.
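As a back\-of\-the\-envelope sketch (our own numbers): using the rule that the standard error of a mean is roughly \\(s / \\sqrt{n}\\), a spread of about 48 minutes (as in the population summary above) and a target standard error of 5 minutes suggest a sample of roughly 90 to 100 flights.

```
s_guess <- 48  # rough spread of arrival delays, in minutes
target_se <- 5 # desired standard error of the mean, in minutes
ceiling((s_guess / target_se)^2) # about 93 flights
```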
Notice that the sample statistic varies considerably.
For samples of size \\(n\=25\\) they range from \\(\-17\\) to 57 minutes. This is important information.
It illustrates the reliability of the sample mean for samples of arrival delays of size \\(n \= 25\\).
Figure [9\.1](ch-foundations.html#fig:sampdist25) (left) shows the distribution of the trials with a histogram.
In this example, we used a sample size of \\(n \= 25\\) and found a standard error of 9\.2 minutes.
What would happen if we used an even larger sample, say \\(n \= 100\\)?
The calculation is the same as before but with a different \\(n\\).
```
n <- 100
sf_100_means <- 1:500 %>%
map_dfr(
~ SF %>%
slice_sample(n = n) %>%
summarize(mean_arr_delay = mean(arr_delay))
) %>%
mutate(n = n)
```
```
sf_25_means %>%
bind_rows(sf_100_means) %>%
ggplot(aes(x = mean_arr_delay)) +
geom_histogram(bins = 30) +
facet_grid( ~ n) +
xlab("Sample mean")
```
Figure 9\.1: The sampling distribution of the mean arrival delay with a sample size of \\(n\=25\\) (left) and also for a larger sample size of \\(n \= 100\\) (right). Note that the sampling distribution is less variable for a larger sample size.
Figure [9\.1](ch-foundations.html#fig:sampdist25) (right panel) displays the shape of the sampling distribution for samples of size \\(n\=25\\) and \\(n \= 100\\).
Comparing the two sampling distributions, one with \\(n \= 25\\) and the other with \\(n \= 100\\) shows some patterns that are generally true for statistics such as the mean:
* Both sampling distributions are centered at the same value.
* A larger sample size produces a standard error that is smaller. That is, a larger sample size is more reliable than a smaller sample size. You can see that the standard deviation for \\(n \= 100\\) is one\-half that for \\(n \= 25\\). As a rule, the standard error of a sampling distribution scales as \\(1 / \\sqrt{n}\\) (see the quick check after this list).
* For large sample sizes, the shape of the sampling distribution tends to be bell\-shaped. In a bit of archaic terminology, this shape is often called the [*normal distribution*](https://en.wikipedia.org/w/index.php?search=normal%20distribution). Indeed, the distribution arises very frequently in statistics, but there is nothing abnormal about any other distribution shape.
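A quick check of that \\(1 / \\sqrt{n}\\) rule using the two sets of trials generated above (a sketch, not part of the original code):

```
sf_25_means %>%
  bind_rows(sf_100_means) %>%
  group_by(n) %>%
  summarize(se = sd(mean_arr_delay)) # the n = 100 value should be about half
```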
### 9\.2\.1 The sampling distribution
Ultimately we need to figure out the reliability of a sample statistic from the sample itself.
For now, though, we are going to use the population to develop some ideas about how to define reliability.
So we will still be in the playground world where we have the population in hand.
If we were to collect a new sample from the population, how similar would the sample statistic on that new sample be to the same statistic calculated on the original sample?
Or, stated somewhat differently, if we draw many different samples from the population, each of size \\(n\\), and calculated the sample statistic on each of those samples, how similar would the sample statistic be across all the samples?
With the population in hand, it’s easy to figure this out; use `slice_sample()` many times and calculate the sample statistic on each trial.
For instance, here are two trials in which we sample and calculate the mean arrival delay.
(We’ll explain the `replace = FALSE` in the next section.
Briefly, it means to draw the sample as one would deal from a set of cards: None of the cards can appear twice in one hand.)
```
n <- 25
SF %>%
slice_sample(n = n) %>%
summarize(mean_arr_delay = mean(arr_delay))
```
```
# A tibble: 1 × 1
mean_arr_delay
<dbl>
1 8.32
```
```
SF %>%
slice_sample(n = n) %>%
summarize(mean_arr_delay = mean(arr_delay))
```
```
# A tibble: 1 × 1
mean_arr_delay
<dbl>
1 19.8
```
Perhaps it would be better to run many trials (though each one would require considerable effort in the real world).
The `map()` function from the **purrr** package (see Chapter [7](ch-iteration.html#ch:iteration)) lets us automate the process.
Here are the results from 500 trials.
```
num_trials <- 500
sf_25_means <- 1:num_trials %>%
map_dfr(
~ SF %>%
slice_sample(n = n) %>%
summarize(mean_arr_delay = mean(arr_delay))
) %>%
mutate(n = n)
head(sf_25_means)
```
```
# A tibble: 6 × 2
mean_arr_delay n
<dbl> <dbl>
1 -3.64 25
2 1.08 25
3 16.2 25
4 -2.64 25
5 0.4 25
6 8.04 25
```
We now have 500 trials, for each of which we calculated the mean arrival delay.
Let’s examine how spread out the results are.
```
sf_25_means %>%
skim(mean_arr_delay)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 mean_arr_delay 500 0 1.78 9.22 -17.2 -4.37 0.76 7.36 57.3
```
To discuss reliability, it helps to have some standardized vocabulary.
* The [*sample size*](https://en.wikipedia.org/w/index.php?search=sample%20size) is the number of cases in the sample, usually denoted with \\(n\\). In the above, the sample size is \\(n \= 25\\).
* The [*sampling distribution*](https://en.wikipedia.org/w/index.php?search=sampling%20distribution) is the collection of the sample statistic from all of the trials.
We carried out 500 trials here, but the exact number of trials is not important so long as it is large.
* The [*shape*](https://en.wikipedia.org/w/index.php?search=shape) of the sampling distribution is worth noting.
Here it is a little skewed to the right. We can tell because in this case the mean is more than twice the median.
* The [*standard error*](https://en.wikipedia.org/w/index.php?search=standard%20error) is the standard deviation of the sampling distribution. It describes the width of the sampling distribution.
For the trials calculating the sample mean in samples with \\(n \= 25\\), the standard error is 9\.22 minutes.
(You can see this value in the output of `skim()` above, as the standard deviation of the sample means that we generated.)
* The 95% [*confidence interval*](https://en.wikipedia.org/w/index.php?search=confidence%20interval) is another way of summarizing the sampling distribution.
From Figure [9\.1](ch-foundations.html#fig:sampdist25) (left panel) you can see it is about \\(\-16\\) to \+20 minutes.
The interval can be used to identify plausible values for the true mean arrival delay. It is calculated from the mean and standard error of the sampling distribution.
```
sf_25_means %>%
summarize(
x_bar = mean(mean_arr_delay),
se = sd(mean_arr_delay)
) %>%
mutate(
ci_lower = x_bar - 2 * se, # approximately 95% of observations
ci_upper = x_bar + 2 * se # are within two standard errors
)
```
```
# A tibble: 1 × 4
x_bar se ci_lower ci_upper
<dbl> <dbl> <dbl> <dbl>
1 1.78 9.22 -16.7 20.2
```
Alternatively, it can be calculated directly using a [*t\-test*](https://en.wikipedia.org/w/index.php?search=t-test).
```
sf_25_means %>%
pull(mean_arr_delay) %>%
t.test()
```
```
One Sample t-test
data: .
t = 4, df = 499, p-value = 2e-05
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
0.969 2.590
sample estimates:
mean of x
1.78
```
This vocabulary can be very confusing at first. Remember that “standard error” and “confidence interval” always refer to the sampling distribution, not to the population and not to a single sample.
The standard error and confidence intervals are two different, but closely related, forms for describing the reliability of the calculated sample statistic.
An important question that statistical methods allow you to address is what size of sample \\(n\\) is needed to get a result with an acceptable reliability.
What constitutes “acceptable” depends on the goal you are trying to accomplish.
But measuring the reliability is a straightforward matter of finding the standard error and/or confidence interval.
Notice that the sample statistic varies considerably.
For samples of size \\(n\=25\\) they range from \\(\-17\\) to 57 minutes. This is important information.
It illustrates the reliability of the sample mean for samples of arrival delays of size \\(n \= 25\\).
Figure [9\.1](ch-foundations.html#fig:sampdist25) (left) shows the distribution of the trials with a histogram.
In this example, we used a sample size of \\(n \= 25\\) and found a standard error of 9\.2 minutes.
What would happen if we used an even larger sample, say \\(n \= 100\\)?
The calculation is the same as before but with a different \\(n\\).
```
n <- 100
sf_100_means <- 1:500 %>%
map_dfr(
~ SF %>%
slice_sample(n = n) %>%
summarize(mean_arr_delay = mean(arr_delay))
) %>%
mutate(n = n)
```
```
sf_25_means %>%
bind_rows(sf_100_means) %>%
ggplot(aes(x = mean_arr_delay)) +
geom_histogram(bins = 30) +
facet_grid( ~ n) +
xlab("Sample mean")
```
Figure 9\.1: The sampling distribution of the mean arrival delay with a sample size of \\(n\=25\\) (left) and also for a larger sample size of \\(n \= 100\\) (right). Note that the sampling distribution is less variable for a larger sample size.
Figure [9\.1](ch-foundations.html#fig:sampdist25) displays the shape of the sampling distributions for samples of size \\(n\=25\\) (left panel) and \\(n \= 100\\) (right panel).
Comparing the two sampling distributions, one with \\(n \= 25\\) and the other with \\(n \= 100\\) shows some patterns that are generally true for statistics such as the mean:
* Both sampling distributions are centered at the same value.
* A larger sample size produces a standard error that is smaller. That is, a larger sample size is more reliable than a smaller sample size. You can see that the standard deviation for \\(n \= 100\\) is one\-half that for \\(n \= 25\\). As a rule, the standard error of a sampling distribution scales as \\(1 / \\sqrt{n}\\) (a quick numerical check of this rule follows this list).
* For large sample sizes, the shape of the sampling distribution tends to be bell\-shaped. In a bit of archaic terminology, this shape is often called the [*normal distribution*](https://en.wikipedia.org/w/index.php?search=normal%20distribution). Indeed, the distribution arises very frequently in statistics, but there is nothing abnormal about any other distribution shape.
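To make the \\(1 / \\sqrt{n}\\) rule concrete, here is a quick numerical check (a sketch, not part of the original analysis) that assumes the `sf_25_means` and `sf_100_means` data frames created above, each carrying an `n` column, are still in memory.

```
# Compare the empirical standard errors for n = 25 and n = 100; their ratio
# should be roughly sqrt(25 / 100) = 0.5.
sf_25_means %>%
  bind_rows(sf_100_means) %>%
  group_by(n) %>%
  summarize(se = sd(mean_arr_delay))
```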
9\.3 The bootstrap
------------------
In the previous examples, we had access to the population data and so we could find the sampling distribution by repeatedly sampling from the population.
In practice, however, we have only one sample and not the entire population. The [*bootstrap*](https://en.wikipedia.org/w/index.php?search=bootstrap) is a statistical method that allows us to approximate the sampling distribution even without access to the population.
The logical leap involved in the bootstrap is to think of our sample itself as if it were the population.
Just as in the previous examples we drew many samples from the population, now we will draw many new samples from our original sample.
This process is called [*resampling*](https://en.wikipedia.org/w/index.php?search=resampling): drawing a new sample from an existing sample.
When sampling from a population, we would of course make sure not to duplicate any of the cases, just as we would never deal the same playing card twice in one hand.
When resampling, however, we do allow such duplication (in fact, this is what allows us to estimate the variability of the sample).
Therefore, we [*sample with replacement*](https://en.wikipedia.org/w/index.php?search=sample%20with%20replacement).
To illustrate, consider `three_flights`, a very small sample (\\(n \= 3\\)) from the flights data.
Notice that each of the cases in `three_flights` is unique.
There are no duplicates.
```
three_flights <- SF %>%
slice_sample(n = 3, replace = FALSE) %>%
select(year, month, day, dep_time)
three_flights
```
```
# A tibble: 3 × 4
year month day dep_time
<int> <int> <int> <int>
1 2013 11 4 726
2 2013 3 12 734
3 2013 3 25 1702
```
Resampling from `three_flights` is done by setting the `replace` argument to `TRUE`, which allows the sample to include duplicates.
```
three_flights %>% slice_sample(n = 3, replace = TRUE)
```
```
# A tibble: 3 × 4
year month day dep_time
<int> <int> <int> <int>
1 2013 3 25 1702
2 2013 11 4 726
3 2013 3 12 734
```
In this particular resample, each of the individual cases appears once (but in a different order).
That’s a matter of luck. Let’s try again.
```
three_flights %>% slice_sample(n = 3, replace = TRUE)
```
```
# A tibble: 3 × 4
year month day dep_time
<int> <int> <int> <int>
1 2013 3 12 734
2 2013 3 12 734
3 2013 3 25 1702
```
This resample has two instances of one case and a single instance of another.
Bootstrapping does not create new cases: It isn’t a way to collect data.
In reality, constructing a sample involves genuine data acquisition, e.g., field work or lab work or using information technology systems to consolidate data.
In this textbook example, we get to save all that effort and simply select at random from the population, `SF`.
The one and only time we use the population is to draw the original sample, which, as always with a sample, we do without replacement.
Let’s use bootstrapping to estimate the reliability of the mean arrival delay calculated on a sample of size 200\. (Ordinarily, this sample is all we would get to observe of the population.)
```
n <- 200
orig_sample <- SF %>%
slice_sample(n = n, replace = FALSE)
```
Now, with this sample in hand, we can draw a resample (of that sample size) and calculate the mean arrival delay.
```
orig_sample %>%
slice_sample(n = n, replace = TRUE) %>%
summarize(mean_arr_delay = mean(arr_delay))
```
```
# A tibble: 1 × 1
mean_arr_delay
<dbl>
1 6.80
```
By repeating this process many times, we’ll be able to see how much variation there is from sample to sample:
```
sf_200_bs <- 1:num_trials %>%
map_dfr(
~orig_sample %>%
slice_sample(n = n, replace = TRUE) %>%
summarize(mean_arr_delay = mean(arr_delay))
) %>%
mutate(n = n)
sf_200_bs %>%
skim(mean_arr_delay)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 mean_arr_delay 500 0 3.05 3.09 -5.03 1.01 3 5.14 13.1
```
Based on these bootstrap trials, we estimate the standard error of the mean arrival delay to be about 3\.1 minutes.
Ordinarily, we wouldn’t be able to check this result.
But because we have access to the population data in this example, we can.
Let’s compare our bootstrap estimate to a set of (hypothetical) samples of size \\(n\=200\\) from the original `SF` flights (the population).
```
sf_200_pop <- 1:num_trials %>%
map_dfr(
~SF %>%
slice_sample(n = n, replace = TRUE) %>%
summarize(mean_arr_delay = mean(arr_delay))
) %>%
mutate(n = n)
sf_200_pop %>%
skim(mean_arr_delay)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 mean_arr_delay 500 0 2.59 3.34 -5.90 0.235 2.51 4.80 14.2
```
Notice that the population was not used in the bootstrap (`sf_200_bs`), just the original sample.
What’s remarkable here is that the standard error calculated using the bootstrap (3\.1 minutes) is a reasonable approximation to the standard error of the sampling distribution calculated by taking repeated samples from the population (3\.3 minutes).
The distribution of values in the bootstrap trials is called the [*bootstrap distribution*](https://en.wikipedia.org/w/index.php?search=bootstrap%20distribution).
It’s not exactly the same as the sampling distribution, but for moderate to large sample sizes and a sufficient number of bootstrap trials, it has been proven to approximate those aspects of the sampling distribution that we care most about, such as the standard
error and quantiles (B. Efron and Tibshirani 1993\).
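Quantiles of the bootstrap distribution can be used directly as well. As a brief sketch (not shown in the original text), a 95% bootstrap percentile interval for the mean arrival delay can be read off the middle 95% of the trials stored in `sf_200_bs`.

```
# Percentile-based 95% confidence interval from the bootstrap trials.
sf_200_bs %>%
  summarize(
    ci_lower = quantile(mean_arr_delay, 0.025),
    ci_upper = quantile(mean_arr_delay, 0.975)
  )
```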
### 9\.3\.1 Example: Setting travel policy
Let’s return to our original example of setting a travel policy for selecting flights from New York to San Francisco.
Recall that we decided to set a goal of arriving in time for the meeting 98% of the time.
We can calculate the \\(98^{th}\\) percentile from our sample of size \\(n \= 200\\) flights, and use bootstrapping to see how reliable that sample statistic is.
The sample itself suggests a policy of scheduling a flight to arrive 141 minutes early.
```
orig_sample %>%
summarize(q98 = quantile(arr_delay, p = 0.98))
```
```
# A tibble: 1 × 1
q98
<dbl>
1 141.
```
We can check the reliability of that estimate using bootstrapping.
```
n <- nrow(orig_sample)
sf_200_bs <- 1:num_trials %>%
map_dfr(
~orig_sample %>%
slice_sample(n = n, replace = TRUE) %>%
summarize(q98 = quantile(arr_delay, p = 0.98))
)
sf_200_bs %>%
skim(q98)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 q98 500 0 140. 29.2 53.0 123. 141 154. 196.
```
The bootstrapped standard error is about 29 minutes. The corresponding 95% confidence interval is 140 \\(\\pm\\) 58 minutes.
A policy based on this would be practically a shot in the dark: unlikely to hit the target.
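For reference, the interval quoted above can be reproduced from the bootstrap trials with the same mean\-plus\-or\-minus\-two\-standard\-errors recipe used earlier; this is a sketch, not output shown in the original text.

```
# Approximate 95% confidence interval for the 98th percentile of arrival delay.
sf_200_bs %>%
  summarize(q98_bar = mean(q98), se = sd(q98)) %>%
  mutate(
    ci_lower = q98_bar - 2 * se,
    ci_upper = q98_bar + 2 * se
  )
```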
One way to fix things might be to collect more data, hoping to get a more reliable estimate of the \\(98^{th}\\) percentile.
Imagine that we could do the work to generate a sample with \\(n \= 10,000\\) cases.
```
set.seed(1001)
n_large <- 10000
sf_10000_bs <- SF %>%
slice_sample(n = n_large, replace = FALSE)
sf_200_bs <- 1:num_trials %>%
map_dfr(~sf_10000_bs %>%
slice_sample(n = n_large, replace = TRUE) %>%
summarize(q98 = quantile(arr_delay, p = 0.98))
)
sf_200_bs %>%
skim(q98)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 q98 500 0 154. 4.14 139. 151. 153. 156. 169
```
The standard error is much smaller, and the corresponding 95% interval, 154 \\(\\pm\\) 8 minutes, is far narrower.
Having more data makes it possible to refine estimates, particularly in the tails.
9\.4 Outliers
-------------
One place where more data is helpful is in identifying unusual or extreme events: [*outliers*](https://en.wikipedia.org/w/index.php?search=outliers).
Suppose we consider any flight delayed by 7 hours (420 minutes) or more as an extreme event (see Section [15\.5](ch-sql.html#sec:ft8-flights)).
While an arbitrary choice, 420 minutes may be valuable as a marker for seriously delayed flights.
```
SF %>%
filter(arr_delay >= 420) %>%
select(month, day, dep_delay, arr_delay, carrier)
```
```
# A tibble: 7 × 5
month day dep_delay arr_delay carrier
<int> <int> <dbl> <dbl> <chr>
1 12 7 374 422 UA
2 7 6 589 561 DL
3 7 7 629 676 VX
4 7 7 653 632 VX
5 7 10 453 445 B6
6 7 10 432 433 VX
7 9 20 1014 1007 AA
```
Most of the very long delays (five of seven) were in July, and [*Virgin America*](https://en.wikipedia.org/w/index.php?search=Virgin%20America) (`VX`) is the most frequent offender.
Immediately, this suggests one possible route for improving the outcome of the business travel policy we have been asked to develop.
We could tell people to arrive extra early in July and to avoid `VX`.
But let’s not rush into this. The outliers themselves may be misleading.
These outliers account for a tiny fraction of the flights into San Francisco from New York in 2013\.
That’s a small component of our goal of having a failure rate of 2% in getting to meetings on time. And there was an even rarer, more extreme event at SFO in July 2013: the [crash\-landing of Asiana Airlines flight 214](https://en.wikipedia.org/wiki/Asiana_Airlines_Flight_214).
We might remove these points to get a better sense of the main part of the distribution.
Outliers can often tell us interesting things.
How they should be handled depends on their cause. Outliers due to data irregularities or errors should be fixed.
Other outliers may yield important insights.
Outliers should never be dropped unless there is a clear rationale.
If outliers are dropped this should be clearly reported.
Figure [9\.2](ch-foundations.html#fig:allflights2) displays the histogram without those outliers.
```
SF %>%
filter(arr_delay < 420) %>%
ggplot(aes(arr_delay)) +
geom_histogram(binwidth = 15) +
labs(x = "Arrival delay (in minutes)")
```
Figure 9\.2: Distribution of flight arrival delays in 2013 for flights to San Francisco from NYC airports that were delayed less than 7 hours. The distribution features a long right tail (even after pruning the outliers).
Note that the large majority of flights arrive without any delay or a delay of less than 60 minutes.
Might we be able to identify patterns that can presage when the longer delays are likely to occur?
The outliers suggested that `month` or `carrier` may be linked to long delays.
Let’s see how that plays out with the large majority of data.
```
SF %>%
mutate(long_delay = arr_delay > 60) %>%
group_by(month, long_delay) %>%
count() %>%
pivot_wider(names_from = month, values_from = n) %>%
data.frame()
```
```
long_delay X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12
1 FALSE 856 741 812 993 1128 980 966 1159 1124 1177 1107 1093
2 TRUE 29 21 61 112 65 209 226 96 65 36 51 66
```
We see that June and July (months 6 and 7\) are problem months.
```
SF %>%
mutate(long_delay = arr_delay > 60) %>%
group_by(carrier, long_delay) %>%
count() %>%
pivot_wider(names_from = carrier, values_from = n) %>%
data.frame()
```
```
long_delay AA B6 DL UA VX
1 FALSE 1250 934 1757 6236 1959
2 TRUE 148 86 91 492 220
```
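Raw counts are hard to compare across carriers that fly very different numbers of flights to `SFO`. As a quick sketch (not part of the original analysis), the same comparison can be expressed as proportions.

```
# Proportion of each carrier's flights arriving more than 60 minutes late.
SF %>%
  group_by(carrier) %>%
  summarize(prop_long_delay = mean(arr_delay > 60, na.rm = TRUE))
```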
[*Delta Airlines*](https://en.wikipedia.org/w/index.php?search=Delta%20Airlines) (`DL`) has reasonable performance.
These two simple analyses hint at a policy that might advise travelers to plan to arrive extra early in June and July and to consider Delta as an airline for travel to `SFO` (see Section [15\.5](ch-sql.html#sec:ft8-flights) for a fuller discussion of which airlines seem to have fewer delays in general).
9\.5 Statistical models: Explaining variation
---------------------------------------------
In the previous section, we used month of the year and airline to narrow down the situations in which the risk of an unacceptable flight delay is large.
Another way to think about this is that we are *explaining* part of the variation in arrival delay from flight to flight. [*Statistical modeling*](https://en.wikipedia.org/w/index.php?search=Statistical%20modeling) provides a way to relate variables to one another.
Doing so helps us better understand the system we are studying.
To illustrate modeling, let’s consider another question from the airline delays data set: What impact, if any, does scheduled
time of departure have on expected flight delay?
Many people think that earlier flights are less likely to be delayed, since flight delays tend to cascade over the course of the day.
Is this theory supported by the data?
We first begin by considering time of day.
In the **nycflights13** package, the `flights` data frame has a variable (`hour`) that specifies the *scheduled* hour of departure.
```
SF %>%
group_by(hour) %>%
count() %>%
pivot_wider(names_from = hour, values_from = n) %>%
data.frame()
```
```
X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16 X17 X18 X19 X20 X21
1 55 663 1696 987 429 1744 413 504 476 528 946 897 1491 1091 731 465 57
```
We see that many flights are scheduled in the early to mid\-morning and from the late afternoon to early evening.
None are scheduled before 5 am or after 10 pm.
Let’s examine how the arrival delay depends on the hour.
We’ll do this in two ways: first using standard box\-and\-whisker plots to show the distribution of arrival delays; second with a kind of statistical model called a [*linear model*](https://en.wikipedia.org/w/index.php?search=linear%20model) that lets us track the mean arrival delay over the course of the day.
```
SF %>%
ggplot(aes(x = hour, y = arr_delay)) +
geom_boxplot(alpha = 0.1, aes(group = hour)) +
geom_smooth(method = "lm") +
xlab("Scheduled hour of departure") +
ylab("Arrival delay (minutes)") +
coord_cartesian(ylim = c(-30, 120))
```
Figure 9\.3: Association of flight arrival delays with scheduled departure time for flights to San Francisco from New York airports in 2013\.
Figure [9\.3](ch-foundations.html#fig:schedhour) displays the arrival delay versus schedule departure hour. The average arrival delay increases over the course of the day.
The trend line itself is created via a regression model (see Appendix [E](ch-regression.html#ch:regression)).
```
mod1 <- lm(arr_delay ~ hour, data = SF)
broom::tidy(mod1)
```
```
# A tibble: 2 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -22.9 1.23 -18.6 2.88e- 76
2 hour 2.01 0.0915 22.0 1.78e-105
```
The number under the “estimate” for `hour` indicates that the arrival delay is predicted to be about 2 minutes higher per hour.
Over the 15 hours of flights, this leads to a 30\-minute increase in arrival delay comparing flights at the end of the day to flights at the beginning of the day.
The `tidy()` function from the **broom** package also calculates the standard error: 0\.09 minutes per hour.
Stated as a 95% confidence interval, this model indicates that we are 95% confident that the true arrival delay increases by \\(2\.0 \\pm 0\.18\\) minutes per hour.
The rightmost column gives the [*p\-value*](https://en.wikipedia.org/w/index.php?search=p-value), a way of translating the estimate and standard error onto a scale from zero to one.
By convention, p\-values below 0\.05 provide a kind of certificate testifying that random, accidental patterns would be unlikely to generate an estimate as large as that observed.
The tiny p\-value for `hour` given in the report (`1.78e-105`, a number vanishingly close to zero) is another way of saying that if there were no association between time of day and flight delays, we would be *very* unlikely to see a result this extreme or more extreme.
Re\-read those last three sentences.
Confusing? Despite an almost universal practice of presenting p\-values, they are mostly misunderstood even by scientists and other professionals.
The p\-value conveys much less information than usually supposed: The “certificate” might not be worth the paper it’s printed on (see Section [9\.7](ch-foundations.html#sec:p-perils)).
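For reference, the confidence interval quoted above can be obtained directly from the fitted model. This is a sketch (not shown in the original text) that uses the `conf.int` argument of `tidy()`.

```
# Request confidence limits along with the coefficient estimates.
broom::tidy(mod1, conf.int = TRUE) %>%
  select(term, estimate, conf.low, conf.high)
```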
Can we do better?
What additional factors might help to explain flight delays?
Let’s look at departure airport, carrier (airline), month of the year, and day of the week.
Some wrangling will let us extract the day of the week (`dow`) from the year, month, and day of month.
We’ll also create a variable `season` that summarizes what we already know about the month: that June and July are the months with long delays.
These will be used as [*explanatory variables*](https://en.wikipedia.org/w/index.php?search=explanatory%20variables) to account for the [*response variable*](https://en.wikipedia.org/w/index.php?search=response%20variable): arrival delay.
```
library(lubridate)
SF <- SF %>%
mutate(
day = as.Date(time_hour),
dow = as.character(wday(day, label = TRUE)),
season = ifelse(month %in% 6:7, "summer", "other month")
)
```
Now we can build a model that includes variables we want to use to explain arrival delay.
```
mod2 <- lm(arr_delay ~ hour + origin + carrier + season + dow, data = SF)
broom::tidy(mod2)
```
```
# A tibble: 14 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -24.6 2.17 -11.3 1.27e- 29
2 hour 2.08 0.0898 23.2 1.44e-116
3 originJFK 4.12 1.00 4.10 4.17e- 5
4 carrierB6 -10.3 1.88 -5.49 4.07e- 8
5 carrierDL -18.4 1.62 -11.4 5.88e- 30
6 carrierUA -4.76 1.48 -3.21 1.31e- 3
7 carrierVX -5.06 1.60 -3.17 1.54e- 3
8 seasonsummer 25.3 1.03 24.5 5.20e-130
9 dowMon 1.74 1.45 1.20 2.28e- 1
10 dowSat -5.60 1.55 -3.62 2.98e- 4
11 dowSun 5.12 1.48 3.46 5.32e- 4
12 dowThu 3.16 1.45 2.18 2.90e- 2
13 dowTue -1.65 1.45 -1.14 2.53e- 1
14 dowWed -0.884 1.45 -0.610 5.42e- 1
```
The numbers in the “estimate” column tell us that we should add 4\.1 minutes to the average delay if departing from `JFK` (instead of `EWR`, also known as [*Newark*](https://en.wikipedia.org/w/index.php?search=Newark), which is the reference group).
Delta has a better average delay than the other carriers.
Delays are on average longer in June and July (by 25 minutes), and on Sundays (by 5 minutes).
Recall that the Asiana crash was in July.
The model also indicates that Sundays are associated with roughly 5 minutes of additional delays; Saturdays are 6 minutes less delayed on average.
(Each of the days of the week is being compared to Friday, chosen as the reference group because it comes first alphabetically.)
The standard errors tell us the precision of these estimates; the p\-values describe whether the individual patterns are consistent with what might be expected to occur by accident even if there were no systematic association between the variables.
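To make these coefficients concrete, the fitted model can be used to predict the mean arrival delay for a hypothetical flight. The example flight below (a 7 pm United departure from `EWR` on a summer Monday) is invented for illustration and does not appear in the original text.

```
# A hypothetical flight, used only to illustrate prediction from mod2.
new_flight <- tibble(
  hour = 19, origin = "EWR", carrier = "UA",
  season = "summer", dow = "Mon"
)
predict(mod2, newdata = new_flight)
```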
In this example, we’ve used `lm()` to construct what are called [*linear models*](https://en.wikipedia.org/w/index.php?search=linear%20models).
Linear models describe how the mean of the response variable varies with the explanatory variables.
They are the most widely used statistical modeling technique, but there are others.
In particular, since our original motivation was to set a policy about business travel, we might want a modeling technique that lets us look at another question: What is the probability that a flight will be, say, greater than 100 minutes late?
Without going into detail, we’ll mention that a technique called [*logistic regression*](https://en.wikipedia.org/w/index.php?search=logistic%20regression) is appropriate for such [*dichotomous*](https://en.wikipedia.org/w/index.php?search=dichotomous) outcomes (see Chapter [11](ch-learningI.html#ch:learningI) and Section [E.5](ch-regression.html#sec:logistic) for more examples).
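As a hedged sketch of what such a model might look like (this particular model is not fit in the original text), a logistic regression for the probability of arriving more than 100 minutes late could reuse the same predictors.

```
# Model the probability that a flight arrives more than 100 minutes late.
mod_late <- glm(
  I(arr_delay > 100) ~ hour + origin + carrier + season + dow,
  data = SF, family = binomial
)
broom::tidy(mod_late)
```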
9\.6 Confounding and accounting for other factors
-------------------------------------------------
We drill the mantra [*correlation does not imply causation*](https://en.wikipedia.org/w/index.php?search=correlation%20does%20not%20imply%20causation) into students whenever statistics are discussed.
While the statement is certainly true, it may not be so helpful.
There are many times when correlations *do* imply causal relationships (beyond just in carefully conducted [*randomized trials*](https://en.wikipedia.org/w/index.php?search=randomized%20trials)).
A major concern for observational data is whether the true associations are being distorted by *other factors* that may be the actual determinants of the observed relationship between two factors.
Such other factors may [*confound*](https://en.wikipedia.org/w/index.php?search=confound) the relationship being studied.
Randomized trials in scientific experiments are considered the gold standard for evidence\-based research.
Such trials, sometimes called [*A/B tests*](https://en.wikipedia.org/w/index.php?search=A/B%20tests), are commonly undertaken to compare the effect of a treatment (e.g., two different forms of a Web page).
By controlling who receives a new intervention and who receives a control (or standard
treatment), the investigator ensures that, on average, all other factors are balanced between the two groups.
This allows them to conclude that if there are differences in the outcomes
measured at the end of the trial, they can be attributed to the
application of the treatment.
(It’s worth noting that randomized trials can also suffer from confounding if subjects don’t comply with treatments or are lost to follow\-up.)
While they are ideal, randomized trials are not practical in many settings.
It is not ethical to
randomize some children to smoke and the others not to smoke in order to determine whether cigarettes cause lung cancer.
It is not
practical to randomize adults to either drink coffee or abstain to determine whether it has
long\-term health impacts.
Observational (or “found”) data may be the only feasible way to answer important questions.
Let’s consider an example of confounding using observational data on average teacher salaries (in 2010\) and average total SAT scores
for each of the 50 United States.
The SAT ([*Scholastic Aptitude Test*](https://en.wikipedia.org/w/index.php?search=Scholastic%20Aptitude%20Test)) is a high\-stakes exam used for entry into college.
Are higher teacher salaries associated with better outcomes on the test at the state level?
If so, should we adjust salaries to improve test performance?
Figure [9\.4](ch-foundations.html#fig:sat1) displays a scatterplot of these data.
We also fit a linear regression model.
```
SAT_2010 <- SAT_2010 %>%
mutate(Salary = salary/1000)
SAT_plot <- ggplot(data = SAT_2010, aes(x = Salary, y = total)) +
geom_point() +
geom_smooth(method = "lm") +
ylab("Average total score on the SAT") +
xlab("Average teacher salary (thousands of USD)")
SAT_plot
```
Figure 9\.4: Scatterplot of average SAT scores versus average teacher salaries (in thousands of dollars) for the 50 United States in 2010\.
```
SAT_mod1 <- lm(total ~ Salary, data = SAT_2010)
broom::tidy(SAT_mod1)
```
```
# A tibble: 2 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 1871. 113. 16.5 1.86e-21
2 Salary -5.02 2.05 -2.45 1.79e- 2
```
Lurking in the background, however, is another important factor.
The percentage of students who take the SAT in each state varies dramatically (from 3% to 93% in 2010\).
We can create a variable called `SAT_grp` that divides the states into two groups.
```
SAT_2010 %>%
skim(sat_pct)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 sat_pct 50 0 38.5 32.0 3 6 27 68 93
```
```
SAT_2010 <- SAT_2010 %>%
mutate(SAT_grp = ifelse(sat_pct <= 27, "Low", "High"))
SAT_2010 %>%
group_by(SAT_grp) %>%
count()
```
```
# A tibble: 2 × 2
# Groups: SAT_grp [2]
SAT_grp n
<chr> <int>
1 High 25
2 Low 25
```
Figure [9\.5](ch-foundations.html#fig:sat2) displays a scatterplot of these data stratified by the grouping of percentage taking the SAT.
```
SAT_plot %+% SAT_2010 +
aes(color = SAT_grp) +
scale_color_brewer("% taking\nthe SAT", palette = "Set2")
```
Figure 9\.5: Scatterplot of average SAT scores versus average teacher salaries (in thousands of dollars) for the 50 United States in 2010, stratified by the percentage of students taking the SAT in each state.
Using techniques developed in Section [7\.5](ch-iteration.html#sec:group-map), we can derive the coefficients of the linear model fit to the two separate groups.
```
SAT_2010 %>%
group_by(SAT_grp) %>%
group_modify(~broom::tidy(lm(total ~ Salary, data = .x)))
```
```
# A tibble: 4 × 6
# Groups: SAT_grp [2]
SAT_grp term estimate std.error statistic p.value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 High (Intercept) 1428. 62.4 22.9 2.51e-17
2 High Salary 1.16 1.06 1.09 2.85e- 1
3 Low (Intercept) 1583. 141. 11.2 8.52e-11
4 Low Salary 2.22 2.75 0.809 4.27e- 1
```
For each of the groups, average teacher salary is positively associated with average SAT score.
But when we collapse over this variable, average teacher salary is negatively associated with average SAT score.
This form of confounding is a quantitative version of [*Simpson’s paradox*](https://en.wikipedia.org/w/index.php?search=Simpson's%20paradox) and arises in many situations.
It can be summarized in the following way:
* Among states with a low percentage taking the SAT, teacher salaries and SAT scores are positively associated.
* Among states with a high percentage taking the SAT, teacher salaries and SAT scores are positively associated.
* Among all states, salaries and SAT scores are negatively associated.
Addressing confounding is straightforward if the confounding variables are measured.
Stratification is one approach (as seen above).
Multiple regression is another technique.
Let’s add the `sat_pct` variable as an additional predictor into the regression model.
```
SAT_mod2 <- lm(total ~ Salary + sat_pct, data = SAT_2010)
broom::tidy(SAT_mod2)
```
```
# A tibble: 3 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 1589. 58.5 27.2 2.16e-30
2 Salary 2.64 1.15 2.30 2.62e- 2
3 sat_pct -3.55 0.278 -12.8 7.11e-17
```
We now see that the slope for `Salary` is positive and statistically significant when
we control for `sat_pct`.
This is consistent with the results when the model
was stratified by `SAT_grp`.
We still can’t really conclude that teacher salaries cause improvements in SAT scores.
However, the associations that we observe after accounting for the confounding are likely more reliable than those that do not take those factors into account.
Data scientists spend most of their time working with observational data.
When seeking to find meaning from such data, it is important to look out for potential confounding factors that could distort observed associations.
9\.7 The perils of p\-values
----------------------------
We close with a reminder of the perils of [*null hypothesis statistical testing*](https://en.wikipedia.org/w/index.php?search=null%20hypothesis%20statistical%20testing).
Recall that a p\-value is defined as the probability of seeing a sample statistic
as extreme as (or more extreme than) the one that was observed if it were really the case that patterns in the data are a result of random chance.
This hypothesis, that only randomness is in play, is called the [*null hypothesis*](https://en.wikipedia.org/w/index.php?search=null%20hypothesis).
For the earlier models involving the airlines data, the null hypothesis would be that there is no association between the predictors and the flight delay.
For the SAT and salary example, the null hypothesis would be that the
true (population) regression coefficient (slope) is zero.
Historically, when using [*hypothesis testing*](https://en.wikipedia.org/w/index.php?search=hypothesis%20testing), analysts have declared results with a p\-value of 0\.05 or smaller as [*statistically significant*](https://en.wikipedia.org/w/index.php?search=statistically%20significant), while values larger than 0\.05 are declared non\-significant.
That threshold is called the [*alpha level*](https://en.wikipedia.org/w/index.php?search=alpha%20level) of the test, denoted \\(\\alpha\\).
If the null hypothesis is true, hypothesis testers would incorrectly reject the null hypothesis \\(100 \\cdot \\alpha\\)% of the time.
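A small simulation (a sketch, not part of the original text) illustrates the point: when the null hypothesis is true, roughly \\(100 \\cdot \\alpha\\)% of tests will reject it by chance alone.

```
# Simulate 1,000 data sets for which the null hypothesis (true mean of 0) holds,
# and record how often a t-test rejects at the 0.05 level.
set.seed(2013)
p_values <- map_dbl(1:1000, ~ t.test(rnorm(25))$p.value)
mean(p_values < 0.05) # should be close to 0.05
```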
There are a number of serious issues with this form of “all or nothing thinking.”
Recall that p\-values are computed by simulating a world in which a null hypothesis is set to be true (see Chapter [13](ch-simulation.html#ch:simulation)).
The p\-value indicates the quality of the concordance between the data and the simulation results.
A large p\-value indicates the data are concordant with the simulation.
A very small p\-value means otherwise: that the simulation is irrelevant to describing the mechanism behind the observed patterns.
Unfortunately, that in itself tells us little about what kind of hypothesis would be relevant.
Ironically, a “significant result” means that we get to reject the null hypothesis but doesn’t tell us what hypothesis to accept!
Always report the actual p\-value (or a statement that it is less than some small value such as
`p < 0.0001`) rather than just the decision (reject null vs. fail to reject the null).
In addition, confidence intervals are often more interpretable and should be reported as well.
Null hypothesis testing and p\-values are a vexing topic for many analysts.
To help clarify these issues, the American Statistical Association endorsed a statement on p\-values (Wasserstein and Lazar 2016\) that laid out six useful principles:
* p\-values can indicate how incompatible the data are
with a specified statistical model.
* p\-values do not measure the probability that the studied
hypothesis is true, or the probability that the data
were produced by random chance alone.
* Scientific conclusions and business or policy decisions
should not be based only on whether a p\-value passes
a specific threshold.
* Proper inference requires full reporting and transparency.
* A p\-value, or statistical significance, does not measure
the size of an effect or the importance of a result.
* By itself, a p\-value does not provide a good measure of
evidence regarding a model or hypothesis.
More recent guidance (Wasserstein, Schirm, and Lazar 2019\) suggested the ATOM proposal: “Accept uncertainty, be Thoughtful, Open, and Modest.”
The problem with p\-values is even more vexing in most real\-world investigations.
Analyses might involve not just a single hypothesis test but instead have dozens or more.
In such a situation, even small p\-values do not demonstrate discordance between the data and the null hypothesis, so the statistical analysis may tell us nothing at all.
In an attempt to restore meaning to p\-values, investigators are starting
to clearly delineate and pre\-specify the primary and secondary outcomes for a randomized trial.
Imagine that such a trial has five outcomes that are defined as being of primary interest.
If we use the usual procedure (declaring a test statistically significant whenever its p\-value is less than 0\.05\), if all of the null hypotheses are true, and if the tests are independent, we would expect to reject one or more of the null hypotheses more than 22% of the time (considerably more often than the 5% we want).
```
1 - (1 - 0.05)^5
```
```
[1] 0.226
```
Clinical trialists have sometimes adapted to this problem by using more stringent statistical determinations.
A simple, albeit conservative approach is use of a [*Bonferroni correction*](https://en.wikipedia.org/w/index.php?search=Bonferroni%20correction).
Consider dividing our \\(\\alpha\\)\-level by the number of tests, and only rejecting the null hypothesis
when the p\-value is less than this adjusted value.
In our example, the new threshold would be 0\.01 (and the overall experiment\-wise error rate is preserved at 0\.05\).
```
1 - (1 - 0.01)^5
```
```
[1] 0.049
```
For observational analyses without pre\-specified protocols, it is much harder to determine what
(if any) Bonferroni correction is appropriate.
For analyses that involve many hypothesis tests it is appropriate to include a note of possible limitations that some of the results may be spurious due to [*multiple comparisons*](https://en.wikipedia.org/w/index.php?search=multiple%20comparisons).
A related problem has been called the [*garden of forking paths*](https://en.wikipedia.org/w/index.php?search=garden%20of%20forking%20paths) by [Andrew Gelman](https://en.wikipedia.org/w/index.php?search=Andrew%20Gelman) of [*Columbia University*](https://en.wikipedia.org/w/index.php?search=Columbia%20University).
Most analyses involve many decisions about how to code data, determine important factors, and formulate and then revise models before the final analyses are set.
This process involves looking at the data to construct a parsimonious representation.
For example, a continuous predictor might be cut into some arbitrary groupings to assess the relationship between that predictor and the outcome.
Or certain variables might be included or excluded from a regression model in an exploratory process.
This process tends to lead towards hypothesis tests that are biased against a null result, since decisions that yield more of a signal (or smaller p\-value) might be chosen rather than other options.
In clinical trials, the garden of forking paths problem may be less common, since analytic plans need to be prespecified and published.
For most data science problems, however, this is a vexing issue that leads to questions about [*reproducible results*](https://en.wikipedia.org/w/index.php?search=reproducible%20results).
9\.8 Further resources
----------------------
While this chapter raises many important issues related to the appropriate use of statistics in data science, it can only scratch the surface.
A number of accessible books provide background in basic statistics (Diez, Barr, and Çetinkaya\-Rundel 2019\) and statistical practice (Belle 2008; Good and Hardin 2012\).
Rice (2006\) provides a modern introduction to the foundations of statistics
as well as a detailed derivation of the sampling distribution of the median (pages 409–410\).
Other resources related to theoretical statistics can be found in D. Nolan and Speed (1999\); Horton, Brown, and Qian (2004\); Horton (2013\); Green and Blankenship (2015\).
Shalizi’s forthcoming [*Advanced Data Analysis from an Elementary Point of View*](http://www.stat.cmu.edu/~cshalizi/ADAfaEPoV) provides a technical introduction to a wide range of important topics in statistics, including causal inference.
Wasserstein and Lazar (2016\) laid out principles for the appropriate use of p\-values.
A special issue of *The American Statistician* was devoted to issues around p\-values (Wasserstein, Schirm, and Lazar 2019\).
T. C. Hesterberg et al. (2005\) and T. Hesterberg (2015\) discuss the potential and perils for resampling\-based inference. Bradley Efron and Hastie (2016\) provide an overview of modern inference techniques.
Missing data provide job security for data scientists since they arise in almost all real\-world
studies.
A number of principled approaches have been developed to account for missing values, most notably multiple imputation.
Accessible
references to the extensive literature on incomplete data include Little and Rubin (2002\); Raghunathan (2004\); Horton and Kleinman (2007\).
While clinical trials are often considered a gold standard for evidence\-based decision making,
it is worth noting that they are almost always imperfect.
Subjects may not comply with the intervention that they were randomized to.
They may break the
[*blinding*](https://en.wikipedia.org/w/index.php?search=blinding) and learn what treatment they have been assigned.
Some subjects may drop out
of the study.
All of these issues complicate analysis and interpretation and have led to
improvements in trial design and analysis along with the development of causal inference models.
The CONSORT (Consolidated Standards of Reporting Trials) statement
([http://www.consort\-statement.org](http://www.consort-statement.org)) was developed to alleviate problems with trial
reporting.
Reproducibility and the perils of multiple comparisons have been the subject of much discussion
in recent years.
Nuzzo (2014\) summarizes why p\-values are not as reliable as often assumed.
The STROBE ([Strengthening the Reporting of Observational Studies in Epidemiology](http://www.strobe-statement.org))
statement discusses ways to improve the use of inferential methods (see also Appendix [D](ch-reproduce.html#ch:reproduce)).
Aspects of ethics and bias are covered in detail in Chapter [8](ch-ethics.html#ch:ethics).
9\.9 Exercises
--------------
**Problem 1 (Easy)**: We saw that a 95% confidence interval for a mean was constructed by taking the estimate and adding and subtracting two standard deviations. How many standard deviations should be used if a 99% confidence interval is desired?
**Problem 2 (Easy)**: Calculate and interpret a 95% confidence interval for the mean age of mothers from the
`Gestation` data set from the `mosaicData` package.
**Problem 3 (Medium)**: Use the bootstrap to generate and interpret a 95% confidence interval for the median age of mothers
for the `Gestation` data set from the `mosaicData` package.
**Problem 4 (Medium)**: The `NHANES` data set in the `NHANES` package includes survey data collected by the U.S. National Center for Health Statistics (NCHS), which has conducted a series of health and nutrition surveys since the early 1960s.
1. An investigator is interested in fitting a model to predict the probability that a female subject will have a diagnosis of diabetes. Predictors for this model include age and BMI. Imagine that only 1/10 of the data are available but that these data are sampled randomly from the full set of observations (this mechanism is called “Missing Completely at Random,” or MCAR). What implications will this sampling have on the results?
2. Imagine that only 1/10 of the data are available but that these data are sampled from the full set of observations such that missingness depends on age, with older subjects less likely to be observed than younger subjects (this mechanism is called “Covariate Dependent Missingness,” or CDM). What implications will this sampling have on the results?
3. Imagine that only 1/10 of the data are available but that these data are sampled from the full set of observations such that missingness depends on diabetes status (this mechanism is called “Non\-Ignorable Non\-Response,” or NINR). What implications will this sampling have on the results?
**Problem 5 (Medium)**: Use the bootstrap to generate a 95% confidence interval for the regression parameters in
a model for weight as a function of age for the `Gestation` data frame from the `mosaicData` package.
**Problem 6 (Medium)**: A data scientist working for a company that sells mortgages for new home purchases might be interested in determining what factors might be predictive of defaulting on the loan. Some of the mortgagees have missing income
in their data set. Would it be reasonable for the analyst to drop these loans from their analytic data set? Explain.
**Problem 7 (Medium)**: The `Whickham` data set in the `mosaicData` package includes data on age, smoking, and mortality from a one\-in\-six survey of the electoral roll in Whickham, a mixed urban and rural district near Newcastle upon Tyne, in the United Kingdom. The survey was conducted in 1972–1974 to study heart disease and thyroid disease. A follow\-up on those in the survey was conducted 20 years later. Describe the association between smoking status and mortality in this study. Be sure to consider the role of age as a possible confounding factor.
9\.10 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-foundations.html\#datavizI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-foundations.html#datavizI-online-exercises)
**Problem 1 (Medium)**: Here is a short excerpt from an article, “Benefits of the Dinner Table Ritual,” in the *New York Times*, May 3, 2005\.
The family dinner has long been an example of family togetherness. But recently, scientists have been coming up with compelling reasons … for families to pull up a chair around the table.
Recent studies have begun to shore up the idea that family dinners \[that is, eating dinner together as a family] can have an effect.
For example, a 2004 study of 4,746 children 11 to 18 years old, published in *The Archives of Pediatrics and Adolescent Medicine*, found that frequent family meals were associated with a lower risk of smoking, drinking and using marijuana; with a lower incidence of depressive symptoms and suicidal thoughts; and with better grades.
Another study last year, a survey of 12\- to 17\-year\-olds by the National Center on Addiction and Substance Abuse at Columbia University, found that teenagers who reported eating two or fewer dinners a week with family members were more than one and a half times as likely to smoke, drink or use illegal substances than were teenagers who had five to seven family dinners. \\(\\ldots\\) A study from the University of Minnesota published last year found that adolescent girls who reported
having more frequent family meals and a positive atmosphere during those meals were less likely to have eating disorders.
Explain in what ways the studies, as reported, do and do not provide a compelling reason for families to eat together frequently.
Considering the study done by the National Center on Addiction and Substance Abuse, describe what might have been the explanatory and response variables measured, and what sort of model they would have
used.
**Problem 2 (Medium)**: In 2010, the Minnesota Twins played their first season at Target Field. However, up
through 2009, the Twins played at the Metrodome (an indoor stadium). In the Metrodome, air ventilator fans are used both to keep the roof up and to ventilate the stadium. Typically, the air is blown from all directions into the center of the stadium.
According to a retired supervisor in the Metrodome, in the late innings
of some games the fans would be modified so that the ventilation
air would blow out from home plate toward the outfield. The idea is that the
air flow might increase the length of a fly ball. To see if manipulating
the fans could possibly make any difference, a group of students at the
University of Minnesota and their professor built a “cannon” that used
compressed air to shoot baseballs. They then did the following experiment.
* Shoot balls at angles around 50 degrees with velocity of around 150 feet per second.
* Shoot balls under two different settings: headwind (air blowing from outfield toward
home plate) or tailwind (air blowing from home plate toward outfield).
* Record other variables: weight of the ball (in grams), diameter of the ball (in cm), and
distance of the ball’s flight (in feet).
Background: People who know little or nothing about baseball might find these basic facts useful. The batter stands near “home plate” and tries to hit the ball toward the outfield. A “fly ball” refers to a ball that is hit into the air. It is desirable to hit
the ball as far as possible. For reasons of basic physics, the distance is maximized when the ball is hit at an intermediate angle steeper than 45 degrees from the horizontal.
Description of variables:
* `Cond`: the wind conditions, a categorical variable with levels `Headwind`, `Tailwind`
* `Angle`: the angle of ball’s trajectory
* `Velocity`: velocity of ball in feet per second
* `BallWt`: weight of ball in grams
* `BallDia`: diameter of ball in inches
* `Dist`: distance in feet of the flight of the ball
Here is the
output of several models:
```
> lm1 <- lm(Dist ~ Cond, data = ds) # FIRST MODEL
```
```
> summary(lm1)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 350.768 2.179 160.967 <2e-16 ***
CondTail 5.865 3.281 1.788 0.0833 .
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 9.499 on 32 degrees of freedom
Multiple R-squared: 0.0908, Adjusted R-squared: 0.06239
F-statistic: 3.196 on 1 and 32 DF, p-value: 0.0833
```
```
> confint(lm1)
2.5 % 97.5 %
(Intercept) 346.32966 355.20718
CondTail -0.81784 12.54766
```
```
> # SECOND MODEL
> lm2 <- lm(Dist ~ Cond + Velocity + Angle + BallWt + BallDia, data = ds)
> summary(lm2)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 181.7443 335.6959 0.541 0.59252
CondTail 7.6705 2.4593 3.119 0.00418 **
Velocity 1.7284 0.5433 3.181 0.00357 **
Angle -1.6014 1.7995 -0.890 0.38110
BallWt -3.9862 2.6697 -1.493 0.14659
BallDia 190.3715 62.5115 3.045 0.00502 **
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 6.805 on 28 degrees of freedom
Multiple R-squared: 0.5917, Adjusted R-squared: 0.5188
F-statistic: 8.115 on 5 and 28 DF, p-value: 7.81e-05
```
```
> confint(lm2)
2.5 % 97.5 %
(Intercept) -505.8974691 869.386165
CondTail 2.6328174 12.708166
Velocity 0.6155279 2.841188
Angle -5.2874318 2.084713
BallWt -9.4549432 1.482457
BallDia 62.3224999 318.420536
```
1. Consider the results from the model of `Dist` as a function
of `Cond` (first model). Briefly summarize what this model says about the relationship between the wind conditions and the distance travelled by the ball. Make sure to say something sensible about the strength of evidence that there is any relationship at all.
2. Briefly summarize the model that has `Dist` as the response variable and includes the other variables as explanatory variables (second model) by reporting and interpreting the `CondTail` parameter. This second model suggests a somewhat different result for the relationship between `Dist` and `Cond`. Summarize the differences and explain in statistical terms why the inclusion of the other explanatory variables has affected the results.
---
Chapter 10 Predictive modeling
==============================
Thus far, we have discussed two primary methods for investigating relationships among variables in our data: graphics and regression models. Graphics are often interpretable through intuitive inspection alone. They can be used to identify patterns and multivariate relationships in data—this is called [*exploratory data analysis*](https://en.wikipedia.org/w/index.php?search=exploratory%20data%20analysis). Regression models can help us quantify the magnitude and direction of relationships among variables. Thus, both are useful for helping us understand the world and then tell a coherent story about it.
However, graphics are not always the best way to explore or to present data. Graphics work well when there are two or three or even four variables involved. As we saw in Chapter [2](ch-vizI.html#ch:vizI), two variables can be represented with position on paper or on screen via a scatterplot. Ultimately, that information is processed by the eye’s retina. To represent a third variable, color or size can be used. In principle, more variables can be represented by other graphical aesthetics: shape, angle, color saturation, opacity, facets, etc., but doing so raises problems for human cognition—people simply struggle to integrate so many graphical modes into a coherent whole.
While regression scales well into higher dimensions, it is a limited modeling framework. Rather, it is just one type of model, and the space of all possible models is infinite. In the next three chapters we will explore this space by considering a variety of models that exist outside of a regression framework. The idea that a general specification for a model could be tuned to a specific data set automatically has led to the field of [*machine learning*](https://en.wikipedia.org/w/index.php?search=machine%20learning).
The term machine learning was coined in the late 1950s to describe a set of inter\-related algorithmic techniques for extracting information from data without human intervention.
In the days before computers, the dominant modeling framework was regression, which is based heavily on the mathematical disciplines of linear algebra and calculus.
Many of the important concepts in machine learning emerged from the development of regression, but models that are associated with machine learning tend to be valued more for their ability to make accurate predictions and scale to large data sets, as opposed to the mathematical simplicity, ease of interpretation of the parameters, and solid inferential setting that has made regression so widespread (Bradley Efron 2020\).
Nevertheless, regression and related statistical techniques from Chapter [9](ch-foundations.html#ch:foundations) provide an important foundation for understanding machine learning.
Appendix [E](ch-regression.html#ch:regression) provides a brief overview of regression modeling.
There are two main branches in machine learning: [*supervised learning*](https://en.wikipedia.org/w/index.php?search=supervised%20learning) (modeling a specific response variable as a function of some explanatory variables) and [*unsupervised learning*](https://en.wikipedia.org/w/index.php?search=unsupervised%20learning) (approaches to finding patterns or groupings in data where there is no clear response variable).
In unsupervised learning, the outcome is unmeasured, and thus the task is often framed as a search for otherwise [*unmeasured features*](https://en.wikipedia.org/w/index.php?search=unmeasured%20features) of the cases.
For instance, assembling DNA data into an evolutionary tree is a problem in unsupervised learning.
No matter how much DNA data you have, you don’t have a direct measurement of where each organism fits on the “true” evolutionary tree.
Instead, the problem is to create a representation that organizes the DNA data themselves.
By contrast, in supervised learning—which includes linear and logistic regression—the data being studied already include measurements of outcome variables.
For instance, in the **NHANES** data, there is already a variable indicating whether or not a person has diabetes.
These outcome variables are often referred to as [*labels*](https://en.wikipedia.org/w/index.php?search=labels).
Building a model to explore or describe how other variables (often called [*features*](https://en.wikipedia.org/w/index.php?search=features) or predictors) are related to diabetes (weight? age? smoking?) is an exercise in supervised learning.
We discuss metrics for model evaluation in this chapter, several types of supervised learning models in the next, and postpone discussion of unsupervised learning to Chapter [12](ch-learningII.html#ch:learningII).
It is important to understand that we cannot provide an in\-depth treatment of each technique in this book.
Rather, our goal is to provide a high\-level overview of machine learning techniques that you are likely to come across.
By working through these chapters, you will understand the general goals of machine learning, the evaluation techniques that are typically employed, and the basic models that are most commonly used.
For a deeper understanding of these techniques, we strongly recommend G. James et al. (2013\) or Hastie, Tibshirani, and Friedman (2009\).
10\.1 Predictive modeling
-------------------------
The basic goal of predictive modeling is to find a [*function*](https://en.wikipedia.org/w/index.php?search=function) that accurately describes how different measured explanatory variables can be combined to make a prediction about a response variable.
A function represents a relationship between inputs and an output (see Appendix [C](ch-function.html#ch:function)). Outdoor temperature is a function of season: Season is the input; temperature is the output. Length of the day—i.e., how many hours of
daylight—is a function of latitude and day of the year: Latitude and day of the year (e.g., March 22\) are the inputs; day length is the output.
Modeling a person’s risk of developing [*diabetes*](https://en.wikipedia.org/w/index.php?search=diabetes) could also be a function.
We might suspect that age and obesity are likely informative, but how should they be combined?
A bit of **R** syntax will help with defining functions: the [*tilde*](https://en.wikipedia.org/w/index.php?search=tilde). The tilde is used to define what the output variable (or outcome, on the
left\-hand side) is and what the input variables (or predictors, on the right\-hand side) are. You’ll see expressions like this:
```
diabetic ~ age + sex + weight + height
```
Here, the variable `diabetic` is marked as the output, simply because it is on the left of the tilde (`~`). The variables `age`, `sex`, `weight`, and `height` are to be the inputs to the function. You may also see the form `diabetic ~ .` in certain places. The dot to the right of the tilde is a shortcut that means: “use all the available variables (except the output).” The object above has class `formula` in **R**.
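As a quick illustration (not part of the original text), a formula is an ordinary **R** object that can be stored and inspected before being handed to a modeling function.

```
# Formulas are first-class objects in R.
form <- diabetic ~ age + sex + weight + height
class(form)    # "formula"
all.vars(form) # the variable names used in the formula
```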
There are several different goals that might motivate constructing a function.
* Predict the output given an input. It is February, what will the temperature be? Or on June 15th in [*Northampton, MA*](https://en.wikipedia.org/w/index.php?search=Northampton,%20MA), U.S.A. (latitude 42\.3 deg N), how many hours of daylight will there be?
* Determine which variables are useful inputs. It is obvious from experience that temperature is a function of season. But in less familiar situations, e.g., predicting diabetes, the relevant inputs are uncertain or unknown.
* Generate hypotheses. For a scientist trying to figure out the causes of diabetes, it can be useful to construct a predictive model, then look to see what variables turn out to be related to the risk of developing this disorder. For instance, you might find that diet, age, and blood pressure are risk factors. Socioeconomic status is not a direct cause of diabetes, but it might be that there is an association through factors related to the accessibility of health care. That “might be” is a hypothesis, and one that you probably would not have thought of before finding a function relating risk of diabetes to those inputs.
* Understand how a system works. For instance, a reasonable function relating hours of daylight to day\-of\-the\-year and latitude reveals that the northern and southern hemisphere have reversed patterns: Long days in the southern hemisphere will be short days in the northern hemisphere.
Depending on your motivation, the kind of model and the input variables may differ. In understanding how a system works, the variables you use should be related to the actual, causal mechanisms involved, e.g., the genetics of diabetes. For predicting an output, it hardly matters what the causal mechanisms are. Instead, all that’s required is that the inputs are known at a time
*before* the prediction is to be made.
10\.2 Simple classification models
----------------------------------
Classifiers are an important complement to regression models in the fields of machine learning and predictive modeling.
Whereas regression models have a quantitative response variable (and can thus often be visualized as a geometric surface), classification models have a categorical response (and are often visualized as a discrete surface, i.e., a tree).
To reduce cognitive overhead, we will restrict our attention in this chapter to classification models based on logistic regression.
In the next chapter, we will introduce other types of [*classifiers*](https://en.wikipedia.org/w/index.php?search=classifiers).
A [*logistic regression*](https://en.wikipedia.org/w/index.php?search=logistic%20regression) model (see Appendix [E](ch-regression.html#ch:regression)) can take a set of explanatory variables (or features) and convert them into a predicted probability.
In such a model, the analyst specifies the form of the relationship and what variables are included.
If \\(\\mathbf{X}\\) is the [*matrix*](https://en.wikipedia.org/w/index.php?search=matrix) of our \\(p\\) explanatory variables, we can think of this as a function \\(f: \\mathbb{R}^p \\rightarrow (0,1\)\\) that returns a probability \\(\\pi \\in (0,1\)\\).
However, since the actual values of the response variable \\(y\\) are binary (i.e., in \\(\\{0,1\\}\\)), we can implement rules \\(g: (0,1\) \\rightarrow \\{0,1\\}\\) that map values of \\(\\pi\\) to either 0 or 1\.
Thus, we can use a logistic regression model as the core of a function \\(h: \\mathbb{R}^p \\rightarrow \\{0,1\\}\\), such that \\(h(\\mathbf{X}) \= g(f(\\mathbf{X}))\\) is always either \\(0\\) or \\(1\\).
Such models are known as [*classifiers*](https://en.wikipedia.org/w/index.php?search=classifiers).
More generally, whereas regression models for quantitative response variables return real numbers, models for categorical response variables are called classifiers.
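As a minimal sketch of the rule \\(g\\), the code below thresholds a vector of fitted probabilities at 0\.5; the probabilities themselves are made up purely for illustration.

```
# A sketch of g: map probabilities in (0,1) to class labels {0, 1}.
# The probabilities are invented for illustration only.
pi_hat <- c(0.10, 0.48, 0.52, 0.95)
g <- function(prob, threshold = 0.5) {
  ifelse(prob > threshold, 1, 0)
}
g(pi_hat)
# [1] 0 0 1 1
```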
### 10\.2\.1 Example: High\-earners in the 1994 United States Census
A marketing analyst might be interested in finding factors that can be used
to predict whether a potential customer is a high\-earner.
The 1994 [*United States Census*](https://en.wikipedia.org/w/index.php?search=United%20States%20Census) provides information that can inform such a model,
with records from 32,561 adults that include a binary variable indicating whether each person makes greater or less than $50,000 (nearly $90,000 in 2020 after accounting for inflation).
We will use the indicator of high income as our response variable.
```
library(tidyverse)
library(mdsr)
url <-
"http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
census <- read_csv(
url,
col_names = c(
"age", "workclass", "fnlwgt", "education",
"education_1", "marital_status", "occupation", "relationship",
"race", "sex", "capital_gain", "capital_loss", "hours_per_week",
"native_country", "income"
)
) %>%
mutate(income = factor(income))
glimpse(census)
```
```
Rows: 32,561
Columns: 15
$ age <dbl> 39, 50, 38, 53, 28, 37, 49, 52, 31, 42, 37, 30, 23,…
$ workclass <chr> "State-gov", "Self-emp-not-inc", "Private", "Privat…
$ fnlwgt <dbl> 77516, 83311, 215646, 234721, 338409, 284582, 16018…
$ education <chr> "Bachelors", "Bachelors", "HS-grad", "11th", "Bache…
$ education_1 <dbl> 13, 13, 9, 7, 13, 14, 5, 9, 14, 13, 10, 13, 13, 12,…
$ marital_status <chr> "Never-married", "Married-civ-spouse", "Divorced", …
$ occupation <chr> "Adm-clerical", "Exec-managerial", "Handlers-cleane…
$ relationship <chr> "Not-in-family", "Husband", "Not-in-family", "Husba…
$ race <chr> "White", "White", "White", "Black", "Black", "White…
$ sex <chr> "Male", "Male", "Male", "Male", "Female", "Female",…
$ capital_gain <dbl> 2174, 0, 0, 0, 0, 0, 0, 0, 14084, 5178, 0, 0, 0, 0,…
$ capital_loss <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
$ hours_per_week <dbl> 40, 13, 40, 40, 40, 40, 16, 45, 50, 40, 80, 40, 30,…
$ native_country <chr> "United-States", "United-States", "United-States", …
$ income <fct> <=50K, <=50K, <=50K, <=50K, <=50K, <=50K, <=50K, >5…
```
Throughout this chapter, we will use the **tidymodels** package to streamline our computations.
The **tidymodels** package is really a collection of packages, similar to the **tidyverse**. The workhorse package for model fitting is called **parsnip**, while the model evaluation metrics are provided by the **yardstick** package.
For reasons that we will discuss later (in Section [10\.3\.2](ch-modeling.html#sec:cv)), we will first separate our data set into two pieces by sampling the rows at random.
A sample of 80% of the rows will become the training data set, with the remaining 20% set aside as the testing (or “hold\-out”) data set.
The `initial_split()` function divides the data, while the `training()` and `testing()` functions recover the two smaller data sets.
```
library(tidymodels)
set.seed(364)
n <- nrow(census)
census_parts <- census %>%
initial_split(prop = 0.8)
train <- census_parts %>%
training()
test <- census_parts %>%
testing()
list(train, test) %>%
map_int(nrow)
```
```
[1] 26048 6513
```
We first compute the observed percentage of high earners in the training set as \\(\\bar{\\pi}\\).
```
pi_bar <- train %>%
count(income) %>%
mutate(pct = n / sum(n)) %>%
filter(income == ">50K") %>%
pull(pct)
pi_bar
```
```
[1] 0.241
```
Note that only about 24% of those in the sample make more than $50k.
#### 10\.2\.1\.1 The null model
Since we know \\(\\bar{\\pi}\\), it follows that the [*accuracy*](https://en.wikipedia.org/w/index.php?search=accuracy) of the [*null model*](https://en.wikipedia.org/w/index.php?search=null%20model) is \\(1 \- \\bar{\\pi}\\), which is about 76%, since we can get that many right by just predicting that everyone makes less than $50k.
```
train %>%
count(income) %>%
mutate(pct = n / sum(n))
```
```
# A tibble: 2 × 3
income n pct
<fct> <int> <dbl>
1 <=50K 19763 0.759
2 >50K 6285 0.241
```
While we can compute the accuracy of the null model with simple arithmetic, when we compare models later, it will be useful to have our null model stored as a model object. We can create such an object using **tidymodels** by specifying a logistic regression model with no explanatory variables. The computational engine is `glm` because `glm()` is the name of the **R** function that actually fits [*generalized linear models*](https://en.wikipedia.org/w/index.php?search=generalized%20linear%20models) (of which logistic regression is a special case).
```
mod_null <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(income ~ 1, data = train)
```
After using the `predict()` function to compute the predicted values, the **yardstick** package will help us compute the accuracy.
```
library(yardstick)
pred <- train %>%
select(income, capital_gain) %>%
bind_cols(
predict(mod_null, new_data = train, type = "class")
) %>%
rename(income_null = .pred_class)
accuracy(pred, income, income_null)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.759
```
Always benchmark your predictive models against a reasonable null model.
Another important tool in verifying a model’s accuracy is called the [*confusion matrix*](https://en.wikipedia.org/w/index.php?search=confusion%20matrix) (really).
Simply put, this is a two\-way table that counts how often our model made the correct prediction.
Note that there are two different types of mistakes that our model can make: predicting a high income when the income was in fact low (a [*Type I error*](https://en.wikipedia.org/w/index.php?search=Type%20I%20error)), and predicting a low income when the income was in fact high (a [*Type II error*](https://en.wikipedia.org/w/index.php?search=Type%20II%20error)).
```
confusion_null <- pred %>%
conf_mat(truth = income, estimate = income_null)
confusion_null
```
```
Truth
Prediction <=50K >50K
<=50K 19763 6285
>50K 0 0
```
Note again that the null model predicts that *everyone* is a low earner, so it makes many Type II errors (false negatives) but no Type I errors (false positives).
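As a quick sketch, we can also tabulate these error types directly with **dplyr**; with the null model we should see only true negatives and false negatives, matching the confusion matrix above (the label text is our own).

```
pred %>%
  mutate(
    result = case_when(
      income == ">50K"  & income_null == ">50K"  ~ "true positive",
      income == "<=50K" & income_null == ">50K"  ~ "false positive (Type I)",
      income == ">50K"  & income_null == "<=50K" ~ "false negative (Type II)",
      TRUE                                       ~ "true negative"
    )
  ) %>%
  count(result)
```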
#### 10\.2\.1\.2 Logistic regression
Beating the null model shouldn’t be hard. Our first attempt will be to employ a simple logistic regression model. First, we’ll fit the model using only one explanatory variable: `capital_gain`. This variable measures the amount of money that each person paid in [*capital gains*](https://en.wikipedia.org/w/index.php?search=capital%20gains) tax. Since capital gains are accrued on assets (e.g., stocks, houses), it stands to reason that people who pay more in capital gains are likely to have more wealth and, similarly, are likely to have high incomes.
In addition, capital gains are directly related to income, since they are a component of total income.
```
mod_log_1 <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(income ~ capital_gain, data = train)
```
Figure [10\.1](ch-modeling.html#fig:log-cap-gains) illustrates how the predicted probability of being a high earner varies in our simple logistic regression model with respect to the amount of capital gains tax paid.
```
train_plus <- train %>%
mutate(high_earner = as.integer(income == ">50K"))
ggplot(train_plus, aes(x = capital_gain, y = high_earner)) +
geom_count(
position = position_jitter(width = 0, height = 0.05),
alpha = 0.5
) +
geom_smooth(
method = "glm", method.args = list(family = "binomial"),
color = "dodgerblue", lty = 2, se = FALSE
) +
geom_hline(aes(yintercept = 0.5), linetype = 3) +
scale_x_log10(labels = scales::dollar)
```
Figure 10\.1: Simple logistic regression model for high\-earner status based on capital gains tax paid.
How accurate is this model?
```
pred <- pred %>%
bind_cols(
predict(mod_log_1, new_data = train, type = "class")
) %>%
rename(income_log_1 = .pred_class)
confusion_log_1 <- pred %>%
conf_mat(truth = income, estimate = income_log_1)
confusion_log_1
```
```
Truth
Prediction <=50K >50K
<=50K 19560 5015
>50K 203 1270
```
```
accuracy(pred, income, income_log_1)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.800
```
In Figure [10\.2](ch-modeling.html#fig:autoplot-null-confusion), we graphically compare the confusion matrices of the null model and the simple logistic regression model. The true positives of the latter model are an important improvement.
```
autoplot(confusion_null) +
geom_label(
aes(
x = (xmax + xmin) / 2,
y = (ymax + ymin) / 2,
label = c("TN", "FP", "FN", "TP")
)
)
autoplot(confusion_log_1) +
geom_label(
aes(
x = (xmax + xmin) / 2,
y = (ymax + ymin) / 2,
label = c("TN", "FP", "FN", "TP")
)
)
```
Figure 10\.2: Visual summary of the predictive accuracy of the null model (left) versus the logistic regression model with one explanatory variable (right). The null model never predicts a positive.
Using `capital_gain` as a single explanatory variable improved the model’s accuracy on the training data to 80%, a notable increase over the null model’s accuracy of 75\.9%.
We can easily interpret the rule generated by the logistic regression model here, since there is only a single predictor.
```
broom::tidy(mod_log_1)
```
```
# A tibble: 2 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -1.37 0.0159 -86.1 0
2 capital_gain 0.000335 0.00000962 34.8 3.57e-265
```
Recall that logistic regression uses the [*logit*](https://en.wikipedia.org/w/index.php?search=logit) function to map predicted probabilities to the whole real line.
We can invert this function to find the value of capital gains that would yield
a predicted value of 0\.5\.
\\\[
logit(\\hat{\\pi}) \= \\log{ \\left( \\frac{\\hat{\\pi}}{1\-\\hat{\\pi}} \\right) } \= \\beta\_0 \+ \\beta\_1 \\cdot capital\\\_gain
\\]
Plugging \\(\\hat{\\pi} \= 0\.5\\), \\(\\beta\_0 \=\\) \-1\.373, and \\(\\beta\_1 \=\\) 0\.000335 into this equation and solving for \\(capital\\\_gain\\) gives the threshold.
The answer in this case is \\(\-\\beta\_0 / \\beta\_1\\), or $4,102\.
We can confirm this by inspecting the predicted probabilities: the classification shifts from `<=50K` to `>50K` as the value of `capital_gain` jumps from $4,101 to $4,386\.
For these observations, the predicted probabilities jump from 0\.494 to 0\.517\.
```
income_probs <- pred %>%
select(income, income_log_1, capital_gain) %>%
bind_cols(
predict(mod_log_1, new_data = train, type = "prob")
)
income_probs %>%
rename(rich_prob = `.pred_>50K`) %>%
distinct() %>%
filter(abs(rich_prob - 0.5) < 0.02) %>%
arrange(desc(rich_prob))
```
```
# A tibble: 5 × 5
income income_log_1 capital_gain `.pred_<=50K` rich_prob
<fct> <fct> <dbl> <dbl> <dbl>
1 <=50K <=50K 4101 0.500 0.500
2 <=50K <=50K 4064 0.503 0.497
3 <=50K <=50K 3942 0.513 0.487
4 <=50K <=50K 3908 0.516 0.484
5 <=50K <=50K 3887 0.518 0.482
```
Thus, the model says to call a taxpayer high income if their capital gains are above $4,102\.
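As a small sketch, we can recover this threshold directly from the fitted coefficients rather than reading it off the table; the object names below are our own.

```
# Decision boundary: the value of capital_gain at which the predicted
# probability equals 0.5, i.e., -beta_0 / beta_1.
coefs <- broom::tidy(mod_log_1)
beta_0 <- coefs$estimate[1]
beta_1 <- coefs$estimate[2]
-beta_0 / beta_1
# approximately 4102
```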
But why should we restrict our model to one explanatory variable? Let’s fit a more sophisticated model that incorporates the other explanatory variables.
```
mod_log_all <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(
income ~ age + workclass + education + marital_status +
occupation + relationship + race + sex +
capital_gain + capital_loss + hours_per_week,
data = train
)
pred <- pred %>%
bind_cols(
predict(mod_log_all, new_data = train, type = "class")
) %>%
rename(income_log_all = .pred_class)
pred %>%
conf_mat(truth = income, estimate = income_log_all)
```
```
Truth
Prediction <=50K >50K
<=50K 18395 2493
>50K 1368 3792
```
```
accuracy(pred, income, income_log_all)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.852
```
Not surprisingly, by including more explanatory variables, we have improved the predictive accuracy on the training set.
Unfortunately, predictive modeling is not quite this easy.
In the next section, we’ll see where our naïve approach can fail.
10\.3 Evaluating models
-----------------------
How do you know if your model is a good one? In this section, we outline some of the key concepts in model evaluation—a critical step in predictive analytics.
### 10\.3\.1 Bias\-variance trade\-off
We want to have models that minimize both [*bias*](https://en.wikipedia.org/w/index.php?search=bias) and [*variance*](https://en.wikipedia.org/w/index.php?search=variance), but to some extent these are mutually exclusive goals. A complicated model will have less bias, but will in general have higher variance. A simple model can reduce variance but at the cost of increased bias. The optimal balance between bias and variance depends on the purpose for which the model is constructed (e.g., prediction vs. description of causal relationships) and the system being modeled. One helpful class of techniques—called [*regularization*](https://en.wikipedia.org/w/index.php?search=regularization)—provides model architectures that can balance bias and variance in a graduated way. Examples of regularization techniques are [*ridge regression*](https://en.wikipedia.org/w/index.php?search=ridge%20regression) and the [*lasso*](https://en.wikipedia.org/w/index.php?search=lasso) (see Section [11\.5](ch-learningI.html#sec:regularization)).
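As a preview of Section [11\.5](ch-learningI.html#sec:regularization), the sketch below fits a lasso\-penalized logistic regression using the `glmnet` engine in **parsnip**. The choice of predictors and the penalty value are arbitrary here (in practice the penalty would be tuned), and the **glmnet** package must be installed.

```
# A lasso-penalized logistic regression (mixture = 1).
# The penalty value is arbitrary; it would normally be tuned.
mod_lasso <- logistic_reg(penalty = 0.001, mixture = 1) %>%
  set_engine("glmnet") %>%
  fit(income ~ capital_gain + age + hours_per_week, data = train)
```

Setting `mixture = 0` instead would correspond to ridge regression.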
### 10\.3\.2 Cross\-validation
A vexing and seductive trap that modelers sometimes fall into is [*overfitting*](https://en.wikipedia.org/w/index.php?search=overfitting).
Every model discussed in this chapter is [*fit*](https://en.wikipedia.org/w/index.php?search=fit) to a set of data.
That is, given a set of [*training*](https://en.wikipedia.org/w/index.php?search=training) data and the specification for the type of model, each algorithm will determine the optimal set of parameters for that model and those data.
However, if the model works well on those training data, but not so well on a set of [*testing*](https://en.wikipedia.org/w/index.php?search=testing) data—that the model has never seen—then the model is said to be [*overfitting*](https://en.wikipedia.org/w/index.php?search=overfitting).
Perhaps the most elementary mistake in predictive analytics is to overfit your model to the training data, only to see it later perform miserably on the testing set.
In predictive analytics, data sets are often divided into two sets:
* **Training**: The set of data on which you build your model
* **Testing**: After your model is built, this is the set used to test it by evaluating it against data that it has not previously seen.
For example, in this chapter we set aside 80% of the observations to use as a training set, but held back another 20% for testing.
The 80/20 scheme we have employed in this chapter is among the simplest possible schemes, but there are other possibilities.
Perhaps a 90/10 or a 75/25 split would be a better option.
The goal is to put as much data as possible in the training set so that the model can be fit well, while keeping enough observations in the test set to assess it properly.
An alternative approach that addresses this trade\-off is [*cross\-validation*](https://en.wikipedia.org/w/index.php?search=cross-validation).
To perform a 2\-fold cross\-validation:
* Randomly separate your data (by rows) into two data sets with the same number of observations. Let’s call them \\(X\_1\\) and \\(X\_2\\).
* Build your model on the data in \\(X\_1\\), and then run the data in \\(X\_2\\) through your model. How well does it perform? Just because your model performs well on \\(X\_1\\) (this is known as [*in\-sample*](https://en.wikipedia.org/w/index.php?search=in-sample) testing) does not imply that it will perform as well on the data in \\(X\_2\\) ([*out\-of\-sample*](https://en.wikipedia.org/w/index.php?search=out-of-sample) testing).
* Now reverse the roles of \\(X\_1\\) and \\(X\_2\\), so that the data in \\(X\_2\\) is used for training, and the data in \\(X\_1\\) is used for testing.
* If your first model is overfit, then it will likely not perform as well on the second set of data.
More complex schemes for cross\-validating are possible. \\(k\\)\-fold cross\-validation is the generalization of 2\-fold cross validation, in which the data are separated into \\(k\\) equal\-sized partitions, and each of the \\(k\\) partitions is chosen to be the testing set once, with the other \\(k\-1\\) partitions used for training.
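The following is a manual sketch of \\(k\\)\-fold cross\-validation for the single\-variable logistic regression model, applied to the full `census` data (since cross\-validation replaces the single train/test split). The number of folds (5) and the fold assignment are our own choices.

```
# Manual 5-fold cross-validation of the one-variable model.
set.seed(364)
k <- 5
census_folds <- census %>%
  mutate(fold = sample(rep(1:k, length.out = n())))

cv_accuracy <- map_dbl(1:k, function(i) {
  cv_train <- filter(census_folds, fold != i)
  cv_test  <- filter(census_folds, fold == i)
  mod <- logistic_reg(mode = "classification") %>%
    set_engine("glm") %>%
    fit(income ~ capital_gain, data = cv_train)
  y_hat <- predict(mod, new_data = cv_test, type = "class") %>%
    pull(.pred_class)
  accuracy_vec(truth = pull(cv_test, income), estimate = y_hat)
})
cv_accuracy
mean(cv_accuracy)
```

Each element of `cv_accuracy` is an out\-of\-sample accuracy estimate; averaging them gives a more stable assessment than a single 80/20 split. The **rsample** package (part of **tidymodels**) automates this bookkeeping.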
### 10\.3\.3 Confusion matrices and ROC curves
For classifiers, we have already seen the confusion matrix, which is a common way to assess the effectiveness of a classifier.
Recall that each of the classifiers we have discussed in this chapter are capable of producing not only a binary class label, but also the predicted probability of belonging to either class.
Rounding the probabilities in the usual way (using 0\.5 as a threshold) may not be a good idea, since the average probability might not be anywhere near 0\.5, and thus we could have far too many predictions in one class.
For example, in the `census` data, only about 24% of the people in the training set had income above $50,000\.
Thus, a properly calibrated predictive model should predict that about 24% of the people have incomes above $50,000\. Consider the raw probabilities returned by the simple logistic regression model.
```
head(income_probs)
```
```
# A tibble: 6 × 5
income income_log_1 capital_gain `.pred_<=50K` `.pred_>50K`
<fct> <fct> <dbl> <dbl> <dbl>
1 <=50K <=50K 0 0.798 0.202
2 >50K <=50K 0 0.798 0.202
3 <=50K <=50K 0 0.798 0.202
4 <=50K <=50K 0 0.798 0.202
5 <=50K <=50K 0 0.798 0.202
6 <=50K <=50K 0 0.798 0.202
```
If we round these using a threshold of 0\.5, then only about 6% are predicted to have high incomes.
Note that here we are able to work with the unfortunate characters in the variable names by wrapping them with backticks. Of course, we could also rename them.
```
income_probs %>%
group_by(rich = `.pred_>50K` > 0.5) %>%
count() %>%
mutate(pct = n / nrow(income_probs))
```
```
# A tibble: 2 × 3
# Groups: rich [2]
rich n pct
<lgl> <int> <dbl>
1 FALSE 24575 0.943
2 TRUE 1473 0.0565
```
A better alternative would be to use the overall observed percentage (i.e., 24%) as a threshold instead:
```
income_probs %>%
group_by(rich = `.pred_>50K` > pi_bar) %>%
count() %>%
mutate(pct = n / nrow(income_probs))
```
```
# A tibble: 2 × 3
# Groups: rich [2]
rich n pct
<lgl> <int> <dbl>
1 FALSE 23930 0.919
2 TRUE 2118 0.0813
```
This is an improvement, but a more principled approach to assessing the quality of a classifier is a [*receiver operating characteristic*](https://en.wikipedia.org/w/index.php?search=receiver%20operating%20characteristic) (ROC) curve. This considers all possible threshold values for rounding, and graphically displays the trade\-off between [*sensitivity*](https://en.wikipedia.org/w/index.php?search=sensitivity) (the true positive rate) and [*specificity*](https://en.wikipedia.org/w/index.php?search=specificity) (the true negative rate). What is actually plotted is the true positive rate as a function of the false positive rate.
ROC curves are common in machine learning and operations research as well
as assessment of test characteristics and medical imaging. They can be constructed in **R** using the **yardstick** package. Note that ROC curves operate on the fitted probabilities in \\((0,1\)\\).
```
roc <- pred %>%
mutate(estimate = pull(income_probs, `.pred_>50K`)) %>%
roc_curve(truth = income, estimate, event_level = "second") %>%
autoplot()
```
Note that while the `roc_curve()` function performs the calculations necessary to draw the ROC curve, the `autoplot()` function is the one that actually returns a **ggplot2** object.
In Figure [10\.3](ch-modeling.html#fig:roc-log) the upper\-left corner represents a perfect classifier, which would have a true positive rate of 1 and a false positive rate of 0\. On the other hand, a random classifier would lie along the diagonal, since it would be equally likely to make either kind of mistake.
The simple logistic regression model that we used had the following true and false positive rates, which are indicated in Figure [10\.3](ch-modeling.html#fig:roc-log) by the black dot.
A number of other metrics are available.
```
metrics <- pred %>%
conf_mat(income, income_log_1) %>%
summary(event_level = "second")
metrics
```
```
# A tibble: 13 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.800
2 kap binary 0.260
3 sens binary 0.202
4 spec binary 0.990
5 ppv binary 0.862
6 npv binary 0.796
7 mcc binary 0.355
8 j_index binary 0.192
9 bal_accuracy binary 0.596
10 detection_prevalence binary 0.0565
11 precision binary 0.862
12 recall binary 0.202
13 f_meas binary 0.327
```
```
roc_mod <- metrics %>%
filter(.metric %in% c("sens", "spec")) %>%
pivot_wider(-.estimator, names_from = .metric, values_from = .estimate)
roc +
geom_point(
data = roc_mod, size = 3,
aes(x = 1 - spec, y = sens)
)
```
Figure 10\.3: ROC curve for the simple logistic regression model.
Depending on our tolerance for false positives vs. false negatives, we could modify the way that our logistic regression model rounds probabilities, which would have the effect of moving the black dot in Figure [10\.3](ch-modeling.html#fig:roc-log) along the curve.
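As a sketch, we can re\-classify the training observations using \\(\\bar{\\pi}\\) (about 0\.24) rather than 0\.5 as the threshold and recompute sensitivity and specificity; the new column name below is our own. Lowering the threshold trades specificity for sensitivity, moving the point along the curve.

```
# Re-classify using pi_bar instead of 0.5 as the threshold.
pred_lower <- income_probs %>%
  mutate(
    income_log_1_low = factor(
      if_else(`.pred_>50K` > pi_bar, ">50K", "<=50K"),
      levels = levels(income)
    )
  )
pred_lower %>%
  summarize(
    sens = sens_vec(income, income_log_1_low, event_level = "second"),
    spec = spec_vec(income, income_log_1_low, event_level = "second")
  )
```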
### 10\.3\.4 Measuring prediction error for quantitative responses
For evaluating models with a quantitative response, there are a variety of criteria that are commonly used. Here we outline four of the simplest and most common; a short numerical sketch follows the list. The following presumes a vector of real observations denoted \\(y\\) and a corresponding vector of predictions \\(\\hat{y}\\):
* **RMSE**: [*Root\-mean\-square error*](https://en.wikipedia.org/w/index.php?search=Root-mean-square%20error) is probably the most common:
\\\[
RMSE(y, \\hat{y}) \= \\sqrt{\\frac{1}{n} \\sum\_{i\=1}^n (y\_i \- \\hat{y}\_i)^2} \\,.
\\]
The RMSE has several desirable properties. Namely, it is in the same units as the response variable \\(y\\), it captures both overestimates and underestimates equally, and it penalizes large misses heavily.
* **MAE**: [*Mean absolute error*](https://en.wikipedia.org/w/index.php?search=Mean%20absolute%20error) is similar to the RMSE, but does not penalize large misses as heavily, due to the replacement of the squared term by an absolute value:
\\\[
MAE(y, \\hat{y}) \= \\frac{1}{n} \\sum\_{i\=1}^n \|y\_i \- \\hat{y}\_i\| \\,.
\\]
* **Correlation**: The previous two methods require that the units and scale of the predictions \\(\\hat{y}\\) are the same as the response variable \\(y\\).
While this is of course necessary for accurate predictions, some predictive models merely want to track the trends in the response.
In many such cases the [*correlation*](https://en.wikipedia.org/w/index.php?search=correlation) between \\(y\\) and \\(\\hat{y}\\) may suffice.
In addition to the usual Pearson product\-moment correlation (measure of linear association), statistics related to [*rank correlation*](https://en.wikipedia.org/w/index.php?search=rank%20correlation) may be useful.
That is, instead of trying to minimize \\(y \- \\hat{y}\\), it might be enough to make sure that the \\(\\hat{y}\_i\\)’s are in the same relative order as the \\(y\_i\\)’s.
Popular measures of rank correlation include Spearman’s \\(\\rho\\) and Kendall’s \\(\\tau\\).
* **Coefficient of determination**: (\\(R^2\\)) The [*coefficient of determination*](https://en.wikipedia.org/w/index.php?search=coefficient%20of%20determination) describes what proportion of variability in the outcome is explained by the model.
It is measured on a scale of \\(\[0,1]\\), with 1 indicating a perfect match between \\(y\\) and \\(\\hat{y}\\).
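The following short sketch computes these quantities for a toy pair of vectors, both by hand and with the corresponding vector helpers from **yardstick**; the numbers in `y` and `y_hat` are made up purely for illustration.

```
# Toy response and predictions, invented for illustration.
y     <- c(3.1, 2.4, 5.8, 4.0, 6.2)
y_hat <- c(2.9, 2.0, 6.1, 4.4, 5.8)

sqrt(mean((y - y_hat)^2))           # RMSE by hand
mean(abs(y - y_hat))                # MAE by hand
cor(y, y_hat)                       # Pearson correlation
cor(y, y_hat, method = "spearman")  # rank correlation

# The same ideas via yardstick vector functions
rmse_vec(truth = y, estimate = y_hat)
mae_vec(truth = y, estimate = y_hat)
rsq_vec(truth = y, estimate = y_hat)   # R-squared (squared correlation)
```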
### 10\.3\.5 Example: Evaluation of income models
Recall that we separated the 32,561 observations in the `census` data set into a training set that contained 80% of the observations and a testing set that contained the remaining 20%.
Since the separation was done by selecting rows uniformly at random, and the number of observations was fairly large, it seems likely that both the training and testing set will contain similar information.
For example, the distribution of `capital_gain` is similar in both the testing and training sets.
Nevertheless, it is worth formally testing the performance of our models on both sets.
```
train %>%
skim(capital_gain)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 capital_gain 26048 0 1079. 7406. 0 0 0 0 99999
```
```
test %>%
skim(capital_gain)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 capital_gain 6513 0 1073. 7300. 0 0 0 0 99999
```
We note that at least three quarters of both samples reported no capital gains.
To do this, we build a data frame that contains an identifier for each of our three models, as well as a list\-column with the model objects.
```
mods <- tibble(
type = c("null", "log_1", "log_all"),
mod = list(mod_null, mod_log_1, mod_log_all)
)
```
We can iterate through the list of models and apply the `predict()` method to each object, using both the testing and training sets.
```
mods <- mods %>%
mutate(
y_train = list(pull(train, income)),
y_test = list(pull(test, income)),
y_hat_train = map(
mod,
~pull(predict(.x, new_data = train, type = "class"), .pred_class)
),
y_hat_test = map(
mod,
~pull(predict(.x, new_data = test, type = "class"), .pred_class)
)
)
mods
```
```
# A tibble: 3 × 6
type mod y_train y_test y_hat_train y_hat_test
<chr> <list> <list> <list> <list> <list>
1 null <fit[+]> <fct [26,048]> <fct [6,513]> <fct [26,048]> <fct [6,513]>
2 log_1 <fit[+]> <fct [26,048]> <fct [6,513]> <fct [26,048]> <fct [6,513]>
3 log_all <fit[+]> <fct [26,048]> <fct [6,513]> <fct [26,048]> <fct [6,513]>
```
Now that we have the predictions for each model, we just need to compare them to the truth (`y`) and tally the results. We can do this using the `map2_dbl()` function from the **purrr** package.
```
mods <- mods %>%
mutate(
accuracy_train = map2_dbl(y_train, y_hat_train, accuracy_vec),
accuracy_test = map2_dbl(y_test, y_hat_test, accuracy_vec),
sens_test =
map2_dbl(y_test, y_hat_test, sens_vec, event_level = "second"),
spec_test =
map2_dbl(y_test, y_hat_test, spec_vec, event_level = "second")
)
```
Table 10\.1: Model accuracy measures for the income model.
| type | accuracy\_train | accuracy\_test | sens\_test | spec\_test |
| --- | --- | --- | --- | --- |
| log\_all | 0\.852 | 0\.849 | 0\.598 | 0\.928 |
| log\_1 | 0\.800 | 0\.803 | 0\.204 | 0\.991 |
| null | 0\.759 | 0\.761 | 0\.000 | 1\.000 |
Table [10\.1](ch-modeling.html#tab:accuracytest) displays a number of model accuracy measures.
Note that each model performs slightly worse on the testing set than it did on the training set.
As expected, the null model has a sensitivity of 0 and a specificity of 1, because it always makes the same prediction.
While the model that includes all of the variables is slightly less specific than the single explanatory variable model, it is much more sensitive.
In this case, we should probably conclude that the `log_all` model is the most likely to be useful.
In Figure [10\.4](ch-modeling.html#fig:roc-log-compare), we compare the ROC curves for all census models on the testing data set. Some data wrangling is necessary before we can gather the information to make these curves.
```
mods <- mods %>%
mutate(
y_hat_prob_test = map(
mod,
~pull(predict(.x, new_data = test, type = "prob"), `.pred_>50K`)
),
type = fct_reorder(type, sens_test, .desc = TRUE)
)
```
```
mods %>%
select(type, y_test, y_hat_prob_test) %>%
unnest(cols = c(y_test, y_hat_prob_test)) %>%
group_by(type) %>%
roc_curve(truth = y_test, y_hat_prob_test, event_level = "second") %>%
autoplot() +
geom_point(
data = mods,
aes(x = 1 - spec_test, y = sens_test, color = type),
size = 3
) +
scale_color_brewer("Model", palette = "Set2")
```
Figure 10\.4: Comparison of ROC curves across three logistic regression models on the Census testing data. The null model has a true positive rate of zero and lies along the diagonal. The full model is the best overall performer, as its curve lies furthest from the diagonal.
10\.4 Extended example: Who has diabetes?
-----------------------------------------
Consider the relationship between age and [*diabetes mellitus*](https://en.wikipedia.org/w/index.php?search=diabetes%20mellitus), a group of metabolic diseases characterized by high blood sugar levels.
As with many diseases, the risk of contracting diabetes increases with age and is associated with many other factors.
Age does not suggest a way to avoid diabetes: there is no way for you to change your age.
You can, however, change things like diet, physical fitness, etc.
Knowing what is predictive of diabetes can be helpful in practice, for instance, to design an efficient screening program to test people for the disease.
Let’s start simply. What is the relationship between age, body\-mass index (BMI), and diabetes for adults surveyed in `NHANES`?
Note that the overall rate of diabetes is relatively low.
```
library(NHANES)
people <- NHANES %>%
select(Age, Gender, Diabetes, BMI, HHIncome, PhysActive) %>%
drop_na()
glimpse(people)
```
```
Rows: 7,555
Columns: 6
$ Age <int> 34, 34, 34, 49, 45, 45, 45, 66, 58, 54, 58, 50, 33, 60,…
$ Gender <fct> male, male, male, female, female, female, female, male,…
$ Diabetes <fct> No, No, No, No, No, No, No, No, No, No, No, No, No, No,…
$ BMI <dbl> 32.2, 32.2, 32.2, 30.6, 27.2, 27.2, 27.2, 23.7, 23.7, 2…
$ HHIncome <fct> 25000-34999, 25000-34999, 25000-34999, 35000-44999, 750…
$ PhysActive <fct> No, No, No, No, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes,…
```
```
people %>%
group_by(Diabetes) %>%
count() %>%
mutate(pct = n / nrow(people))
```
```
# A tibble: 2 × 3
# Groups: Diabetes [2]
Diabetes n pct
<fct> <int> <dbl>
1 No 6871 0.909
2 Yes 684 0.0905
```
We can visualize any model.
In this case, we will tile the \\((Age, BMI)\\)\-plane with a fine grid of 10,000 points.
```
library(modelr)
num_points <- 100
fake_grid <- data_grid(
people,
Age = seq_range(Age, num_points),
BMI = seq_range(BMI, num_points)
)
```
Next, we will evaluate each of our four models on each grid point, taking care to retrieve not the classification itself, but the probability of having diabetes.
The null model considers no explanatory variables.
The next two models consider only age or only BMI, while the last model considers both.
```
dmod_null <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(Diabetes ~ 1, data = people)
dmod_log_1 <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(Diabetes ~ Age, data = people)
dmod_log_2 <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(Diabetes ~ BMI, data = people)
dmod_log_12 <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(Diabetes ~ Age + BMI, data = people)
bmi_mods <- tibble(
type = factor(
c("Null", "Logistic (Age)", "Logistic (BMI)", "Logistic (Age, BMI)")
),
mod = list(dmod_null, dmod_log_1, dmod_log_2, dmod_log_12),
y_hat = map(mod, predict, new_data = fake_grid, type = "prob")
)
```
Next, we add the grid data (`X`), and then use `map2()` to combine the predictions (`y_hat`) with the grid data.
```
bmi_mods <- bmi_mods %>%
mutate(
X = list(fake_grid),
yX = map2(y_hat, X, bind_cols)
)
```
Finally, we use `unnest()` to stretch the data frame out.
We now have a prediction at each of our 10,000 grid points for each of our four models.
```
res <- bmi_mods %>%
select(type, yX) %>%
unnest(cols = yX)
res
```
```
# A tibble: 40,000 × 5
type .pred_No .pred_Yes Age BMI
<fct> <dbl> <dbl> <dbl> <dbl>
1 Null 0.909 0.0905 12 13.3
2 Null 0.909 0.0905 12 14.0
3 Null 0.909 0.0905 12 14.7
4 Null 0.909 0.0905 12 15.4
5 Null 0.909 0.0905 12 16.0
6 Null 0.909 0.0905 12 16.7
7 Null 0.909 0.0905 12 17.4
8 Null 0.909 0.0905 12 18.1
9 Null 0.909 0.0905 12 18.8
10 Null 0.909 0.0905 12 19.5
# … with 39,990 more rows
```
Figure [10\.5](ch-modeling.html#fig:mod-log-compare) illustrates each model in the data space. Whereas the null model predicts the probability of diabetes to be constant irrespective of age and BMI, including age (BMI) as an explanatory variable allows the predicted probability to vary in the horizontal (vertical) direction.
Older patients and those with larger body mass have a higher probability of having diabetes.
Having both variables as covariates allows the probability to vary with respect to both age and BMI.
```
ggplot(data = res, aes(x = Age, y = BMI)) +
geom_tile(aes(fill = .pred_Yes), color = NA) +
geom_count(
data = people,
aes(color = Diabetes), alpha = 0.4
) +
scale_fill_gradient("Prob of\nDiabetes", low = "white", high = "red") +
scale_color_manual(values = c("gold", "black")) +
scale_size(range = c(0, 2)) +
scale_x_continuous(expand = c(0.02, 0)) +
scale_y_continuous(expand = c(0.02, 0)) +
facet_wrap(~fct_rev(type))
```
Figure 10\.5: Comparison of logistic regression models in the data space. Note the greater flexibility as more variables are introduced.
10\.5 Further resources
-----------------------
The **tidymodels** package and documentation contains many vignettes[15](#fn15) that go into further detail on how the package can be used.
10\.6 Exercises
---------------
**Problem 1 (Easy)**: In the first example in the chapter, a training dataset of 80% of the rows was created for the Census data.
What would be the tradeoffs of using a 90%/10% split instead?
**Problem 2 (Easy)**: Without using jargon, describe what a receiver operating characteristic (ROC) curve is and why it is important in predictive analytics and machine learning.
**Problem 3 (Medium)**: Investigators in the HELP (Health Evaluation and Linkage to Primary Care) study were interested in modeling the probability of being `homeless` (one or more nights spent on the street or in a shelter in the past six months vs. housed) as a function of age.
1. Generate a confusion matrix for the null model and interpret the result.
2. Fit and interpret a logistic regression model for the probability of being `homeless` as a function of age.
3. What is the predicted probability of being homeless for a 20 year old? For a 40 year old?
4. Generate a confusion matrix for the second model and interpret the result.
**Problem 4 (Medium)**: Investigators in the HELP (Health Evaluation and Linkage to Primary Care) study were interested in modeling associations between demographic factors and a baseline measure of depressive symptoms `cesd`.
They fit a linear regression model using the following predictors: `age`, `sex`, and `homeless` to the `HELPrct` data from the `mosaicData` package.
1. Calculate and interpret the coefficient of determination (\\(R^2\\)) for this model and the null model.
2. Calculate and interpret the root mean squared error for this model and for the null model.
3. Calculate and interpret the mean absolute error (MAE) for this model and the null model.
**Problem 5 (Medium)**: What impact does the random number seed have on our results?
1. Repeat the Census logistic regression model that controlled only for capital gains but using a different random number seed (365 instead of 364\) for the 80%/20% split. Would you expect big differences in the accuracy using the training data? Testing data?
2. Repeat the process using a random number seed of 366\. What do you conclude?
**Problem 6 (Hard)**: Smoking is an important public health concern. Use the `NHANES` data from the `NHANES` package to develop a logistic regression model that identifies predictors of current smoking among those 20 or older. (Hint: note that the `SmokeNow` variable is missing for those who have never smoked: you will need to recode the variable to construct your outcome variable.)
```
library(tidyverse)
library(NHANES)
mosaic::tally(~ SmokeNow + Smoke100, data = filter(NHANES, Age >= 20))
```
```
Smoke100
SmokeNow No Yes
No 0 1745
Yes 0 1466
<NA> 4024 0
```
10\.7 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-modeling.html\#modeling\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-modeling.html#modeling-online-exercises)
No exercises found
---
10\.1 Predictive modeling
-------------------------
The basic goal of predictive modeling is to find a [*function*](https://en.wikipedia.org/w/index.php?search=function) that accurately describes how different measured explanatory variables can be combined to make a prediction about a response variable.
A function represents a relationship between inputs and an output (see Appendix [C](ch-function.html#ch:function)). Outdoor temperature is a function of season: Season is the input; temperature is the output. Length of the day—i.e., how many hours of
daylight—is a function of latitude and day of the year: Latitude and day of the year (e.g., March 22\) are the inputs; day length is the output.
Modeling a person’s risk of developing [*diabetes*](https://en.wikipedia.org/w/index.php?search=diabetes) could also be a function.
We might suspect that age and obesity are likely informative, but how should they be combined?
A bit of **R** syntax will help with defining functions: the [*tilde*](https://en.wikipedia.org/w/index.php?search=tilde). The tilde is used to define what the output variable (or outcome, on the
left\-hand side) is and what the input variables (or predictors, on the right\-hand side) are. You’ll see expressions like this:
```
diabetic ~ age + sex + weight + height
```
Here, the variable `diabetic` is marked as the output, simply because it is on the left of the tilde (`~`). The variables `age`, `sex`, `weight`, and `height` are to be the inputs to the function. You may also see the form `diabetic ~ .` in certain places. The dot to the right of the tilde is a shortcut that means: “use all the available variables (except the output).” The object above has class `formula` in **R**.
There are several different goals that might motivate constructing a function.
* Predict the output given an input. It is February, what will the temperature be? Or on June 15th in [*Northampton, MA*](https://en.wikipedia.org/w/index.php?search=Northampton,%20MA), U.S.A. (latitude 42\.3 deg N), how many hours of daylight will there be?
* Determine which variables are useful inputs. It is obvious from experience that temperature is a function of season. But in less familiar situations, e.g., predicting diabetes, the relevant inputs are uncertain or unknown.
* Generate hypotheses. For a scientist trying to figure out the causes of diabetes, it can be useful to construct a predictive model, then look to see what variables turn out to be related to the risk of developing this disorder. For instance, you might find that diet, age, and blood pressure are risk factors. Socioeconomic status is not a direct cause of diabetes, but it might be that there an association through factors related to the accessibility of health care. That “might be” is a hypothesis, and one that you probably would not have thought of before finding a function relating risk of diabetes to those inputs.
* Understand how a system works. For instance, a reasonable function relating hours of daylight to day\-of\-the\-year and latitude reveals that the northern and southern hemisphere have reversed patterns: Long days in the southern hemisphere will be short days in the northern hemisphere.
Depending on your motivation, the kind of model and the input variables may differ. In understanding how a system works, the variables you use should be related to the actual, causal mechanisms involved, e.g., the genetics of diabetes. For predicting an output, it hardly matters what the causal mechanisms are. Instead, all that’s required is that the inputs are known at a time
*before* the prediction is to be made.
10\.2 Simple classification models
----------------------------------
Classifiers are an important complement to regression models in the fields of machine learning and predictive modeling.
Whereas regression models have a quantitative response variable (and can thus often be visualized as a geometric surface), classification models have a categorical response (and are often visualized as a discrete surface, i.e., a tree).
To reduce cognitive overhead, we will restrict our attention in this chapter to classification models based on logistic regression.
In the next chapter, we will introduce other types of [*classifiers*](https://en.wikipedia.org/w/index.php?search=classifiers).
A [*logistic regression*](https://en.wikipedia.org/w/index.php?search=logistic%20regression) model (see Appendix [E](ch-regression.html#ch:regression)) can take a set of explanatory variables (or features) and convert them into a predicted probability.
In such a model, the analyst specifies the form of the relationship and what variables are included.
If \\(\\mathbf{X}\\) is the [*matrix*](https://en.wikipedia.org/w/index.php?search=matrix) of our \\(p\\) explanatory variables, we can think of this as a function \\(f: \\mathbb{R}^p \\rightarrow (0,1\)\\) that returns a probability \\(\\pi \\in (0,1\)\\).
However, since the actual values of the response variable \\(y\\) are binary (i.e., in \\(\\{0,1\\}\\)), we can implement rules \\(g: (0,1\) \\rightarrow \\{0,1\\}\\) that map values of \\(p\\) to either 0 or 1\.
Thus, we can use a logistic regression model as the core of a function \\(h: \\mathbb{R}^p \\rightarrow \\{0,1\\}\\), such that \\(h(\\mathbf{X}) \= g(f(\\mathbf{X}))\\) is always either \\(0\\) or \\(1\\).
Such models are known as [*classifiers*](https://en.wikipedia.org/w/index.php?search=classifiers).
More generally, whereas regression models for quantitative response variables return real numbers, models for categorical response variables are called classifiers.
### 10\.2\.1 Example: High\-earners in the 1994 United States Census
A marketing analyst might be interested in finding factors that can be used
to predict whether a potential customer is a high\-earner.
The 1994 [*United States Census*](https://en.wikipedia.org/w/index.php?search=United%20States%20Census) provides information that can inform such a model,
with records from 32,561 adults that include a binary variable indicating whether each person makes greater or less than $50,000 (nearly $90,000 in 2020 after accounting for inflation).
We will use the indicator of high income as our response variable.
```
library(tidyverse)
library(mdsr)
url <-
"http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
census <- read_csv(
url,
col_names = c(
"age", "workclass", "fnlwgt", "education",
"education_1", "marital_status", "occupation", "relationship",
"race", "sex", "capital_gain", "capital_loss", "hours_per_week",
"native_country", "income"
)
) %>%
mutate(income = factor(income))
glimpse(census)
```
```
Rows: 32,561
Columns: 15
$ age <dbl> 39, 50, 38, 53, 28, 37, 49, 52, 31, 42, 37, 30, 23,…
$ workclass <chr> "State-gov", "Self-emp-not-inc", "Private", "Privat…
$ fnlwgt <dbl> 77516, 83311, 215646, 234721, 338409, 284582, 16018…
$ education <chr> "Bachelors", "Bachelors", "HS-grad", "11th", "Bache…
$ education_1 <dbl> 13, 13, 9, 7, 13, 14, 5, 9, 14, 13, 10, 13, 13, 12,…
$ marital_status <chr> "Never-married", "Married-civ-spouse", "Divorced", …
$ occupation <chr> "Adm-clerical", "Exec-managerial", "Handlers-cleane…
$ relationship <chr> "Not-in-family", "Husband", "Not-in-family", "Husba…
$ race <chr> "White", "White", "White", "Black", "Black", "White…
$ sex <chr> "Male", "Male", "Male", "Male", "Female", "Female",…
$ capital_gain <dbl> 2174, 0, 0, 0, 0, 0, 0, 0, 14084, 5178, 0, 0, 0, 0,…
$ capital_loss <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
$ hours_per_week <dbl> 40, 13, 40, 40, 40, 40, 16, 45, 50, 40, 80, 40, 30,…
$ native_country <chr> "United-States", "United-States", "United-States", …
$ income <fct> <=50K, <=50K, <=50K, <=50K, <=50K, <=50K, <=50K, >5…
```
Throughout this chapter, we will use the **tidymodels** package to streamline our computations.
The **tidymodels** package is really a collection of packages, similar to the **tidyverse**. The workhorse package for model fitting is called **parsnip**, while the model evaluation metrics are provided by the **yardstick** package.
For reasons that we will discuss later (in Section [10\.3\.2](ch-modeling.html#sec:cv)), we will first separate our data set into two pieces by sampling the rows at random.
A sample of 80% of the rows will become the training data set, with the remaining 20% set aside as the testing (or “hold\-out”) data set.
The `initial_split()` function divides the data, while the `training()` and `testing()` functions recover the two smaller data sets.
```
library(tidymodels)
set.seed(364)
n <- nrow(census)
census_parts <- census %>%
initial_split(prop = 0.8)
train <- census_parts %>%
training()
test <- census_parts %>%
testing()
list(train, test) %>%
map_int(nrow)
```
```
[1] 26048 6513
```
We first compute the observed percentage of high earners in the training set as \\(\\bar{\\pi}\\).
```
pi_bar <- train %>%
count(income) %>%
mutate(pct = n / sum(n)) %>%
filter(income == ">50K") %>%
pull(pct)
pi_bar
```
```
[1] 0.241
```
Note that only about 24% of those in the sample make more than $50k.
#### 10\.2\.1\.1 The null model
Since we know \\(\\bar{\\pi}\\), it follows that the [*accuracy*](https://en.wikipedia.org/w/index.php?search=accuracy) of the [*null model*](https://en.wikipedia.org/w/index.php?search=null%20model) is \\(1 \- \\bar{\\pi}\\), which is about 76%, since we can get that many right by just predicting that everyone makes less than $50k.
```
train %>%
count(income) %>%
mutate(pct = n / sum(n))
```
```
# A tibble: 2 × 3
income n pct
<fct> <int> <dbl>
1 <=50K 19763 0.759
2 >50K 6285 0.241
```
While we can compute the accuracy of the null model with simple arithmetic, when we compare models later, it will be useful to have our null model stored as a model object. We can create such an object using **tidymodels** by specifying a logistic regression model with no explanatory variables. The computational engine is `glm` because `glm()` is the name of the **R** function that actually fits `vocab("generalized linear models")` (of which logistic regression is a special case).
```
mod_null <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(income ~ 1, data = train)
```
After using the `predict()` function to compute the predicted values, the **yardstick** package will help us compute the accuracy.
```
library(yardstick)
pred <- train %>%
select(income, capital_gain) %>%
bind_cols(
predict(mod_null, new_data = train, type = "class")
) %>%
rename(income_null = .pred_class)
accuracy(pred, income, income_null)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.759
```
Always benchmark your predictive models against a reasonable null model.
Another important tool in verifying a model’s accuracy is called the [*confusion matrix*](https://en.wikipedia.org/w/index.php?search=confusion%20matrix) (really).
Simply put, this is a two\-way table that counts how often our model made the correct prediction.
Note that there are two different types of mistakes that our model can make: predicting a high income when the income was in fact low (a [*Type I error*](https://en.wikipedia.org/w/index.php?search=Type%20I%20error)), and predicting a low income when the income was in fact high (a [*Type II error*](https://en.wikipedia.org/w/index.php?search=Type%20II%20error)).
```
confusion_null <- pred %>%
conf_mat(truth = income, estimate = income_null)
confusion_null
```
```
Truth
Prediction <=50K >50K
<=50K 19763 6285
>50K 0 0
```
Note again that the null model predicts that *everyone* is a low earner, so it makes many Type II errors (false negatives) but no Type I errors (false positives).
#### 10\.2\.1\.2 Logistic regression
Beating the null model shouldn’t be hard. Our first attempt will be to employ a simple logistic regression model. First, we’ll fit the model using only one explanatory variable: `capital_gain`. This variable measures the amount of money that each person paid in [*capital gains*](https://en.wikipedia.org/w/index.php?search=capital%20gains) tax. Since capital gains are accrued on assets (e.g., stocks, houses), it stands to reason that people who pay more in capital gains are likely to have more wealth and, similarly, are likely to have high incomes.
In addition, capital gains is directly related since it is a component of total income.
```
mod_log_1 <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(income ~ capital_gain, data = train)
```
Figure [10\.1](ch-modeling.html#fig:log-cap-gains) illustrates how the predicted probability of being a high earner varies in our simple logistic regression model with respect to the amount of capital gains tax paid.
```
train_plus <- train %>%
mutate(high_earner = as.integer(income == ">50K"))
ggplot(train_plus, aes(x = capital_gain, y = high_earner)) +
geom_count(
position = position_jitter(width = 0, height = 0.05),
alpha = 0.5
) +
geom_smooth(
method = "glm", method.args = list(family = "binomial"),
color = "dodgerblue", lty = 2, se = FALSE
) +
geom_hline(aes(yintercept = 0.5), linetype = 3) +
scale_x_log10(labels = scales::dollar)
```
Figure 10\.1: Simple logistic regression model for high\-earner status based on capital gains tax paid.
How accurate is this model?
```
pred <- pred %>%
bind_cols(
predict(mod_log_1, new_data = train, type = "class")
) %>%
rename(income_log_1 = .pred_class)
confusion_log_1 <- pred %>%
conf_mat(truth = income, estimate = income_log_1)
confusion_log_1
```
```
Truth
Prediction <=50K >50K
<=50K 19560 5015
>50K 203 1270
```
```
accuracy(pred, income, income_log_1)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.800
```
In Figure [10\.2](ch-modeling.html#fig:autoplot-null-confusion), we graphically compare the confusion matrices of the null model and the simple logistic regression model. The true positives of the latter model are an important improvement.
```
autoplot(confusion_null) +
geom_label(
aes(
x = (xmax + xmin) / 2,
y = (ymax + ymin) / 2,
label = c("TN", "FP", "FN", "TP")
)
)
autoplot(confusion_log_1) +
geom_label(
aes(
x = (xmax + xmin) / 2,
y = (ymax + ymin) / 2,
label = c("TN", "FP", "FN", "TP")
)
)
```
Figure 10\.2: Visual summary of the predictive accuracy of the null model (left) versus the logistic regression model with one explanatory variable (right). The null model never predicts a positive.
Using `capital_gains` as a single explanatory variable improved the model’s accuracy on the training data to 80%, a notable increase over the null model’s accuracy of 75\.9%.
We can easily interpret the rule generated by the logistic regression model here, since there is only a single predictor.
```
broom::tidy(mod_log_1)
```
```
# A tibble: 2 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -1.37 0.0159 -86.1 0
2 capital_gain 0.000335 0.00000962 34.8 3.57e-265
```
Recall that logistic regression uses the [*logit*](https://en.wikipedia.org/w/index.php?search=logit) function to map predicted probabilities to the whole real line.
We can invert this function to find the value of capital gains that would yield
a predicted value of 0\.5\.
\\\[
logit(\\hat{\\pi}) \= \\log{ \\left( \\frac{\\hat{\\pi}}{1\-\\hat{\\pi}} \\right) } \= \\beta\_0 \+ \\beta\_1 \\cdot capital\\\_gain
\\]
We can invert this function to find the value of capital gains that would yield a predicted value of 0\.5 by plugging in \\(\\hat{\\pi} \= 0\.5\\), \\(\\beta\_0\=\\) \-1\.373, \\(\\beta\_1\=\\) 0\.000335, and solving for \\(capital\\\_gain\\).
The answer in this case is \\(\-\\beta\_0 / \\beta\_1\\), or $4,102\.
We can confirm that when we inspect the predicted probabilities, the classification shifts from `<=50K` to `>50K` as the value of `captial_gain` jumps from $4,101 to $4,386\.
For these observations, the predicted probabilities jump from 0\.494 to 0\.517\.
```
income_probs <- pred %>%
select(income, income_log_1, capital_gain) %>%
bind_cols(
predict(mod_log_1, new_data = train, type = "prob")
)
income_probs %>%
rename(rich_prob = `.pred_>50K`) %>%
distinct() %>%
filter(abs(rich_prob - 0.5) < 0.02) %>%
arrange(desc(rich_prob))
```
```
# A tibble: 5 × 5
income income_log_1 capital_gain `.pred_<=50K` rich_prob
<fct> <fct> <dbl> <dbl> <dbl>
1 <=50K <=50K 4101 0.500 0.500
2 <=50K <=50K 4064 0.503 0.497
3 <=50K <=50K 3942 0.513 0.487
4 <=50K <=50K 3908 0.516 0.484
5 <=50K <=50K 3887 0.518 0.482
```
Thus, the model says to call a taxpayer high income if their capital gains are above $4,102\.
But why should we restrict our model to one explanatory variable? Let’s fit a more sophisticated model that incorporates the other explanatory variables.
```
mod_log_all <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(
income ~ age + workclass + education + marital_status +
occupation + relationship + race + sex +
capital_gain + capital_loss + hours_per_week,
data = train
)
pred <- pred %>%
bind_cols(
predict(mod_log_all, new_data = train, type = "class")
) %>%
rename(income_log_all = .pred_class)
pred %>%
conf_mat(truth = income, estimate = income_log_all)
```
```
Truth
Prediction <=50K >50K
<=50K 18395 2493
>50K 1368 3792
```
```
accuracy(pred, income, income_log_all)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.852
```
Not surprisingly, by including more explanatory variables, we have improved the predictive accuracy on the training set.
Unfortunately, predictive modeling is not quite this easy.
In the next section, we’ll see where our naïve approach can fail.
10\.3 Evaluating models
-----------------------
How do you know if your model is a good one? In this section, we outline some of the key concepts in model evaluation—a critical step in predictive analytics.
### 10\.3\.1 Bias\-variance trade\-off
We want to have models that minimize both [*bias*](https://en.wikipedia.org/w/index.php?search=bias) and [*variance*](https://en.wikipedia.org/w/index.php?search=variance), but to some extent these are mutually exclusive goals. A complicated model will have less bias, but will in general have higher variance. A simple model can reduce variance but at the cost of increased bias. The optimal balance between bias and variance depends on the purpose for which the model is constructed (e.g., prediction vs. description of causal relationships) and the system being modeled. One helpful class of techniques—called [*regularization*](https://en.wikipedia.org/w/index.php?search=regularization)—provides model architectures that can balance bias and variance in a graduated way. Examples of regularization techniques are [*ridge regression*](https://en.wikipedia.org/w/index.php?search=ridge%20regression) and the [*lasso*](https://en.wikipedia.org/w/index.php?search=lasso) (see Section [11\.5](ch-learningI.html#sec:regularization)).
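To give a flavor of what this looks like with **tidymodels**, a penalized (lasso) logistic regression can be specified by adding a `penalty` argument and switching the computational engine. This is only a sketch, assuming the **glmnet** package is available; the penalty value is arbitrary here and would normally be tuned.
```
# Lasso-penalized logistic regression on a few numeric predictors;
# the penalty would normally be chosen by cross-validation.
mod_lasso <- logistic_reg(penalty = 0.001, mixture = 1) %>%
  set_engine("glmnet") %>%
  fit(income ~ capital_gain + capital_loss + hours_per_week + age, data = train)
```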
### 10\.3\.2 Cross\-validation
A vexing and seductive trap that modelers sometimes fall into is [*overfitting*](https://en.wikipedia.org/w/index.php?search=overfitting).
Every model discussed in this chapter is [*fit*](https://en.wikipedia.org/w/index.php?search=fit) to a set of data.
That is, given a set of [*training*](https://en.wikipedia.org/w/index.php?search=training) data and the specification for the type of model, each algorithm will determine the optimal set of parameters for that model and those data.
However, if the model works well on those training data, but not so well on a set of [*testing*](https://en.wikipedia.org/w/index.php?search=testing) data—that the model has never seen—then the model is said to be [*overfitting*](https://en.wikipedia.org/w/index.php?search=overfitting).
Perhaps the most elementary mistake in predictive analytics is to overfit your model to the training data, only to see it later perform miserably on the testing set.
In predictive analytics, data sets are often divided into two sets:
* **Training**: The set of data on which you build your model
* **Testing**: After your model is built, this is the set used to test it by evaluating it against data that it has not previously seen.
For example, in this chapter we set aside 80% of the observations to use as a training set, but held back another 20% for testing.
The 80/20 scheme we have employed in this chapter is among the simplest possible schemes, but there are other possibilities.
Perhaps a 90/10 or a 75/25 split would be a better option.
The goal is to have as much data as possible in the training set so that the model can perform well, while retaining enough observations in the test set to assess it reliably.
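For example, a 90%/10% split could be obtained simply by changing the `prop` argument to `initial_split()` (a sketch; the `census_90_10` object name is our own):
```
# 90% of the rows for training, 10% held out for testing.
census_90_10 <- census %>%
  initial_split(prop = 0.9)
list(training(census_90_10), testing(census_90_10)) %>%
  map_int(nrow)
```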
An alternative approach to combat this problem is [*cross\-validation*](https://en.wikipedia.org/w/index.php?search=cross-validation).
To perform a 2\-fold cross\-validation:
* Randomly separate your data (by rows) into two data sets with the same number of observations. Let’s call them \\(X\_1\\) and \\(X\_2\\).
* Build your model on the data in \\(X\_1\\), and then run the data in \\(X\_2\\) through your model. How well does it perform? Just because your model performs well on \\(X\_1\\) (this is known as [*in\-sample*](https://en.wikipedia.org/w/index.php?search=in-sample) testing) does not imply that it will perform as well on the data in \\(X\_2\\) ([*out\-of\-sample*](https://en.wikipedia.org/w/index.php?search=out-of-sample) testing).
* Now reverse the roles of \\(X\_1\\) and \\(X\_2\\), so that the data in \\(X\_2\\) is used for training, and the data in \\(X\_1\\) is used for testing.
* If your first model is overfit, then it will likely not perform as well on the second set of data.
More complex schemes for cross\-validating are possible. \\(k\\)\-fold cross\-validation is the generalization of 2\-fold cross validation, in which the data are separated into \\(k\\) equal\-sized partitions, and each of the \\(k\\) partitions is chosen to be the testing set once, with the other \\(k\-1\\) partitions used for training.
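With **tidymodels**, a \\(k\\)\-fold scheme can be constructed with `vfold_cv()` from the **rsample** package. The sketch below is our own illustration using the single\-predictor census model: it fits the model on each analysis (training) fold and computes the accuracy on the corresponding assessment (held\-out) fold.
```
# Five-fold cross-validation: each fold serves as the assessment set once.
set.seed(364)
folds <- train %>%
  vfold_cv(v = 5)
folds %>%
  mutate(
    # Fit the single-predictor logistic regression on each analysis fold.
    model = map(
      splits,
      ~fit(logistic_reg() %>% set_engine("glm"),
           income ~ capital_gain, data = analysis(.x))
    ),
    # Out-of-sample accuracy on the corresponding assessment fold.
    accuracy = map2_dbl(
      model, splits,
      ~accuracy_vec(
        truth = pull(assessment(.y), income),
        estimate = pull(predict(.x, new_data = assessment(.y)), .pred_class)
      )
    )
  ) %>%
  summarize(mean_accuracy = mean(accuracy))
```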
### 10\.3\.3 Confusion matrices and ROC curves
For classifiers, we have already seen the confusion matrix, which is a common way to assess the effectiveness of a classifier.
Recall that each of the classifiers we have discussed in this chapter are capable of producing not only a binary class label, but also the predicted probability of belonging to either class.
Rounding the probabilities in the usual way (using 0\.5 as a threshold) may not be a good idea, since the average probability might not be anywhere near 0\.5, and thus we could have far too many predictions in one class.
For example, in the `census` data, only about 24% of the people in the training set had income above $50,000\.
Thus, a properly calibrated predictive model should predict that about 24% of the people have incomes above $50,000\. Consider the raw probabilities returned by the simple logistic regression model.
```
head(income_probs)
```
```
# A tibble: 6 × 5
income income_log_1 capital_gain `.pred_<=50K` `.pred_>50K`
<fct> <fct> <dbl> <dbl> <dbl>
1 <=50K <=50K 0 0.798 0.202
2 >50K <=50K 0 0.798 0.202
3 <=50K <=50K 0 0.798 0.202
4 <=50K <=50K 0 0.798 0.202
5 <=50K <=50K 0 0.798 0.202
6 <=50K <=50K 0 0.798 0.202
```
If we round these using a threshold of 0\.5, then only about 5\.7% are predicted to have high incomes.
Note that here we are able to work with the unfortunate characters in the variable names by wrapping them with backticks. Of course, we could also rename them.
```
income_probs %>%
group_by(rich = `.pred_>50K` > 0.5) %>%
count() %>%
mutate(pct = n / nrow(income_probs))
```
```
# A tibble: 2 × 3
# Groups: rich [2]
rich n pct
<lgl> <int> <dbl>
1 FALSE 24575 0.943
2 TRUE 1473 0.0565
```
A better alternative would be to use the overall observed percentage (i.e., 24%) as a threshold instead:
```
income_probs %>%
group_by(rich = `.pred_>50K` > pi_bar) %>%
count() %>%
mutate(pct = n / nrow(income_probs))
```
```
# A tibble: 2 × 3
# Groups: rich [2]
rich n pct
<lgl> <int> <dbl>
1 FALSE 23930 0.919
2 TRUE 2118 0.0813
```
This is an improvement, but a more principled approach to assessing the quality of a classifier is a [*receiver operating characteristic*](https://en.wikipedia.org/w/index.php?search=receiver%20operating%20characteristic) (ROC) curve. This considers all possible threshold values for rounding, and graphically displays the trade\-off between [*sensitivity*](https://en.wikipedia.org/w/index.php?search=sensitivity) (the true positive rate) and [*specificity*](https://en.wikipedia.org/w/index.php?search=specificity) (the true negative rate). What is actually plotted is the true positive rate as a function of the false positive rate.
ROC curves are common in machine learning and operations research, as well as in the assessment of diagnostic test characteristics and medical imaging. They can be constructed in **R** using the **yardstick** package. Note that ROC curves operate on the fitted probabilities in \\((0,1\)\\).
```
roc <- pred %>%
mutate(estimate = pull(income_probs, `.pred_>50K`)) %>%
roc_curve(truth = income, estimate, event_level = "second") %>%
autoplot()
```
Note that while the `roc_curve()` function performs the calculations necessary to draw the ROC curve, the `autoplot()` function is the one that actually returns a **ggplot2** object.
In Figure [10\.3](ch-modeling.html#fig:roc-log) the upper\-left corner represents a perfect classifier, which would have a true positive rate of 1 and a false positive rate of 0\. On the other hand, a random classifier would lie along the diagonal, since it would be equally likely to make either kind of mistake.
The simple logistic regression model that we used had the following true and false positive rates, which are indicated in Figure [10\.3](ch-modeling.html#fig:roc-log) by the black dot.
A number of other metrics are available.
```
metrics <- pred %>%
conf_mat(income, income_log_1) %>%
summary(event_level = "second")
metrics
```
```
# A tibble: 13 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.800
2 kap binary 0.260
3 sens binary 0.202
4 spec binary 0.990
5 ppv binary 0.862
6 npv binary 0.796
7 mcc binary 0.355
8 j_index binary 0.192
9 bal_accuracy binary 0.596
10 detection_prevalence binary 0.0565
11 precision binary 0.862
12 recall binary 0.202
13 f_meas binary 0.327
```
```
roc_mod <- metrics %>%
filter(.metric %in% c("sens", "spec")) %>%
pivot_wider(-.estimator, names_from = .metric, values_from = .estimate)
roc +
geom_point(
data = roc_mod, size = 3,
aes(x = 1 - spec, y = sens)
)
```
Figure 10\.3: ROC curve for the simple logistic regression model.
Depending on our tolerance for false positives vs. false negatives, we could modify the way that our logistic regression model rounds probabilities, which would have the effect of moving the black dot in Figure [10\.3](ch-modeling.html#fig:roc-log) along the curve.
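For instance, we could reuse \\(\\bar{\\pi}\\) as the threshold, as we did above. The sketch below is ours (the `income_pi` column is introduced purely for illustration) and recomputes sensitivity and specificity under that rule:
```
# Classify as ">50K" whenever the predicted probability exceeds pi_bar
# instead of 0.5, then recompute sensitivity and specificity.
pred_pi <- income_probs %>%
  mutate(
    income_pi = factor(
      if_else(`.pred_>50K` > pi_bar, ">50K", "<=50K"),
      levels = levels(income)
    )
  )
pred_pi %>%
  conf_mat(truth = income, estimate = income_pi) %>%
  summary(event_level = "second") %>%
  filter(.metric %in% c("sens", "spec"))
```
Lowering the threshold in this way trades specificity for sensitivity, moving the operating point up and to the right along the ROC curve.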
### 10\.3\.4 Measuring prediction error for quantitative responses
For evaluating models with a quantitative response, there are a variety of criteria that are commonly used. Here we outline four of the simplest and most common; a brief **yardstick** illustration follows the list. The following presumes a vector of real observations denoted \\(y\\) and a corresponding vector of predictions \\(\\hat{y}\\):
* **RMSE**: [*Root\-mean\-square error*](https://en.wikipedia.org/w/index.php?search=Root-mean-square%20error) is probably the most common:
\\\[
RMSE(y, \\hat{y}) \= \\sqrt{\\frac{1}{n} \\sum\_{i\=1}^n (y\_i \- \\hat{y}\_i)^2} \\,.
\\]
The RMSE has several desirable properties. Namely, it is in the same units as the response variable \\(y\\), it captures both overestimates and underestimates equally, and it penalizes large misses heavily.
* **MAE**: [*Mean absolute error*](https://en.wikipedia.org/w/index.php?search=Mean%20absolute%20error) is similar to the RMSE, but does not penalize large misses as heavily, due to the replacement of the squared term by an absolute value:
\\\[
MAE(y, \\hat{y}) \= \\frac{1}{n} \\sum\_{i\=1}^n \|y\_i \- \\hat{y}\_i\| \\,.
\\]
* **Correlation**: The previous two methods require that the units and scale of the predictions \\(\\hat{y}\\) are the same as the response variable \\(y\\).
While this is of course necessary for accurate predictions, some predictive models merely want to track the trends in the response.
In many such cases the [*correlation*](https://en.wikipedia.org/w/index.php?search=correlation) between \\(y\\) and \\(\\hat{y}\\) may suffice.
In addition to the usual Pearson product\-moment correlation (measure of linear association), statistics related to [*rank correlation*](https://en.wikipedia.org/w/index.php?search=rank%20correlation) may be useful.
That is, instead of trying to minimize \\(y \- \\hat{y}\\), it might be enough to make sure that the \\(\\hat{y}\_i\\)’s are in the same relative order as the \\(y\_i\\)’s.
Popular measures of rank correlation include Spearman’s \\(\\rho\\) and Kendall’s \\(\\tau\\).
* **Coefficient of determination**: (\\(R^2\\)) The [*coefficient of determination*](https://en.wikipedia.org/w/index.php?search=coefficient%20of%20determination) describes what proportion of variability in the outcome is explained by the model.
It is measured on a scale of \\(\[0,1]\\), with 1 indicating a perfect match between \\(y\\) and \\(\\hat{y}\\).
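Each of these criteria can be computed with **yardstick**’s vector functions (or with base **R**). The vectors `y` and `y_hat` below are made up purely for the example:
```
# Hypothetical observed and predicted values.
y <- c(3.1, 2.4, 5.8, 4.0, 4.9)
y_hat <- c(2.9, 2.7, 5.1, 4.4, 5.2)
rmse_vec(truth = y, estimate = y_hat)  # root-mean-square error
mae_vec(truth = y, estimate = y_hat)   # mean absolute error
cor(y, y_hat, method = "spearman")     # rank correlation
rsq_vec(truth = y, estimate = y_hat)   # coefficient of determination
```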
### 10\.3\.5 Example: Evaluation of income models
Recall that we separated the 32,561 observations in the `census` data set into a training set that contained 80% of the observations and a testing set that contained the remaining 20%.
Since the separation was done by selecting rows uniformly at random, and the number of observations was fairly large, it seems likely that both the training and testing set will contain similar information.
For example, the distribution of `capital_gain` is similar in both the testing and training sets.
Nevertheless, it is worth formally testing the performance of our models on both sets.
```
train %>%
skim(capital_gain)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 capital_gain 26048 0 1079. 7406. 0 0 0 0 99999
```
```
test %>%
skim(capital_gain)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75 p100
1 capital_gain 6513 0 1073. 7300. 0 0 0 0 99999
```
We note that at least three quarters of both samples reported no capital gains.
To carry out this assessment, we build a data frame that contains an identifier for each of our three models, as well as a list\-column with the model objects.
```
mods <- tibble(
type = c("null", "log_1", "log_all"),
mod = list(mod_null, mod_log_1, mod_log_all)
)
```
We can iterate through the list of models and apply the `predict()` method to each object, using both the testing and training sets.
```
mods <- mods %>%
mutate(
y_train = list(pull(train, income)),
y_test = list(pull(test, income)),
y_hat_train = map(
mod,
~pull(predict(.x, new_data = train, type = "class"), .pred_class)
),
y_hat_test = map(
mod,
~pull(predict(.x, new_data = test, type = "class"), .pred_class)
)
)
mods
```
```
# A tibble: 3 × 6
type mod y_train y_test y_hat_train y_hat_test
<chr> <list> <list> <list> <list> <list>
1 null <fit[+]> <fct [26,048]> <fct [6,513]> <fct [26,048]> <fct [6,513]>
2 log_1 <fit[+]> <fct [26,048]> <fct [6,513]> <fct [26,048]> <fct [6,513]>
3 log_all <fit[+]> <fct [26,048]> <fct [6,513]> <fct [26,048]> <fct [6,513]>
```
Now that we have the predictions for each model, we just need to compare them to the truth (`y`) and tally the results. We can do this using the `map2_dbl()` function from the **purrr** package.
```
mods <- mods %>%
mutate(
accuracy_train = map2_dbl(y_train, y_hat_train, accuracy_vec),
accuracy_test = map2_dbl(y_test, y_hat_test, accuracy_vec),
sens_test =
map2_dbl(y_test, y_hat_test, sens_vec, event_level = "second"),
spec_test =
map2_dbl(y_test, y_hat_test, spec_vec, event_level = "second")
)
```
Table 10\.1: Model accuracy measures for the income model.
| type | accuracy\_train | accuracy\_test | sens\_test | spec\_test |
| --- | --- | --- | --- | --- |
| log\_all | 0\.852 | 0\.849 | 0\.598 | 0\.928 |
| log\_1 | 0\.800 | 0\.803 | 0\.204 | 0\.991 |
| null | 0\.759 | 0\.761 | 0\.000 | 1\.000 |
Table [10\.1](ch-modeling.html#tab:accuracytest) displays a number of model accuracy measures.
Note that the performance of each model on the testing set is very close to its performance on the training set, with the full model slipping only slightly.
As expected, the null model has a sensitivity of 0 and a specificity of 1, because it always makes the same prediction.
While the model that includes all of the variables is slightly less specific than the single explanatory variable model, it is much more sensitive.
In this case, we should probably conclude that the `log_all` model is the most likely to be useful.
In Figure [10\.4](ch-modeling.html#fig:roc-log-compare), we compare the ROC curves for all census models on the testing data set. Some data wrangling is necessary before we can gather the information to make these curves.
```
mods <- mods %>%
mutate(
y_hat_prob_test = map(
mod,
~pull(predict(.x, new_data = test, type = "prob"), `.pred_>50K`)
),
type = fct_reorder(type, sens_test, .desc = TRUE)
)
```
```
mods %>%
select(type, y_test, y_hat_prob_test) %>%
unnest(cols = c(y_test, y_hat_prob_test)) %>%
group_by(type) %>%
roc_curve(truth = y_test, y_hat_prob_test, event_level = "second") %>%
autoplot() +
geom_point(
data = mods,
aes(x = 1 - spec_test, y = sens_test, color = type),
size = 3
) +
scale_color_brewer("Model", palette = "Set2")
```
Figure 10\.4: Comparison of ROC curves across three logistic regression models on the Census testing data. The null model has a true positive rate of zero and lies along the diagonal. The full model is the best overall performer, as its curve lies furthest from the diagonal.
10\.4 Extended example: Who has diabetes?
-----------------------------------------
Consider the relationship between age and [*diabetes mellitus*](https://en.wikipedia.org/w/index.php?search=diabetes%20mellitus), a group of metabolic diseases characterized by high blood sugar levels.
As with many diseases, the risk of contracting diabetes increases with age and is associated with many other factors.
Age does not suggest a way to avoid diabetes: there is no way for you to change your age.
You can, however, change things like diet, physical fitness, etc.
Knowing what is predictive of diabetes can be helpful in practice, for instance, to design an efficient screening program to test people for the disease.
Let’s start simply. What is the relationship between age, body\-mass index (BMI), and diabetes for adults surveyed in `NHANES`?
Note that the overall rate of diabetes is relatively low.
```
library(NHANES)
people <- NHANES %>%
select(Age, Gender, Diabetes, BMI, HHIncome, PhysActive) %>%
drop_na()
glimpse(people)
```
```
Rows: 7,555
Columns: 6
$ Age <int> 34, 34, 34, 49, 45, 45, 45, 66, 58, 54, 58, 50, 33, 60,…
$ Gender <fct> male, male, male, female, female, female, female, male,…
$ Diabetes <fct> No, No, No, No, No, No, No, No, No, No, No, No, No, No,…
$ BMI <dbl> 32.2, 32.2, 32.2, 30.6, 27.2, 27.2, 27.2, 23.7, 23.7, 2…
$ HHIncome <fct> 25000-34999, 25000-34999, 25000-34999, 35000-44999, 750…
$ PhysActive <fct> No, No, No, No, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes,…
```
```
people %>%
group_by(Diabetes) %>%
count() %>%
mutate(pct = n / nrow(people))
```
```
# A tibble: 2 × 3
# Groups: Diabetes [2]
Diabetes n pct
<fct> <int> <dbl>
1 No 6871 0.909
2 Yes 684 0.0905
```
We can visualize any model in the data space.
In this case, we will tile the \\((Age, BMI)\\)\-plane with a fine grid of 10,000 points.
```
library(modelr)
num_points <- 100
fake_grid <- data_grid(
people,
Age = seq_range(Age, num_points),
BMI = seq_range(BMI, num_points)
)
```
Next, we will evaluate each of our four models on each grid point, taking care to retrieve not the classification itself, but the probability of having diabetes.
The null model considers no variable.
The next two models consider only age, or BMI, while the last model considers both.
```
dmod_null <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(Diabetes ~ 1, data = people)
dmod_log_1 <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(Diabetes ~ Age, data = people)
dmod_log_2 <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(Diabetes ~ BMI, data = people)
dmod_log_12 <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(Diabetes ~ Age + BMI, data = people)
bmi_mods <- tibble(
type = factor(
c("Null", "Logistic (Age)", "Logistic (BMI)", "Logistic (Age, BMI)")
),
mod = list(dmod_null, dmod_log_1, dmod_log_2, dmod_log_12),
y_hat = map(mod, predict, new_data = fake_grid, type = "prob")
)
```
Next, we add the grid data (`X`), and then use `map2()` to combine the predictions (`y_hat`) with the grid data.
```
bmi_mods <- bmi_mods %>%
mutate(
X = list(fake_grid),
yX = map2(y_hat, X, bind_cols)
)
```
Finally, we use `unnest()` to stretch the data frame out.
We now have a prediction at each of our 10,000 grid points for each of our four models.
```
res <- bmi_mods %>%
select(type, yX) %>%
unnest(cols = yX)
res
```
```
# A tibble: 40,000 × 5
type .pred_No .pred_Yes Age BMI
<fct> <dbl> <dbl> <dbl> <dbl>
1 Null 0.909 0.0905 12 13.3
2 Null 0.909 0.0905 12 14.0
3 Null 0.909 0.0905 12 14.7
4 Null 0.909 0.0905 12 15.4
5 Null 0.909 0.0905 12 16.0
6 Null 0.909 0.0905 12 16.7
7 Null 0.909 0.0905 12 17.4
8 Null 0.909 0.0905 12 18.1
9 Null 0.909 0.0905 12 18.8
10 Null 0.909 0.0905 12 19.5
# … with 39,990 more rows
```
Figure [10\.5](ch-modeling.html#fig:mod-log-compare) illustrates each model in the data space. Whereas the null model predicts the probability of diabetes to be constant irrespective of age and BMI, including age (BMI) as an explanatory variable allows the predicted probability to vary in the horizontal (vertical) direction.
Older patients and those with larger body mass have a higher probability of having diabetes.
Having both variables as covariates allows the probability to vary with respect to both age and BMI.
```
ggplot(data = res, aes(x = Age, y = BMI)) +
geom_tile(aes(fill = .pred_Yes), color = NA) +
geom_count(
data = people,
aes(color = Diabetes), alpha = 0.4
) +
scale_fill_gradient("Prob of\nDiabetes", low = "white", high = "red") +
scale_color_manual(values = c("gold", "black")) +
scale_size(range = c(0, 2)) +
scale_x_continuous(expand = c(0.02, 0)) +
scale_y_continuous(expand = c(0.02, 0)) +
facet_wrap(~fct_rev(type))
```
Figure 10\.5: Comparison of logistic regression models in the data space. Note the greater flexibility as more variables are introduced.
10\.5 Further resources
-----------------------
The **tidymodels** package and documentation contains many vignettes[15](#fn15) that go into further detail on how the package can be used.
10\.6 Exercises
---------------
**Problem 1 (Easy)**: In the first example in the chapter, a training dataset of 80% of the rows was created for the Census data.
What would be the tradeoffs of using a 90%/10% split instead?
**Problem 2 (Easy)**: Without using jargon, describe what a receiver operating characteristic (ROC) curve is and why it is important in predictive analytics and machine learning.
**Problem 3 (Medium)**: Investigators in the HELP (Health Evaluation and Linkage to Primary Care) study were interested in modeling the probability of being `homeless` (one or more nights spent on the street or in a shelter in the past six months vs. housed) as a function of age.
1. Generate a confusion matrix for the null model and interpret the result.
2. Fit and interpret a logistic regression model for the probability of being `homeless` as a function of age.
3. What is the predicted probability of being homeless for a 20-year-old? For a 40-year-old?
4. Generate a confusion matrix for the second model and interpret the result.
**Problem 4 (Medium)**: Investigators in the HELP (Health Evaluation and Linkage to Primary Care) study were interested in modeling associations between demographic factors and a baseline measure of depressive symptoms `cesd`.
They fit a linear regression model using the following predictors: `age`, `sex`, and `homeless` to the `HELPrct` data from the `mosaicData` package.
1. Calculate and interpret the coefficient of determination (\\(R^2\\)) for this model and the null model.
2. Calculate and interpret the root mean squared error for this model and for the null model.
3. Calculate and interpret the mean absolute error (MAE) for this model and the null model.
**Problem 5 (Medium)**: What impact does the random number seed have on our results?
1. Repeat the Census logistic regression model that controlled only for capital gains but using a different random number seed (365 instead of 364\) for the 80%/20% split. Would you expect big differences in the accuracy using the training data? Testing data?
2. Repeat the process using a random number seed of 366\. What do you conclude?
**Problem 6 (Hard)**: Smoking is an important public health concern. Use the `NHANES` data from the `NHANES` package to develop a logistic regression model that identifies predictors of current smoking among those 20 or older. (Hint: note that the `SmokeNow` variable is missing for those who have never smoked: you will need to recode the variable to construct your outcome variable.)
```
library(tidyverse)
library(NHANES)
mosaic::tally(~ SmokeNow + Smoke100, data = filter(NHANES, Age >= 20))
```
```
Smoke100
SmokeNow No Yes
No 0 1745
Yes 0 1466
<NA> 4024 0
```
10\.7 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-modeling.html\#modeling\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-modeling.html#modeling-online-exercises)
Chapter 11 Supervised learning
==============================
In this chapter, we will extend our discussion on predictive modeling to include many other models that are not based on regression.
The framework for model evaluation that we developed in Chapter [10](ch-modeling.html#ch:modeling) will remain useful.
We continue with the example about high earners in the 1994 United States Census.
```
library(tidyverse)
library(mdsr)
url <-
"http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
census <- read_csv(
url,
col_names = c(
"age", "workclass", "fnlwgt", "education",
"education_1", "marital_status", "occupation", "relationship",
"race", "sex", "capital_gain", "capital_loss", "hours_per_week",
"native_country", "income"
)
) %>%
mutate(income = factor(income))
library(tidymodels)
set.seed(364)
n <- nrow(census)
census_parts <- census %>%
initial_split(prop = 0.8)
train <- census_parts %>% training()
test <- census_parts %>% testing()
pi_bar <- train %>%
count(income) %>%
mutate(pct = n / sum(n)) %>%
filter(income == ">50K") %>%
pull(pct)
```
11\.1 Non\-regression classifiers
---------------------------------
The classifiers we built in Chapter [10](ch-modeling.html#ch:modeling) were fit using logistic regression.
These models were smooth, in that they are based on continuous [*parametric functions*](https://en.wikipedia.org/w/index.php?search=parametric%20functions).
The models we explore in this chapter are not necessarily continuous, nor are they necessarily expressed as parametric functions.
### 11\.1\.1 Decision trees
A decision tree (also known as a classification and regression tree[16](#fn16) or “CART”) is a tree\-like flowchart that assigns class labels to individual observations.
Each branch of the tree separates the records in the data set into increasingly “pure” (i.e., homogeneous) subsets, in the sense that they are more likely to share the same class label.
How do we construct these trees?
First, note that the number of possible decision trees grows exponentially with respect to the number of variables \\(p\\).
In fact, it has been proven that an efficient algorithm to determine the optimal decision tree almost certainly does not exist (Hyafil and Rivest 1976\).[17](#fn17)
The lack of a globally optimal algorithm means that there are several competing heuristics for building decision trees that employ greedy (i.e., locally optimal) strategies.
While the differences among these algorithms can mean that they will return different results (even on the same data set), we will simplify our presentation by restricting our discussion to [*recursive partitioning*](https://en.wikipedia.org/w/index.php?search=recursive%20partitioning) decision trees.
One **R** package that builds these decision trees is called **rpart**, which works in conjunction with **tidymodels**.
The partitioning in a decision tree follows [*Hunt’s algorithm*](https://en.wikipedia.org/w/index.php?search=Hunt's%20algorithm), which is itself recursive.
Suppose that we are somewhere in the decision tree, and that \\(D\_t \= (y\_t, \\mathbf{X}\_t)\\) is the set of records that are associated with node \\(t\\) and that \\(\\{y\_1, y\_2\\}\\) are the available class labels for the response variable.[18](#fn18) Then:
* If all records in \\(D\_t\\) belong to a single class, say, \\(y\_1\\), then \\(t\\) is a leaf node labeled as \\(y\_1\\).
* Otherwise, split the records into at least two child nodes, in such a way that the [*purity*](https://en.wikipedia.org/w/index.php?search=purity) of the new set of nodes exceeds some threshold. That is, the records are separated more distinctly into groups corresponding to the response class. In practice, there are several competitive methods for optimizing the purity of the candidate child nodes, and—as noted above—we don’t know the optimal way of doing this.
A decision tree works by running Hunt’s algorithm on the full training data set.
What does it mean to say that a set of records is “purer” than another set? Two popular methods for measuring the purity of a set of candidate child nodes are the [*Gini coefficient*](https://en.wikipedia.org/w/index.php?search=Gini%20coefficient) and the [*information*](https://en.wikipedia.org/w/index.php?search=information) gain. Both are implemented in **rpart**, which uses the Gini measurement by default. If \\(w\_i(t)\\) is the fraction of records belonging to class \\(i\\) at node \\(t\\), then
\\\[
Gini(t) \= 1 \- \\sum\_{i\=1}^{2} (w\_i(t))^2 \\, , \\qquad Entropy(t) \= \- \\sum\_{i\=1}^2 w\_i(t) \\cdot \\log\_2 w\_i(t)
\\]
The information gain is the change in entropy. The following example should help to clarify how this works in practice.
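Before turning to that example, here is a quick sketch of how these two purity measures could be computed by hand for a single node (the helper functions are purely illustrative; **rpart** performs these calculations internally):

```
# purity of a node, given the vector of class fractions w
gini_node <- function(w) 1 - sum(w^2)
entropy_node <- function(w) -sum(w * log2(w))

# a node with a 76%/24% class split (roughly the class balance
# at the root of the census tree fit below)
w <- c(0.76, 0.24)
gini_node(w)
entropy_node(w)
```

A perfectly pure node has Gini coefficient and entropy equal to zero, while a 50/50 split maximizes both.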
```
mod_dtree <- decision_tree(mode = "classification") %>%
set_engine("rpart") %>%
fit(income ~ capital_gain, data = train)
split_val <- mod_dtree$fit$splits %>%
as_tibble() %>%
pull(index)
```
Let’s consider the optimal split for `income` using only the variable `capital_gain`, which measures the amount each person paid in capital gains taxes. According to our tree, the optimal split occurs for those paying more than $5,119 in capital gains.
```
mod_dtree
```
```
parsnip model object
Fit time: 86ms
n= 26048
node), split, n, loss, yval, (yprob)
* denotes terminal node
1) root 26048 6280 <=50K (0.759 0.241)
2) capital_gain< 5.12e+03 24785 5090 <=50K (0.795 0.205) *
3) capital_gain>=5.12e+03 1263 67 >50K (0.053 0.947) *
```
Although nearly 80% of those who paid less than $5,119 in capital gains tax made less than $50k, about 95% of those who paid more than $5,119 in capital gains tax made *more* than $50k. Thus, splitting (partitioning) the records according to this criterion helps to divide them into relatively purer subsets. We can see this distinction geometrically as we divide the training records in Figure [11\.1](ch-learningI.html#fig:census-rpart).
```
train_plus <- train %>%
mutate(hi_cap_gains = capital_gain >= split_val)
ggplot(data = train_plus, aes(x = capital_gain, y = income)) +
geom_count(
aes(color = hi_cap_gains),
position = position_jitter(width = 0, height = 0.1),
alpha = 0.5
) +
geom_vline(xintercept = split_val, color = "dodgerblue", lty = 2) +
scale_x_log10(labels = scales::dollar)
```
Figure 11\.1: A single partition of the `census` data set using the capital gain variable to determine the split. Color and the vertical line at $5,119 in capital gains tax indicate the split. If one paid more than this amount, one almost certainly made more than $50,000 in income. On the other hand, if one paid less than this amount in capital gains, one almost certainly made less than $50,000\.
Comparing Figure [11\.1](ch-learningI.html#fig:census-rpart) to Figure [10\.1](ch-modeling.html#fig:log-cap-gains) reveals how the non\-parametric decision tree models differs geometrically from the parametric logistic regression model. In this case, the perfectly vertical split achieved by the decision tree is a mathematical impossibility for the logistic regression model.
Thus, this decision tree uses a single variable (`capital_gain`) to partition the data set into two parts: those who paid more than $5,119 in capital gains, and those who did not. For the latter—who make up 0\.952 of all observations—we get 79\.5% right by predicting that they made less than $50k. For the former, we get 94\.7% right by predicting that they made more than $50k. Thus, our overall accuracy jumps to 80\.2%, easily besting the 75\.9% in the null model. Note that this performance is comparable to the performance of the single variable logistic regression model from Chapter [10](ch-modeling.html#ch:modeling).
How did the algorithm know to pick $5,119 as the threshold value? It tried all of the sensible values, and this was the one that lowered the [*Gini coefficient*](https://en.wikipedia.org/w/index.php?search=Gini%20coefficient) the most. This can be done efficiently, since thresholds will always be between actual values of the splitting variable, and thus there are only \\(O(n)\\) possible splits to consider.
(We use [*Big O notation*](https://en.wikipedia.org/w/index.php?search=Big%20O%20notation) to denote the complexity of an algorithm, where \\(O(n)\\) means that the number of calculations scales with the sample size.)
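To see what this search looks like, here is a simplified brute\-force sketch that scans candidate thresholds of `capital_gain` and records the weighted Gini impurity of each resulting split. This is an illustration of the idea rather than **rpart**’s actual implementation, and the helpers (`gini2()`, `weighted_gini()`) are our own:

```
# two-class Gini impurity for a node with proportion p in class ">50K"
gini2 <- function(p) 1 - p^2 - (1 - p)^2

# candidate thresholds: midpoints between consecutive distinct values
candidates <- train %>%
  distinct(capital_gain) %>%
  arrange(capital_gain) %>%
  mutate(threshold = (capital_gain + lead(capital_gain)) / 2) %>%
  filter(!is.na(threshold)) %>%
  pull(threshold)

weighted_gini <- function(threshold, data) {
  data %>%
    mutate(side = capital_gain < threshold) %>%
    group_by(side) %>%
    summarize(n = n(), p = mean(income == ">50K"), .groups = "drop") %>%
    summarize(impurity = sum(n / sum(n) * gini2(p))) %>%
    pull(impurity)
}

# evaluate a thinned set of candidates to keep the computation quick
tibble(threshold = candidates[seq(1, length(candidates), by = 5)]) %>%
  mutate(impurity = map_dbl(threshold, weighted_gini, data = train)) %>%
  slice_min(impurity, n = 1)
```

The threshold that minimizes the weighted impurity should land close to the $5,119 split chosen by **rpart**.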
So far, we have only used one variable, but we can build a decision tree for `income` in terms of all of the other variables in the data set. (We have left out `native_country` because it is a categorical variable with many levels, which can make some learning models computationally infeasible.)
```
form <- as.formula(
"income ~ age + workclass + education + marital_status +
occupation + relationship + race + sex +
capital_gain + capital_loss + hours_per_week"
)
```
```
mod_tree <- decision_tree(mode = "classification") %>%
set_engine("rpart") %>%
fit(form, data = train)
mod_tree
```
```
parsnip model object
Fit time: 1.2s
n= 26048
node), split, n, loss, yval, (yprob)
* denotes terminal node
1) root 26048 6280 <=50K (0.7587 0.2413)
2) relationship=Not-in-family,Other-relative,Own-child,Unmarried 14231 941 <=50K (0.9339 0.0661)
4) capital_gain< 7.07e+03 13975 696 <=50K (0.9502 0.0498) *
5) capital_gain>=7.07e+03 256 11 >50K (0.0430 0.9570) *
3) relationship=Husband,Wife 11817 5340 <=50K (0.5478 0.4522)
6) education=10th,11th,12th,1st-4th,5th-6th,7th-8th,9th,Assoc-acdm,Assoc-voc,HS-grad,Preschool,Some-college 8294 2770 <=50K (0.6655 0.3345)
12) capital_gain< 5.1e+03 7875 2360 <=50K (0.6998 0.3002) *
13) capital_gain>=5.1e+03 419 9 >50K (0.0215 0.9785) *
7) education=Bachelors,Doctorate,Masters,Prof-school 3523 953 >50K (0.2705 0.7295) *
```
In this more complicated tree, the optimal first split now does not involve `capital_gain`, but rather `relationship`.
A plot (shown in Figure [11\.2](ch-learningI.html#fig:maptree)) that is more informative is available through the **partykit** package, which contains a series of functions for working with decision trees.
```
library(rpart)
library(partykit)
plot(as.party(mod_tree$fit))
```
Figure 11\.2: Decision tree for income using the `census` data.
Figure [11\.2](ch-learningI.html#fig:maptree) shows the decision tree itself, while Figure [11\.3](ch-learningI.html#fig:census-rpart2) shows how the tree recursively partitions the original data. Here, the first question is whether `relationship` status is `Husband` or `Wife`. If not, then a capital gains threshold of $7,073\.50 is used to determine one’s income. 95\.7% of those who paid more than the threshold earned more than $50k, but 95% of those who paid less than the threshold did not. For those whose `relationship` status was `Husband` or `Wife`, the next question was whether you had a college degree. If so, then the model predicts with 72\.9% accuracy that you made more than $50k. If not, then again we ask about capital gains tax paid, but this time the threshold is $5,095\.50\. 97\.9% of those who were a husband or a wife, had no college degree, but paid more than that amount in capital gains tax, made more than $50k. On the other hand, 70% of those who paid below the threshold made less than $50k.
```
train_plus <- train_plus %>%
mutate(
husband_or_wife = relationship %in% c("Husband", "Wife"),
college_degree = husband_or_wife & education %in%
c("Bachelors", "Doctorate", "Masters", "Prof-school")
) %>%
bind_cols(
predict(mod_tree, new_data = train, type = "class")
) %>%
rename(income_dtree = .pred_class)
cg_splits <- tribble(
~husband_or_wife, ~vals,
TRUE, 5095.5,
FALSE, 7073.5
)
```
```
ggplot(data = train_plus, aes(x = capital_gain, y = income)) +
geom_count(
aes(color = income_dtree, shape = college_degree),
position = position_jitter(width = 0, height = 0.1),
alpha = 0.5
) +
facet_wrap(~ husband_or_wife) +
geom_vline(
data = cg_splits, aes(xintercept = vals),
color = "dodgerblue", lty = 2
) +
scale_x_log10()
```
Figure 11\.3: Graphical depiction of the full recursive partitioning decision tree classifier. On the left, those whose relationship status is neither ‘Husband’ nor ‘Wife’ are classified based on their capital gains paid. On the right, not only is the capital gains threshold different, but the decision is also predicated on whether the person has a college degree.
Since there are exponentially many trees, how did the algorithm know to pick this one? The [*complexity parameter*](https://en.wikipedia.org/w/index.php?search=complexity%20parameter) controls whether to keep or prune possible splits. That is, the algorithm considers many possible splits (i.e., new branches on the tree), but prunes them if they do not sufficiently improve the predictive power of the model (i.e., bear fruit). By default, each split has to decrease the error by a factor of 1%. This will help to avoid *overfitting* (more on that later).
Note that as we add more splits to our model, the relative error decreases.
```
printcp(mod_tree$fit)
```
```
Classification tree:
`rpart::rpart`(data = train)
Variables actually used in tree construction:
[1] capital_gain education relationship
Root node error: 6285/26048 = 0.241
n= 26048
CP nsplit rel error xerror xstd
1 0.1286 0 1.000 1.000 0.01099
2 0.0638 2 0.743 0.743 0.00985
3 0.0372 3 0.679 0.679 0.00950
4 0.0100 4 0.642 0.642 0.00929
```
We can also use the model evaluation metrics we developed in Chapter [10](ch-modeling.html#ch:modeling). Namely, the confusion matrix and the accuracy.
```
library(yardstick)
pred <- train %>%
select(income) %>%
bind_cols(
predict(mod_tree, new_data = train, type = "class")
) %>%
rename(income_dtree = .pred_class)
confusion <- pred %>%
conf_mat(truth = income, estimate = income_dtree)
confusion
```
```
Truth
Prediction <=50K >50K
<=50K 18790 3060
>50K 973 3225
```
```
accuracy(pred, income, income_dtree)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.845
```
In this case, the accuracy of the decision tree classifier is now 84\.5%, a considerable improvement over the null model. Again, this is comparable to the analogous logistic regression model we build using this same set of variables in Chapter [10](ch-modeling.html#ch:modeling).
Figure [11\.4](ch-learningI.html#fig:autoplot-confusion) displays the confusion matrix for this model.
```
autoplot(confusion) +
geom_label(
aes(
x = (xmax + xmin) / 2,
y = (ymax + ymin) / 2,
label = c("TN", "FP", "FN", "TP")
)
)
```
Figure 11\.4: Visual summary of the predictive accuracy of our decision tree model. The largest rectangle represents the cases that are true negatives.
#### 11\.1\.1\.1 Tuning parameters
The decision tree that we built previously was based on the default parameters. Most notably, our tree was pruned so that only splits that decreased the overall lack of fit by 1% were retained. If we lower this threshold to 0\.2%, then we get a more complex tree.
```
mod_tree2 <- decision_tree(mode = "classification") %>%
set_engine("rpart", control = rpart.control(cp = 0.002)) %>%
fit(form, data = train)
```
Can you find the accuracy of this more complex tree? Is it more or less accurate than our original tree?
### 11\.1\.2 Random forests
A natural extension of a decision tree is a [*random forest*](https://en.wikipedia.org/w/index.php?search=random%20forest). A random forest is a collection of decision trees that are aggregated by majority rule. In a sense, a random forest is like a collection of bootstrapped (see Chapter [9](ch-foundations.html#ch:foundations)) decision trees. A random forest is constructed by:
* Choosing the number of decision trees to grow (controlled by the `trees` argument) and the number of variables to consider in each tree (`mtry`)
* Randomly selecting the rows of the data frame [*with replacement*](https://en.wikipedia.org/w/index.php?search=with%20replacement)
* Randomly selecting `mtry` variables from the data frame
* Building a decision tree on the resulting data set
* Repeating this procedure `trees` times
A prediction for a new observation is made by taking the majority rule from all of the decision trees in the forest.
Random forests are available in **R** via the **randomForest** package. They can be very effective but are sometimes computationally expensive.
```
mod_forest <- rand_forest(
mode = "classification",
mtry = 3,
trees = 201
) %>%
set_engine("randomForest") %>%
fit(form, data = train)
pred <- pred %>%
bind_cols(
predict(mod_forest, new_data = train, type = "class")
) %>%
rename(income_rf = .pred_class)
pred %>%
conf_mat(income, income_rf)
```
```
Truth
Prediction <=50K >50K
<=50K 19199 1325
>50K 564 4960
```
```
pred %>%
accuracy(income, income_rf)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.927
```
Because each tree in a random forest uses a different set of variables, it is possible to keep track of which variables seem to be the most consistently influential. This is captured by the notion of [*importance*](https://en.wikipedia.org/w/index.php?search=importance). While—unlike p\-values in a regression model—there is no formal statistical inference here, importance plays an analogous role in that it may help to generate hypotheses. Here, we see that `capital_gain` and `age` seem to be influential, while `race` and `sex` do not.
```
randomForest::importance(mod_forest$fit) %>%
as_tibble(rownames = "variable") %>%
arrange(desc(MeanDecreaseGini))
```
```
# A tibble: 11 × 2
variable MeanDecreaseGini
<chr> <dbl>
1 capital_gain 1178.
2 age 1108.
3 relationship 1009.
4 education 780.
5 marital_status 671.
6 hours_per_week 667.
7 occupation 625.
8 capital_loss 394.
9 workclass 311.
10 race 131.
11 sex 99.3
```
The results are put into a `tibble` (simple data frame) to facilitate further wrangling.
A model object of class `randomForest` also has a `predict()` method for making new predictions.
#### 11\.1\.2\.1 Tuning parameters
Hastie, Tibshirani, and Friedman (2009\) recommend using \\(\\sqrt{p}\\) variables in each classification tree (and \\(p/3\\) for each regression tree), and this is the default behavior in **randomForest**.
However, this is a parameter that can be tuned for a particular application.
The number of trees is another parameter that can be tuned—we simply picked a reasonably large odd number.
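As a minimal sketch of what such tuning might look like, we can refit the forest for a few values of `mtry` and compare training accuracy. The `forest_fit()` and `forest_accuracy()` helpers below are our own illustrative wrappers; we evaluate on the training set for simplicity, although the cross\-validation ideas discussed later in the chapter would be preferable, and fitting several forests can be slow:

```
forest_fit <- function(m) {
  rand_forest(mode = "classification", mtry = m, trees = 201) %>%
    set_engine("randomForest") %>%
    fit(form, data = train)
}
forest_accuracy <- function(mod) {
  predict(mod, new_data = train) %>%
    bind_cols(train %>% select(income)) %>%
    accuracy(income, .pred_class) %>%
    pull(.estimate)
}
tibble(mtry = c(2, 3, 5)) %>%
  mutate(train_accuracy = map_dbl(mtry, ~ forest_accuracy(forest_fit(.x))))
```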
### 11\.1\.3 Nearest neighbor
Thus far, we have focused on using data to build models that we can then use to predict outcomes on a new set of data. A slightly different approach is offered by [*lazy learners*](https://en.wikipedia.org/w/index.php?search=lazy%20learners), which seek to predict outcomes without constructing a “model.” A very simple, yet widely\-used approach is [*\\(k\\)\-nearest neighbor*](https://en.wikipedia.org/w/index.php?search=$k$-nearest%20neighbor).
Recall that data with \\(p\\) attributes (explanatory variables) are manifest as points in a \\(p\\)\-dimensional space.
The [*Euclidean distance*](https://en.wikipedia.org/w/index.php?search=Euclidean%20distance) between any two points in that space can be easily calculated in the usual way as the square root of the sum of the squared deviations.
Thus, it makes sense to talk about the *distance* between two points in this \\(p\\)\-dimensional space, and as a result, it makes sense to talk about the distance between two observations (rows of the data frame).
Nearest\-neighbor classifiers exploit this property by assuming that observations that are “close” to each other probably have similar outcomes.
Suppose we have a set of training data \\((\\mathbf{X}, y) \\in \\mathbb{R}^{n \\times p} \\times \\mathbb{R}^n\\). For some positive integer \\(k\\), a \\(k\\)\-nearest neighbor algorithm classifies a new observation \\(x^\*\\) by:
* Finding the \\(k\\) observations in the training data \\(\\mathbf{X}\\) that are closest to \\(x^\*\\), according to some distance metric (usually Euclidean). Let \\(D(x^\*) \\subseteq (\\mathbf{X}, y)\\) denote this set of observations.
* For some aggregate function \\(f\\), computing \\(f(y)\\) for the \\(k\\) values of \\(y\\) in \\(D(x^\*)\\) and assigning this value (\\(y^\*\\)) as the predicted value of the response associated with \\(x^\*\\). The logic is that since \\(x^\*\\) is similar to the \\(k\\) observations in \\(D(x^\*)\\), the response associated with \\(x^\*\\) is likely to be similar to the responses in \\(D(x^\*)\\). In practice, simply taking the value shared by the majority (or a plurality) of the \\(y\\)’s is enough.
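These two steps translate almost directly into code. Below is a minimal from\-scratch sketch for classifying a single new observation using just two predictors and Euclidean distance (the `knn_by_hand()` helper is purely illustrative and ignores the scaling issue discussed below):

```
# classify one new observation by majority vote among its k nearest neighbors
knn_by_hand <- function(new_obs, train_data, k = 5) {
  train_data %>%
    mutate(
      dist = sqrt((age - new_obs$age)^2 + (capital_gain - new_obs$capital_gain)^2)
    ) %>%
    slice_min(dist, n = k, with_ties = FALSE) %>%
    count(income, sort = TRUE) %>%
    slice(1) %>%
    pull(income)
}

knn_by_hand(tibble(age = 40, capital_gain = 6000), train, k = 5)
```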
Note that a \\(k\\)\-NN classifier does not need to process the training data before making new classifications—it can do this on the fly. A \\(k\\)\-NN classifier is provided by the `kknn()` function in the **kknn** package.
Note that since the distance metric only makes sense for quantitative variables, we have to restrict our data set to those first.
Setting the `scale` to `TRUE` rescales the explanatory variables to have the same [*standard deviation*](https://en.wikipedia.org/w/index.php?search=standard%20deviation).
We choose \\(k\=5\\) neighbors for reasons that we explain in the next section.
```
library(kknn)
# distance metric only works with quantitative variables
train_q <- train %>%
select(income, where(is.numeric), -fnlwgt)
mod_knn <- nearest_neighbor(neighbors = 5, mode = "classification") %>%
set_engine("kknn", scale = TRUE) %>%
fit(income ~ ., data = train_q)
pred <- pred %>%
bind_cols(
predict(mod_knn, new_data = train, type = "class")
) %>%
rename(income_knn = .pred_class)
pred %>%
conf_mat(income, income_knn)
```
```
Truth
Prediction <=50K >50K
<=50K 18088 2321
>50K 1675 3964
```
```
pred %>%
accuracy(income, income_knn)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.847
```
\\(k\\)\-NN classifiers are widely used in part because they are easy to understand and code. They also don’t require any pre\-processing time. However, predictions can be slow, since the data must be processed at that time.
The usefulness of \\(k\\)\-NN can depend importantly on the geometry of the data. Are the points clustered together? What is the distribution of the distances among each variable? A wider scale on one variable can dwarf a narrow scale on another variable.
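A quick way to see the effect of scale is to compare the distance between two observations computed on the raw and on the standardized quantitative variables (the first two rows are arbitrary; this is only an illustration):

```
quant <- train_q %>%
  select(-income)
dist(quant[1:2, ])         # distance in raw units
dist(scale(quant)[1:2, ])  # distance after standardizing each column
```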
#### 11\.1\.3\.1 Tuning parameters
An appropriate choice of \\(k\\) will depend on the application and the data.
Cross\-validation can be used to optimize the choice of \\(k\\).
Here, we compute the accuracy for several values of \\(k\\).
```
knn_fit <- function(.data, k) {
nearest_neighbor(neighbors = k, mode = "classification") %>%
set_engine("kknn", scale = TRUE) %>%
fit(income ~ ., data = .data)
}
knn_accuracy <- function(mod, .new_data) {
mod %>%
predict(new_data = .new_data) %>%
mutate(income = .new_data$income) %>%
accuracy(income, .pred_class) %>%
pull(.estimate)
}
```
```
ks <- c(1:10, 15, 20, 30, 40, 50)
```
```
knn_tune <- tibble(
k = ks,
mod = map(k, knn_fit, .data = train_q),
train_accuracy = map_dbl(mod, knn_accuracy, .new_data = train_q)
)
knn_tune
```
```
# A tibble: 5 × 3
k mod train_accuracy
<dbl> <list> <dbl>
1 1 <fit[+]> 0.839
2 5 <fit[+]> 0.847
3 10 <fit[+]> 0.848
4 20 <fit[+]> 0.843
5 40 <fit[+]> 0.839
```
In Figure [11\.5](ch-learningI.html#fig:cval), we show how the accuracy decreases as \\(k\\) increases.
That is, if one seeks to maximize the accuracy rate *on this data set*, then the optimal value of \\(k\\) is 5\.[19](#fn19) We will see why this method of optimizing the value of the parameter \\(k\\) is not robust when we learn about [*cross\-validation*](https://en.wikipedia.org/w/index.php?search=cross-validation) below.
```
ggplot(data = knn_tune, aes(x = k, y = train_accuracy)) +
geom_point() +
geom_line() +
ylab("Accuracy rate")
```
Figure 11\.5: Performance of nearest\-neighbor classifier for different choices of \\(k\\) on census training data.
### 11\.1\.4 Naïve Bayes
Another relatively simple classifier is based on Bayes Theorem.
Bayes theorem is a very useful result from probability that allows conditional probabilities to be calculated from other conditional probabilities. It states:
\\\[
\\Pr(y\|x) \= \\frac{\\Pr(xy)}{\\Pr(x)} \= \\frac{\\Pr(x\|y) \\Pr(y)}{\\Pr(x )} \\,.
\\]
How does this relate to a naïve Bayes classifier?
Suppose that we have a binary response variable \\(y\\) and we want to classify a new observation \\(x^\*\\) (recall that \\(x\\) is a vector). Then if we can compute that the conditional probability \\(\\Pr(y \= 1 \| x^\*) \> \\Pr(y\=0 \| x^\*)\\), we have evidence that \\(y\=1\\) is a more likely outcome for \\(x^\*\\) than \\(y\=0\\). This is the crux of a naïve Bayes classifier.
In practice, how we arrive at the estimates \\(\\Pr(y\=1\|x^\*)\\) are based on Bayes theorem and estimates of conditional probabilities derived from the training data \\((\\mathbf{X}, y)\\).
Consider the first person in the training data set. This is a 31\-year\-old married white male with some college education, working in the private sector in a clerical role. In reality, this person made less than $50,000\.
```
train %>%
as.data.frame() %>%
head(1)
```
```
age workclass fnlwgt education education_1 marital_status
1 31 Private 291052 Some-college 10 Married-civ-spouse
occupation relationship race sex capital_gain capital_loss
1 Adm-clerical Husband White Male 0 2051
hours_per_week native_country income
1 40 United-States <=50K
```
The naïve Bayes classifier would make a prediction for this person based on the probabilities observed in the data. For example, in this case the probability \\(\\Pr(\\text{male} \| \\text{\> 50k})\\) of being male given that you had high income is 0\.845, while the unconditional probability of being male is \\(\\Pr(\\text{male}) \= 0\.670\\). We know that the overall probability of having high income is \\(\\Pr(\\text{\> 50k}) \=\\) 0\.241\. Bayes’s rule tells us that the resulting probability of having high income given that one is male is:
\\\[
\\Pr(\\text{\> 50k} \| \\text{male}) \= \\frac{\\Pr(\\text{male} \| \\text{\> 50k}) \\cdot \\Pr(\\text{\> 50k})}{\\Pr(\\text{male})} \= \\frac{0\.845 \\cdot 0\.241}{0\.670} \= 0\.304 \\,.
\\]
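These quantities can be recomputed directly from the training data (a quick sketch; the exact values will vary slightly with the random training/testing split):

```
train %>%
  summarize(
    p_male = mean(sex == "Male"),
    p_rich = mean(income == ">50K"),
    p_male_given_rich = mean(sex[income == ">50K"] == "Male")
  ) %>%
  mutate(p_rich_given_male = p_male_given_rich * p_rich / p_male)
```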
This simple example illustrates the case where we have a single explanatory variable (e.g., `sex`), but the naïve Bayes model extends to multiple variables by making the sometimes overly simplistic assumption that the explanatory variables are conditionally independent (hence the name “naïve”).
A naïve Bayes classifier is provided in **R** by the `naive_Bayes()` function from the **discrim** package. Note that like `lm()` and `glm()`, a `naive_Bayes()` object has a `predict()` method.
```
library(discrim)
mod_nb <- naive_Bayes(mode = "classification") %>%
set_engine("klaR") %>%
fit(form, data = train)
pred <- pred %>%
bind_cols(
predict(mod_nb, new_data = train, type = "class")
) %>%
rename(income_nb = .pred_class)
accuracy(pred, income, income_nb)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.824
```
### 11\.1\.5 Artificial neural networks
An [*artificial neural network*](https://en.wikipedia.org/w/index.php?search=artificial%20neural%20network) is yet another classifier. While the impetus for the artificial neural network comes from a biological understanding of the brain, the implementation here is entirely mathematical.
```
mod_nn <- mlp(mode = "classification", hidden_units = 5) %>%
set_engine("nnet") %>%
fit(form, data = train)
```
A neural network is a directed graph (see Chapter [20](ch-netsci.html#ch:netsci)) that proceeds in stages. First, there is one node for each input variable. In this case, because each factor level counts as its own variable, there are 57 input variables.
These are shown on the left in Figure [11\.6](ch-learningI.html#fig:plot-nnet). Next, there are a series of nodes specified as a [*hidden layer*](https://en.wikipedia.org/w/index.php?search=hidden%20layer).
In this case, we have specified five nodes for the hidden layer.
These are shown in the middle of Figure [11\.6](ch-learningI.html#fig:plot-nnet), and each of the input variables are connected to these hidden nodes. Each of the hidden nodes is connected to the single output variable. In addition, `nnet()` adds two control nodes, the first of which is connected to the five hidden nodes, and the latter is connected to the output node. The total number of edges is thus \\(pk \+ k \+ k \+ 1\\), where \\(k\\) is the number of hidden nodes. In this case, there are \\(57 \\cdot 5 \+ 5 \+ 5 \+ 1 \= 296\\) edges.
Figure 11\.6: Visualization of an artificial neural network. The 57 input variables are shown on the left, with the five hidden nodes in the middle, and the single output variable on the right.
The algorithm iteratively searches for the optimal set of weights for each edge. Once the weights are computed, the neural network can make predictions for new inputs by running these values through the network.
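As a sanity check on the edge count above, the fitted model stores one weight per edge. Assuming the underlying **nnet** fit is exposed as `mod_nn$fit` (as **parsnip** does for this engine), its weight vector should have length 296:

```
# one weight per edge, including those attached to the two control (bias) nodes
length(mod_nn$fit$wts)
```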
```
pred <- pred %>%
bind_cols(
predict(mod_nn, new_data = train, type = "class")
) %>%
rename(income_nn = .pred_class)
accuracy(pred, income, income_nn)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.844
```
### 11\.1\.6 Ensemble methods
The benefit of having multiple classifiers is that they can be easily combined into a single classifier. Note that there is a real probabilistic benefit to having multiple prediction systems, especially if they are independent. For example, if you have three independent classifiers with error rates \\(\\epsilon\_1, \\epsilon\_2\\), and \\(\\epsilon\_3\\), then the probability that all three are wrong is \\(\\prod\_{i\=1}^3 \\epsilon\_i\\). Since \\(\\epsilon\_i \< 1\\) for all \\(i\\), this probability is lower than any of the individual error rates. Moreover, the probability that at least one of the classifiers is correct is \\(1 \- \\prod\_{i\=1}^3 \\epsilon\_i\\), which will get closer to 1 as you add more classifiers—even if you have not improved the individual error rates!
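A quick numerical illustration of this argument, using made\-up error rates for three independent classifiers:

```
eps <- c(0.15, 0.20, 0.25)  # hypothetical error rates
prod(eps)                   # probability that all three are wrong
1 - prod(eps)               # probability that at least one is correct
```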
Consider combining the five classifiers that we have built previously. Suppose that we build an ensemble classifier by taking the majority vote from each. Does this ensemble classifier outperform any of the individual classifiers? We can use the `rowwise()` and `c_across()` functions to easily compute these values.
```
pred <- pred %>%
rowwise() %>%
mutate(
rich_votes = sum(c_across(contains("income_")) == ">50K"),
income_ensemble = factor(ifelse(rich_votes >= 3, ">50K", "<=50K"))
) %>%
ungroup()
pred %>%
select(-rich_votes) %>%
pivot_longer(
cols = -income,
names_to = "model",
values_to = "prediction"
) %>%
group_by(model) %>%
summarize(accuracy = accuracy_vec(income, prediction)) %>%
arrange(desc(accuracy))
```
```
# A tibble: 6 × 2
model accuracy
<chr> <dbl>
1 income_rf 0.927
2 income_ensemble 0.884
3 income_knn 0.847
4 income_dtree 0.845
5 income_nn 0.844
6 income_nb 0.824
```
In this case, the ensemble model achieves an 88\.4% accuracy rate, which is lower than the random forest but higher than each of the other individual classifiers.
Thus, ensemble methods are a simple but effective way of hedging your bets.
11\.2 Parameter tuning
----------------------
In Section [11\.1\.3](ch-learningI.html#sec:knn), we showed how, after a certain point, the accuracy rate *on the training data* of the \\(k\\)\-NN model decreased as \\(k\\) increased.
That is, as information from more neighbors—who are necessarily farther away from the target observation—was incorporated into the prediction for any given observation, those predictions got worse.
This is not surprising, since the actual observation is in the training data set and that observation necessarily has distance 0 from the target observation.
The error rate is not zero for \\(k\=1\\) likely due to many points having the exact same coordinates in this five\-dimensional space.
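One way to check this conjecture is to count how many rows of the quantitative training data duplicate an earlier row (a quick diagnostic; the exact count depends on the random split):

```
train_q %>%
  select(-income) %>%
  duplicated() %>%
  sum()
```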
However, as seen in Figure [11\.7](ch-learningI.html#fig:knn-bias-var), the story is different when evaluating the \\(k\\)\-NN model *on the testing set*.
Here, the truth is *not* in the training set, and so pooling information across more observations leads to *better* predictions—at least for a while.
Again, this should not be surprising—we saw in Chapter [9](ch-foundations.html#ch:foundations) how means are less variable than individual observations.
Generally, one hopes to minimize the misclassification rate on data that the model has not seen (i.e., the testing data) without introducing too much bias.
In this case, that point occurs somewhere between \\(k\=5\\) and \\(k\=10\\).
We can see this in Figure [11\.7](ch-learningI.html#fig:knn-bias-var), since the accuracy on the testing data set improves rapidly up to \\(k\=5\\), but then very slowly for larger values of \\(k\\).
```
test_q <- test %>%
select(income, where(is.numeric), -fnlwgt)
knn_tune <- knn_tune %>%
mutate(test_accuracy = map_dbl(mod, knn_accuracy, .new_data = test_q))
knn_tune %>%
select(-mod) %>%
pivot_longer(-k, names_to = "type", values_to = "accuracy") %>%
ggplot(aes(x = k, y = accuracy, color = factor(type))) +
geom_point() +
geom_line() +
ylab("Accuracy") +
scale_color_discrete("Set")
```
Figure 11\.7: Performance of nearest\-neighbor classifier for different choices of \\(k\\) on census training and testing data.
11\.3 Example: Evaluation of income models redux
------------------------------------------------
Just as we did in Section [10\.3\.5](ch-modeling.html#sec:evaluate), we should evaluate these new models on both the training and testing sets.
First, we build the null model, which predicts the same probability of earning more than $50,000 for everyone, regardless of the explanatory variables. (See Appendix [E](ch-regression.html#ch:regression) for an introduction to logistic regression.)
We’ll add this to the list of models that we built previously in this chapter.
```
mod_null <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(income ~ 1, data = train)
mod_log_all <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(form, data = train)
mods <- tibble(
type = c(
"null", "log_all", "tree", "forest",
"knn", "neural_net", "naive_bayes"
),
mod = list(
mod_null, mod_log_all, mod_tree, mod_forest,
mod_knn, mod_nn, mod_nb
)
)
```
While the models we have fit have different classes in **R** (see [B.3\.6](ch-R.html#appR:attr)), each of those classes has a `predict()` method that will generate predictions.
```
map(mods$mod, class)
```
```
[[1]]
[1] "_glm" "model_fit"
[[2]]
[1] "_glm" "model_fit"
[[3]]
[1] "_rpart" "model_fit"
[[4]]
[1] "_randomForest" "model_fit"
[[5]]
[1] "_train.kknn" "model_fit"
[[6]]
[1] "_nnet.formula" "model_fit"
[[7]]
[1] "_NaiveBayes" "model_fit"
```
Thus, we can iterate through the list of models and apply the appropriate `predict()` method to each object.
```
mods <- mods %>%
mutate(
y_train = list(pull(train, income)),
y_test = list(pull(test, income)),
y_hat_train = map(
mod,
~pull(predict(.x, new_data = train, type = "class"), .pred_class)
),
y_hat_test = map(
mod,
~pull(predict(.x, new_data = test, type = "class"), .pred_class)
)
)
mods
```
```
# A tibble: 7 × 6
type mod y_train y_test y_hat_train y_hat_test
<chr> <list> <list> <list> <list> <list>
1 null <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
2 log_all <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
3 tree <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
4 forest <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
5 knn <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
6 neural_net <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
7 naive_bayes <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
```
We can also add our majority rule ensemble classifier.
First, we write a function that will compute the majority vote when given a list of predictions.
```
predict_ensemble <- function(x) {
majority <- ceiling(length(x) / 2)
x %>%
data.frame() %>%
rowwise() %>%
mutate(
rich_votes = sum(c_across() == ">50K"),
.pred_class = factor(ifelse(rich_votes >= majority , ">50K", "<=50K"))
) %>%
pull(.pred_class) %>%
fct_relevel("<=50K")
}
```
Next, we use `bind_rows()` to add an additional row to our models data frame with the relevant information for the ensemble classifier.
```
ensemble <- tibble(
type = "ensemble",
mod = NA,
y_train = list(predict_ensemble(pull(mods, y_train))),
y_test = list(predict_ensemble(pull(mods, y_test))),
y_hat_train = list(predict_ensemble(pull(mods, y_hat_train))),
y_hat_test = list(predict_ensemble(pull(mods, y_hat_test))),
)
mods <- mods %>%
bind_rows(ensemble)
```
Now that we have the predictions for each model, we just need to compare them to the truth (`y`), and tally the results. We can do this using the `map2_dbl()` function from the **purrr** package.
```
mods <- mods %>%
mutate(
accuracy_train = map2_dbl(y_train, y_hat_train, accuracy_vec),
accuracy_test = map2_dbl(y_test, y_hat_test, accuracy_vec),
sens_test = map2_dbl(
y_test,
y_hat_test,
sens_vec,
event_level = "second"
),
spec_test = map2_dbl(y_test,
y_hat_test,
spec_vec,
event_level = "second"
)
)
```
```
mods %>%
select(-mod, -matches("^y")) %>%
arrange(desc(accuracy_test))
```
```
# A tibble: 8 × 5
type accuracy_train accuracy_test sens_test spec_test
<chr> <dbl> <dbl> <dbl> <dbl>
1 forest 0.927 0.866 0.628 0.941
2 ensemble 0.875 0.855 0.509 0.963
3 log_all 0.852 0.849 0.598 0.928
4 neural_net 0.844 0.843 0.640 0.906
5 tree 0.845 0.842 0.514 0.945
6 naive_bayes 0.824 0.824 0.328 0.980
7 knn 0.847 0.788 0.526 0.869
8 null 0.759 0.761 0 1
```
While the random forest performed notably better than the other models on the training set, its accuracy dropped the most on the testing set.
We note that even though the \\(k\\)\-NN model slightly outperformed the decision tree on the training set, the decision tree performed better on the testing set.
The ensemble model and the logistic regression model performed quite well.
In this case, however, the accuracy rates of all models were in the same ballpark on the testing set.
In Figure [11\.8](ch-learningI.html#fig:roc-compare), we compare the ROC curves for all census models on the testing data set.
```
mods <- mods %>%
filter(type != "ensemble") %>%
mutate(
y_hat_prob_test = map(
mod,
~pull(predict(.x, new_data = test, type = "prob"), `.pred_>50K`)
),
type = fct_reorder(type, sens_test, .desc = TRUE)
)
```
```
mods %>%
select(type, y_test, y_hat_prob_test) %>%
unnest(cols = c(y_test, y_hat_prob_test)) %>%
group_by(type) %>%
roc_curve(truth = y_test, y_hat_prob_test, event_level = "second") %>%
autoplot() +
geom_point(
data = mods,
aes(x = 1 - spec_test, y = sens_test, color = type),
size = 3
)
```
Figure 11\.8: Comparison of ROC curves across five models on the Census testing data. The null model has a true positive rate of zero and lies along the diagonal. The naïve Bayes model has a lower true positive rate than the other models. The random forest may be the best overall performer, as its curve lies furthest from the diagonal.
11\.4 Extended example: Who has diabetes this time?
---------------------------------------------------
Recall the example about diabetes in Section [10\.4](ch-modeling.html#sec:diabetes).
```
library(NHANES)
people <- NHANES %>%
select(Age, Gender, Diabetes, BMI, HHIncome, PhysActive) %>%
drop_na()
glimpse(people)
```
```
Rows: 7,555
Columns: 6
$ Age <int> 34, 34, 34, 49, 45, 45, 45, 66, 58, 54, 58, 50, 33, 60,…
$ Gender <fct> male, male, male, female, female, female, female, male,…
$ Diabetes <fct> No, No, No, No, No, No, No, No, No, No, No, No, No, No,…
$ BMI <dbl> 32.22, 32.22, 32.22, 30.57, 27.24, 27.24, 27.24, 23.67,…
$ HHIncome <fct> 25000-34999, 25000-34999, 25000-34999, 35000-44999, 750…
$ PhysActive <fct> No, No, No, No, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes,…
```
```
people %>%
group_by(Diabetes) %>%
count() %>%
mutate(pct = n / nrow(people))
```
```
# A tibble: 2 × 3
# Groups: Diabetes [2]
Diabetes n pct
<fct> <int> <dbl>
1 No 6871 0.909
2 Yes 684 0.0905
```
We illustrate the use of a decision tree using all of the variables except for household income in Figure [11\.9](ch-learningI.html#fig:diabetes-rpart). From the original data shown in Figure [11\.10](ch-learningI.html#fig:diabetes), it appears that older people, and those with higher BMIs, are more likely to have diabetes.
```
mod_diabetes <- decision_tree(mode = "classification") %>%
set_engine(
"rpart",
control = rpart.control(cp = 0.005, minbucket = 30)
) %>%
fit(Diabetes ~ Age + BMI + Gender + PhysActive, data = people)
mod_diabetes
```
```
parsnip model object
Fit time: 80ms
n= 7555
node), split, n, loss, yval, (yprob)
* denotes terminal node
1) root 7555 684 No (0.909464 0.090536)
2) Age< 52.5 5092 188 No (0.963079 0.036921) *
3) Age>=52.5 2463 496 No (0.798620 0.201380)
6) BMI< 39.985 2301 416 No (0.819209 0.180791) *
7) BMI>=39.985 162 80 No (0.506173 0.493827)
14) Age>=67.5 50 18 No (0.640000 0.360000) *
15) Age< 67.5 112 50 Yes (0.446429 0.553571)
30) Age< 60.5 71 30 No (0.577465 0.422535) *
31) Age>=60.5 41 9 Yes (0.219512 0.780488) *
```
```
plot(as.party(mod_diabetes$fit))
```
Figure 11\.9: Illustration of decision tree for diabetes.
If you are 52 or younger, then you very likely do not have diabetes. However, if you are 53 or older, your risk is higher. If your BMI is above 40—indicating obesity—then your risk increases again. Strangely—and this may be evidence of overfitting—your risk is highest if you are between 61 and 67 years old. This partition of the data is overlaid on Figure [11\.10](ch-learningI.html#fig:diabetes).
```
segments <- tribble(
~Age, ~xend, ~BMI, ~yend,
52.5, 100, 39.985, 39.985,
67.5, 67.5, 39.985, Inf,
60.5, 60.5, 39.985, Inf
)
ggplot(data = people, aes(x = Age, y = BMI)) +
geom_count(aes(color = Diabetes), alpha = 0.5) +
geom_vline(xintercept = 52.5) +
geom_segment(
data = segments,
aes(xend = xend, yend = yend)
) +
scale_fill_gradient(low = "white", high = "red") +
scale_color_manual(values = c("gold", "black")) +
annotate(
"rect", fill = "blue", alpha = 0.1,
xmin = 60.5, xmax = 67.5, ymin = 39.985, ymax = Inf
)
```
Figure 11\.10: Scatterplot of age against BMI for individuals in the NHANES data set. The black dots represent a collection of people with diabetes, while the gold dots represent those without diabetes.
Figure [11\.10](ch-learningI.html#fig:diabetes) is a nice way to visualize a complex model. We have plotted our data in two quantitative dimensions (`Age` and `BMI`) while using color to represent our binary response variable (`Diabetes`). The decision tree simply partitions this two\-dimensional space into axis\-parallel rectangles. The model makes the same prediction for all observations within each rectangle. It is not hard to imagine—although it is hard to draw—how this recursive partitioning will scale to higher dimensions.
Note, however, that Figure [11\.10](ch-learningI.html#fig:diabetes) provides a clear illustration of the strengths and weaknesses of models based on recursive partitioning. These types of models can *only* produce axis\-parallel rectangles in which all points in each rectangle receive the same prediction. This makes these models relatively easy to understand and apply, but it is not hard to imagine a situation in which they might perform miserably (e.g., what if the relationship was non\-linear?). Here again, this underscores the importance of visualizing your model *in the data space* (Hadley Wickham, Cook, and Hofmann 2015\) as demonstrated in Figure [11\.10](ch-learningI.html#fig:diabetes).
### 11\.4\.1 Comparing all models
We close the loop by extending this model visualization exercise to all of our models.
Once again, we tile the \\((Age, BMI)\\)\-plane with a fine grid of 10,000 points.
```
library(modelr)
fake_grid <- data_grid(
people,
Age = seq_range(Age, 100),
BMI = seq_range(BMI, 100)
)
```
Next, we evaluate each of our six models on each grid point, taking care to retrieve not the classification itself, but the probability of having diabetes.
```
form <- as.formula("Diabetes ~ Age + BMI")
dmod_null <- logistic_reg(mode = "classification") %>%
set_engine("glm")
dmod_tree <- decision_tree(mode = "classification") %>%
set_engine("rpart", control = rpart.control(cp = 0.005, minbucket = 30))
dmod_forest <- rand_forest(
mode = "classification",
trees = 201,
mtry = 2
) %>%
set_engine("randomForest")
dmod_knn <- nearest_neighbor(mode = "classification", neighbors = 5) %>%
set_engine("kknn", scale = TRUE)
dmod_nnet <- mlp(mode = "classification", hidden_units = 6) %>%
set_engine("nnet")
dmod_nb <- naive_Bayes() %>%
set_engine("klaR")
bmi_mods <- tibble(
type = c(
"Logistic Regression", "Decision Tree", "Random Forest",
"k-Nearest-Neighbor", "Neural Network", "Naive Bayes"
),
spec = list(
dmod_null, dmod_tree, dmod_forest, dmod_knn, dmod_nnet, dmod_nb
),
mod = map(spec, fit, form, data = people),
y_hat = map(mod, predict, new_data = fake_grid, type = "prob")
)
bmi_mods <- bmi_mods %>%
mutate(
X = list(fake_grid),
yX = map2(y_hat, X, bind_cols)
)
```
```
res <- bmi_mods %>%
select(type, yX) %>%
unnest(cols = yX)
res
```
```
# A tibble: 60,000 × 5
type .pred_No .pred_Yes Age BMI
<chr> <dbl> <dbl> <dbl> <dbl>
1 Logistic Regression 0.998 0.00234 12 13.3
2 Logistic Regression 0.998 0.00249 12 14.0
3 Logistic Regression 0.997 0.00265 12 14.7
4 Logistic Regression 0.997 0.00282 12 15.4
5 Logistic Regression 0.997 0.00300 12 16.0
6 Logistic Regression 0.997 0.00319 12 16.7
7 Logistic Regression 0.997 0.00340 12 17.4
8 Logistic Regression 0.996 0.00361 12 18.1
9 Logistic Regression 0.996 0.00384 12 18.8
10 Logistic Regression 0.996 0.00409 12 19.5
# … with 59,990 more rows
```
Figure [11\.11](ch-learningI.html#fig:mod-compare) illustrates each model in the data space. The differences between the models are striking. The rigidity of the decision tree is apparent, especially relative to the flexibility of the \\(k\\)\-NN model.
The \\(k\\)\-NN model and the random forest have similar flexibility, but regions in the former are based on polygons, while regions in the latter are based on rectangles.
Making \\(k\\) larger would result in smoother \\(k\\)\-NN predictions, while making \\(k\\) smaller would make the predictions more bold.
The logistic regression model makes predictions with a smooth grade, while the naïve Bayes model produces a non\-linear horizon. The neural network has made relatively uniform predictions in this case.
```
ggplot(data = res, aes(x = Age, y = BMI)) +
geom_tile(aes(fill = .pred_Yes), color = NA) +
geom_count(
data = people,
aes(color = Diabetes), alpha = 0.4
) +
scale_fill_gradient("Prob of\nDiabetes", low = "white", high = "red") +
scale_color_manual(values = c("gold", "black")) +
scale_size(range = c(0, 2)) +
scale_x_continuous(expand = c(0.02,0)) +
scale_y_continuous(expand = c(0.02,0)) +
facet_wrap(~type, ncol = 2)
```
Figure 11\.11: Comparison of predictive models in the data space. Note the rigidity of the decision tree, the flexibility of \\(k\\)\-NN and the random forest, and the bold predictions of \\(k\\)\-NN.
11\.5 Regularization
--------------------
Regularization is a technique where constraints are added to a regression
model to prevent overfitting.
Two techniques for [*regularization*](https://en.wikipedia.org/w/index.php?search=regularization)
include [*ridge regression*](https://en.wikipedia.org/w/index.php?search=ridge%20regression) and the [*LASSO*](https://en.wikipedia.org/w/index.php?search=LASSO) (least absolute
shrinkage and selection operator).
Instead of fitting a model that
minimizes \\(\\sum\_{i\=1}^n (y \- \\hat{y})^2\\) where \\(\\hat{y}\=\\bf{X}'\\beta\\),
ridge regression adds a constraint that \\(\\sum\_{j\=1}^p \\beta\_j^2 \\leq c\_1\\)
and the LASSO imposes the constraint that \\(\\sum\_{j\=1}^p \|\\beta\_j\| \\leq c\_2\\),
for some constants \\(c\_1\\) and \\(c\_2\\).
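Equivalently, both problems are often written in penalized form, in which a single tuning parameter \\(\\lambda\\) trades off goodness of fit against the size of the coefficients (this \\(\\lambda\\) corresponds to the `penalty` argument passed to the **glmnet** engine below):
\\\[
\\min\_{\\beta} \\sum\_{i\=1}^n (y\_i \- \\hat{y}\_i)^2 \+ \\lambda \\sum\_{j\=1}^p \\beta\_j^2 \\quad \\text{(ridge)}, \\qquad
\\min\_{\\beta} \\sum\_{i\=1}^n (y\_i \- \\hat{y}\_i)^2 \+ \\lambda \\sum\_{j\=1}^p \|\\beta\_j\| \\quad \\text{(LASSO)}.
\\\]
Larger values of \\(\\lambda\\) correspond to smaller values of \\(c\_1\\) and \\(c\_2\\), and hence to more aggressive shrinkage.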
These methods are considered part of statistical or machine learning
since they automate model selection: ridge regression shrinks coefficients toward zero,
while the LASSO can set some coefficients exactly to zero, effectively selecting a subset of predictors.
Such [*shrinkage*](https://en.wikipedia.org/w/index.php?search=shrinkage) may induce bias but decrease variability.
These regularization methods are particularly helpful when the set of predictors is large.
To help illustrate this process we consider a model for the flight delays example introduced in Chapter [9](ch-foundations.html#ch:foundations).
Here we are interested in arrival delays for flights from the two New York City airports that service California (EWR and JFK) to four California airports.
```
library(nycflights13)
California <- flights %>%
filter(
dest %in% c("LAX", "SFO", "OAK", "SJC"),
!is.na(arr_delay)
) %>%
mutate(
day = as.Date(time_hour),
dow = as.character(lubridate::wday(day, label = TRUE)),
month = as.factor(month),
hour = as.factor(hour)
)
dim(California)
```
```
[1] 29836 20
```
We begin by splitting the data into a training set (70%) and testing set (30%).
```
library(broom)
set.seed(386)
California_split <- initial_split(California, prop = 0.7)
California_train <- training(California_split)
California_test <- testing(California_split)
```
Now we can build a model that includes variables we want to use to explain arrival delay, including hour of day, originating airport, arrival airport, carrier, month of the year, and day of week.
```
flight_model <- formula(
"arr_delay ~ origin + dest + hour + carrier + month + dow")
mod_reg <- linear_reg() %>%
set_engine("lm") %>%
fit(flight_model, data = California_train)
tidy(mod_reg) %>%
head(4)
```
```
# A tibble: 4 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -16.0 6.12 -2.61 0.00905
2 originJFK 2.88 0.783 3.67 0.000239
3 destOAK -4.61 3.10 -1.49 0.136
4 destSFO 1.89 0.620 3.05 0.00227
```
Our regression coefficient for `originJFK` indicates that controlling for other factors, we would anticipate an additional 2\.9\-minute delay flying from JFK compared to EWR (Newark), the reference airport.
```
California_test %>%
select(arr_delay) %>%
bind_cols(predict(mod_reg, new_data = California_test)) %>%
metrics(truth = arr_delay, estimate = .pred)
```
```
# A tibble: 3 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 rmse standard 42.6
2 rsq standard 0.0870
3 mae standard 26.1
```
Next we fit a LASSO model to the same data.
```
mod_lasso <- linear_reg(penalty = 0.01, mixture = 1) %>%
set_engine("glmnet") %>%
fit(flight_model, data = California_train)
tidy(mod_lasso) %>%
head(4)
```
```
# A tibble: 4 × 3
term estimate penalty
<chr> <dbl> <dbl>
1 (Intercept) -11.4 0.01
2 originJFK 2.78 0.01
3 destOAK -4.40 0.01
4 destSFO 1.88 0.01
```
We see that the coefficients for the LASSO tend to be attenuated slightly towards 0 (e.g., `originJFK` has shifted from 2\.88 to 2\.78).
```
California_test %>%
select(arr_delay) %>%
bind_cols(predict(mod_lasso, new_data = California_test)) %>%
metrics(truth = arr_delay, estimate = .pred)
```
```
# A tibble: 3 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 rmse standard 42.6
2 rsq standard 0.0871
3 mae standard 26.1
```
In this example, the LASSO hasn’t improved the performance of our model on the test data.
In situations where there are many more predictors and the model may be overfit, it will tend to do better.
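For comparison, a ridge regression fit uses the same interface with `mixture = 0` (a minimal sketch; as with the LASSO above, the penalty value of 0\.01 is illustrative rather than tuned):

```
mod_ridge <- linear_reg(penalty = 0.01, mixture = 0) %>%
  set_engine("glmnet") %>%
  fit(flight_model, data = California_train)

California_test %>%
  select(arr_delay) %>%
  bind_cols(predict(mod_ridge, new_data = California_test)) %>%
  metrics(truth = arr_delay, estimate = .pred)
```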
11\.6 Further resources
-----------------------
G. James et al. (2013\) provides an accessible introduction to these topics (see [http://www\-bcf.usc.edu/\~gareth/ISL](http://www-bcf.usc.edu/~gareth/ISL)).
A graduate\-level version of Hastie, Tibshirani, and Friedman (2009\) is freely downloadable at [http://www\-stat.stanford.edu/\~tibs/ElemStatLearn](http://www-stat.stanford.edu/~tibs/ElemStatLearn).
Another helpful source is Tan, Steinbach, and Kumar (2006\), which has more of a computer science flavor.
Breiman (2001\) is a classic paper that describes two cultures in statistics: prediction and modeling.
Bradley Efron (2020\) offers a more recent perspective.
The `ctree()` function from the **partykit** package builds a recursive partitioning model using conditional inference trees.
The functionality is similar to `rpart()` but uses different criteria to determine the splits. The **partykit** package also includes a `cforest()` function.
The **caret** package provides a number of useful functions for training and plotting classification and regression models.
The **glmnet** and **lars** packages include support for regularization methods.
The **RWeka** package provides an **R** interface to the comprehensive [Weka](http://www.cs.waikato.ac.nz/ml/weka/) machine learning library, which is written in Java.
11\.7 Exercises
---------------
**Problem 1 (Easy)**: Use the `HELPrct` data from the `mosaicData` to fit a tree model to the following predictors: `age`, `sex`, `cesd`, and `substance`.
1. Plot the resulting tree and interpret the results.
2. What is the accuracy of your decision tree?
**Problem 2 (Medium)**: Fit a series of supervised learning models to predict arrival delays for flights from
New York to `SFO` using the `nycflights13` package. How do the conclusions change from
the multiple regression model presented in the Statistical Foundations chapter?
**Problem 3 (Medium)**: Use the College Scorecard Data from the `CollegeScorecard` package to model student debt as a function of institutional characteristics using the techniques described in this chapter. Compare and contrast results from at least three methods.
```
# remotes::install_github("Amherst-Statistics/CollegeScorecard")
library(CollegeScorecard)
```
**Problem 4 (Medium)**: The `nasaweather` package contains data about tropical `storms` from 1995–2005\. Consider the scatterplot between the `wind` speed and `pressure` of these `storms` shown below.
```
library(mdsr)
library(nasaweather)
ggplot(data = storms, aes(x = pressure, y = wind, color = type)) +
geom_point(alpha = 0.5)
```
The `type` of storm is present in the data, and four types are given: extratropical, hurricane, tropical depression, and tropical storm. There are [complicated and not terribly precise definitions](https://en.wikipedia.org/wiki/Tropical_cyclone#Classifications.2C_terminology.2C_and_naming) for storm type. Build a classifier for the `type` of each storm as a function of its `wind` speed and `pressure`.
Why would a decision tree make a particularly good classifier for these data?
Visualize your classifier in the data space.
**Problem 5 (Medium)**: Pre\-natal care has been shown to be associated with better health of babies and mothers. Use the `NHANES` data set in the `NHANES` package to develop a predictive model for the `PregnantNow` variable. What did you learn about who is pregnant?
**Problem 6 (Hard)**: The ability to get a good night’s sleep is correlated with many positive health outcomes. The `NHANES` data set contains a binary variable `SleepTrouble` that indicates whether each person has trouble sleeping.
1. For each of the following models:
* Build a classifier for SleepTrouble
* Report its effectiveness on the NHANES training data
* Make an appropriate visualization of the model
* Interpret the results. What have you learned about people’s sleeping habits?
You may use whatever variable you like, except for `SleepHrsNight`.
* Null model
* Logistic regression
* Decision tree
* Random forest
* Neural network
* Naive Bayes
* \\(k\\)\-NN
2. Repeat the previous exercise, but now use the quantitative response variable `SleepHrsNight`. Build and interpret the following models:
* Null model
* Multiple regression
* Regression tree
* Random forest
* Ridge regression
* LASSO
3. Repeat either of the previous exercises, but this time first separate the `NHANES` data set uniformly at random into 75% training and 25% testing sets. Compare the effectiveness of each model on training vs. testing data.
4. Repeat the first exercise in part (a), but for the variable `PregnantNow`. What did you learn about who is pregnant?
11\.8 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-learningI.html\#learningI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-learningI.html#learningI-online-exercises)
---
11\.1 Non\-regression classifiers
---------------------------------
The classifiers we built in Chapter [10](ch-modeling.html#ch:modeling) were fit using logistic regression.
These models were smooth, in that they are based on continuous [*parametric functions*](https://en.wikipedia.org/w/index.php?search=parametric%20functions).
The models we explore in this chapter are not necessarily continuous, nor are they necessarily expressed as parametric functions.
### 11\.1\.1 Decision trees
A decision tree (also known as a classification and regression tree[16](#fn16) or “CART”) is a tree\-like flowchart that assigns class labels to individual observations.
Each branch of the tree separates the records in the data set into increasingly “pure” (i.e., homogeneous) subsets, in the sense that they are more likely to share the same class label.
How do we construct these trees?
First, note that the number of possible decision trees grows exponentially with respect to the number of variables \\(p\\).
In fact, it has been proven that an efficient algorithm to determine the optimal decision tree almost certainly does not exist (Hyafil and Rivest 1976\).[17](#fn17)
The lack of a globally optimal algorithm means that there are several competing heuristics for building decision trees that employ greedy (i.e., locally optimal) strategies.
While the differences among these algorithms can mean that they will return different results (even on the same data set), we will simplify our presentation by restricting our discussion to [*recursive partitioning*](https://en.wikipedia.org/w/index.php?search=recursive%20partitioning) decision trees.
One **R** package that builds these decision trees is called **rpart**, which works in conjunction with **tidymodels**.
The partitioning in a decision tree follows [*Hunt’s algorithm*](https://en.wikipedia.org/w/index.php?search=Hunt's%20algorithm), which is itself recursive.
Suppose that we are somewhere in the decision tree, and that \\(D\_t \= (y\_t, \\mathbf{X}\_t)\\) is the set of records that are associated with node \\(t\\) and that \\(\\{y\_1, y\_2\\}\\) are the available class labels for the response variable.[18](#fn18) Then:
* If all records in \\(D\_t\\) belong to a single class, say, \\(y\_1\\), then \\(t\\) is a leaf node labeled as \\(y\_1\\).
* Otherwise, split the records into at least two child nodes, in such a way that the [*purity*](https://en.wikipedia.org/w/index.php?search=purity) of the new set of nodes exceeds some threshold. That is, the records are separated more distinctly into groups corresponding to the response class. In practice, there are several competitive methods for optimizing the purity of the candidate child nodes, and—as noted above—we don’t know the optimal way of doing this.
A decision tree works by running Hunt’s algorithm on the full training data set.
What does it mean to say that a set of records is “purer” than another set? Two popular methods for measuring the purity of a set of candidate child nodes are the [*Gini coefficient*](https://en.wikipedia.org/w/index.php?search=Gini%20coefficient) and the [*information*](https://en.wikipedia.org/w/index.php?search=information) gain. Both are implemented in **rpart**, which uses the Gini measurement by default. If \\(w\_i(t)\\) is the fraction of records belonging to class \\(i\\) at node \\(t\\), then
\\\[
Gini(t) \= 1 \- \\sum\_{i\=1}^{2} (w\_i(t))^2 \\, , \\qquad Entropy(t) \= \- \\sum\_{i\=1}^2 w\_i(t) \\cdot \\log\_2 w\_i(t)
\\]
The information gain is the change in entropy. The following example should help to clarify how this works in practice.
```
mod_dtree <- decision_tree(mode = "classification") %>%
set_engine("rpart") %>%
fit(income ~ capital_gain, data = train)
split_val <- mod_dtree$fit$splits %>%
as_tibble() %>%
pull(index)
```
Let’s consider the optimal split for `income` using only the variable `capital_gain`, which measures the amount each person paid in capital gains taxes. According to our tree, the optimal split occurs for those paying more than $5,119 in capital gains.
```
mod_dtree
```
```
parsnip model object
Fit time: 86ms
n= 26048
node), split, n, loss, yval, (yprob)
* denotes terminal node
1) root 26048 6280 <=50K (0.759 0.241)
2) capital_gain< 5.12e+03 24785 5090 <=50K (0.795 0.205) *
3) capital_gain>=5.12e+03 1263 67 >50K (0.053 0.947) *
```
Although about 80% of those who paid less than $5,119 in capital gains tax made less than $50k, about 95% of those who paid more than $5,119 in capital gains tax made *more* than $50k. Thus, splitting (partitioning) the records according to this criterion helps to divide them into relatively purer subsets. We can see this distinction geometrically as we divide the training records in Figure [11\.1](ch-learningI.html#fig:census-rpart).
```
train_plus <- train %>%
mutate(hi_cap_gains = capital_gain >= split_val)
ggplot(data = train_plus, aes(x = capital_gain, y = income)) +
geom_count(
aes(color = hi_cap_gains),
position = position_jitter(width = 0, height = 0.1),
alpha = 0.5
) +
geom_vline(xintercept = split_val, color = "dodgerblue", lty = 2) +
scale_x_log10(labels = scales::dollar)
```
Figure 11\.1: A single partition of the `census` data set using the capital gain variable to determine the split. Color and the vertical line at $5,119 in capital gains tax indicate the split. If one paid more than this amount, one almost certainly made more than $50,000 in income. On the other hand, if one paid less than this amount in capital gains, one almost certainly made less than $50,000\.
Comparing Figure [11\.1](ch-learningI.html#fig:census-rpart) to Figure [10\.1](ch-modeling.html#fig:log-cap-gains) reveals how the non\-parametric decision tree models differs geometrically from the parametric logistic regression model. In this case, the perfectly vertical split achieved by the decision tree is a mathematical impossibility for the logistic regression model.
Thus, this decision tree uses a single variable (`capital_gain`) to partition the data set into two parts: those who paid less than $5,119 in capital gains, and those who paid more. For the former—who make up 0\.952 of all observations—we get 79\.5% right by predicting that they made less than $50k. For the latter, we get 94\.7% right by predicting that they made more than $50k. Thus, our overall accuracy jumps to 80\.2%, easily besting the 75\.9% in the null model. Note that this performance is comparable to the performance of the single variable logistic regression model from Chapter [10](ch-modeling.html#ch:modeling).
How did the algorithm know to pick $5,119 as the threshold value? It tried all of the sensible values, and this was the one that lowered the [*Gini coefficient*](https://en.wikipedia.org/w/index.php?search=Gini%20coefficient) the most. This can be done efficiently, since thresholds will always be between actual values of the splitting variable, and thus there are only \\(O(n)\\) possible splits to consider.
(We use [*Big O notation*](https://en.wikipedia.org/w/index.php?search=Big%20O%20notation) to denote the complexity of an algorithm, where \\(O(n)\\) means that the number of calculations scales with the sample size.)
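As a check on this logic, here is a minimal sketch (not from the text) that computes the Gini coefficient of the root node and the weighted Gini of the two child nodes, using the class proportions and counts printed by `mod_dtree` above. The split lowers the (weighted) Gini coefficient from about 0.37 to about 0.32.
```
# Gini for the root node vs. the weighted Gini after the capital gains split
gini <- function(w) 1 - sum(w^2)
gini(c(0.759, 0.241))                       # root node: about 0.37
(24785 / 26048) * gini(c(0.795, 0.205)) +   # child: capital_gain < 5119
  (1263 / 26048) * gini(c(0.053, 0.947))    # child: capital_gain >= 5119
```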
So far, we have only used one variable, but we can build a decision tree for `income` in terms of all of the other variables in the data set. (We have left out `native_country` because it is a categorical variable with many levels, which can make some learning models computationally infeasible.)
```
form <- as.formula(
"income ~ age + workclass + education + marital_status +
occupation + relationship + race + sex +
capital_gain + capital_loss + hours_per_week"
)
```
```
mod_tree <- decision_tree(mode = "classification") %>%
set_engine("rpart") %>%
fit(form, data = train)
mod_tree
```
```
parsnip model object
Fit time: 1.2s
n= 26048
node), split, n, loss, yval, (yprob)
* denotes terminal node
1) root 26048 6280 <=50K (0.7587 0.2413)
2) relationship=Not-in-family,Other-relative,Own-child,Unmarried 14231 941 <=50K (0.9339 0.0661)
4) capital_gain< 7.07e+03 13975 696 <=50K (0.9502 0.0498) *
5) capital_gain>=7.07e+03 256 11 >50K (0.0430 0.9570) *
3) relationship=Husband,Wife 11817 5340 <=50K (0.5478 0.4522)
6) education=10th,11th,12th,1st-4th,5th-6th,7th-8th,9th,Assoc-acdm,Assoc-voc,HS-grad,Preschool,Some-college 8294 2770 <=50K (0.6655 0.3345)
12) capital_gain< 5.1e+03 7875 2360 <=50K (0.6998 0.3002) *
13) capital_gain>=5.1e+03 419 9 >50K (0.0215 0.9785) *
7) education=Bachelors,Doctorate,Masters,Prof-school 3523 953 >50K (0.2705 0.7295) *
```
In this more complicated tree, the optimal first split now does not involve `capital_gain`, but rather `relationship`.
A plot (shown in Figure [11\.2](ch-learningI.html#fig:maptree)) that is more informative is available through the **partykit** package, which contains a series of functions for working with decision trees.
```
library(rpart)
library(partykit)
plot(as.party(mod_tree$fit))
```
Figure 11\.2: Decision tree for income using the `census` data.
Figure [11\.2](ch-learningI.html#fig:maptree) shows the decision tree itself, while Figure [11\.3](ch-learningI.html#fig:census-rpart2) shows how the tree recursively partitions the original data. Here, the first question is whether `relationship` status is `Husband` or `Wife`. If not, then a capital gains threshold of $7,073\.50 is used to determine one’s income. 95\.7% of those who paid more than the threshold earned more than $50k, but 95% of those who paid less than the threshold did not. For those whose `relationship` status was `Husband` or `Wife`, the next question was whether they had a college degree. If so, then the model predicts with 72\.9% accuracy that they made more than $50k. If not, then again we ask about capital gains tax paid, but this time the threshold is $5,095\.50\. 97\.9% of those who were a husband or wife with no college degree, but who paid more than that amount in capital gains tax, made more than $50k. On the other hand, 70% of those who paid below the threshold made less than $50k.
```
train_plus <- train_plus %>%
mutate(
husband_or_wife = relationship %in% c("Husband", "Wife"),
college_degree = husband_or_wife & education %in%
c("Bachelors", "Doctorate", "Masters", "Prof-school")
) %>%
bind_cols(
predict(mod_tree, new_data = train, type = "class")
) %>%
rename(income_dtree = .pred_class)
cg_splits <- tribble(
~husband_or_wife, ~vals,
TRUE, 5095.5,
FALSE, 7073.5
)
```
```
ggplot(data = train_plus, aes(x = capital_gain, y = income)) +
geom_count(
aes(color = income_dtree, shape = college_degree),
position = position_jitter(width = 0, height = 0.1),
alpha = 0.5
) +
facet_wrap(~ husband_or_wife) +
geom_vline(
data = cg_splits, aes(xintercept = vals),
color = "dodgerblue", lty = 2
) +
scale_x_log10()
```
Figure 11\.3: Graphical depiction of the full recursive partitioning decision tree classifier. On the left, those whose relationship status is neither ‘Husband’ nor ‘Wife’ are classified based on their capital gains paid. On the right, not only is the capital gains threshold different, but the decision is also predicated on whether the person has a college degree.
Since there are exponentially many trees, how did the algorithm know to pick this one? The [*complexity parameter*](https://en.wikipedia.org/w/index.php?search=complexity%20parameter) controls whether to keep or prune possible splits. That is, the algorithm considers many possible splits (i.e., new branches on the tree), but prunes them if they do not sufficiently improve the predictive power of the model (i.e., bear fruit). By default, each split has to decrease the error by a factor of 1%. This will help to avoid *overfitting* (more on that later).
Note that as we add more splits to our model, the relative error decreases.
```
printcp(mod_tree$fit)
```
```
Classification tree:
`rpart::rpart`(data = train)
Variables actually used in tree construction:
[1] capital_gain education relationship
Root node error: 6285/26048 = 0.241
n= 26048
CP nsplit rel error xerror xstd
1 0.1286 0 1.000 1.000 0.01099
2 0.0638 2 0.743 0.743 0.00985
3 0.0372 3 0.679 0.679 0.00950
4 0.0100 4 0.642 0.642 0.00929
```
We can also use the model evaluation metrics we developed in Chapter [10](ch-modeling.html#ch:modeling). Namely, the confusion matrix and the accuracy.
```
library(yardstick)
pred <- train %>%
select(income) %>%
bind_cols(
predict(mod_tree, new_data = train, type = "class")
) %>%
rename(income_dtree = .pred_class)
confusion <- pred %>%
conf_mat(truth = income, estimate = income_dtree)
confusion
```
```
Truth
Prediction <=50K >50K
<=50K 18790 3060
>50K 973 3225
```
```
accuracy(pred, income, income_dtree)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.845
```
In this case, the accuracy of the decision tree classifier is now 84\.5%, a considerable improvement over the null model. Again, this is comparable to the analogous logistic regression model we built using this same set of variables in Chapter [10](ch-modeling.html#ch:modeling).
Figure [11\.4](ch-learningI.html#fig:autoplot-confusion) displays the confusion matrix for this model.
```
autoplot(confusion) +
geom_label(
aes(
x = (xmax + xmin) / 2,
y = (ymax + ymin) / 2,
label = c("TN", "FP", "FN", "TP")
)
)
```
Figure 11\.4: Visual summary of the predictive accuracy of our decision tree model. The largest rectangle represents the cases that are true negatives.
#### 11\.1\.1\.1 Tuning parameters
The decision tree that we built previously was based on the default parameters. Most notably, our tree was pruned so that only splits that decreased the overall lack of fit by 1% were retained. If we lower this threshold to 0\.2%, then we get a more complex tree.
```
mod_tree2 <- decision_tree(mode = "classification") %>%
set_engine("rpart", control = rpart.control(cp = 0.002)) %>%
fit(form, data = train)
```
Can you find the accuracy of this more complex tree? Is it more or less accurate than our original tree?
### 11\.1\.2 Random forests
A natural extension of a decision tree is a [*random forest*](https://en.wikipedia.org/w/index.php?search=random%20forest). A random forest is a collection of decision trees that are aggregated by majority rule. In a sense, a random forest is like a collection of bootstrapped (see Chapter [9](ch-foundations.html#ch:foundations)) decision trees. A random forest is constructed by:
* Choosing the number of decision trees to grow (controlled by the `trees` argument) and the number of variables to consider in each tree (`mtry`)
* Randomly selecting the rows of the data frame [*with replacement*](https://en.wikipedia.org/w/index.php?search=with%20replacement)
* Randomly selecting `mtry` variables from the data frame
* Building a decision tree on the resulting data set
* Repeating this procedure `trees` times
A prediction for a new observation is made by taking the majority rule from all of the decision trees in the forest.
Random forests are available in **R** via the **randomForest** package. They can be very effective but are sometimes computationally expensive.
```
mod_forest <- rand_forest(
mode = "classification",
mtry = 3,
trees = 201
) %>%
set_engine("randomForest") %>%
fit(form, data = train)
pred <- pred %>%
bind_cols(
predict(mod_forest, new_data = train, type = "class")
) %>%
rename(income_rf = .pred_class)
pred %>%
conf_mat(income, income_rf)
```
```
Truth
Prediction <=50K >50K
<=50K 19199 1325
>50K 564 4960
```
```
pred %>%
accuracy(income, income_rf)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.927
```
Because each tree in a random forest uses a different set of variables, it is possible to keep track of which variables seem to be the most consistently influential. This is captured by the notion of [*importance*](https://en.wikipedia.org/w/index.php?search=importance). While—unlike p\-values in a regression model—there is no formal statistical inference here, importance plays an analogous role in that it may help to generate hypotheses. Here, we see that `capital_gain` and `age` seem to be influential, while `race` and `sex` do not.
```
randomForest::importance(mod_forest$fit) %>%
as_tibble(rownames = "variable") %>%
arrange(desc(MeanDecreaseGini))
```
```
# A tibble: 11 × 2
variable MeanDecreaseGini
<chr> <dbl>
1 capital_gain 1178.
2 age 1108.
3 relationship 1009.
4 education 780.
5 marital_status 671.
6 hours_per_week 667.
7 occupation 625.
8 capital_loss 394.
9 workclass 311.
10 race 131.
11 sex 99.3
```
The results are put into a `tibble` (simple data frame) to facilitate further wrangling.
A model object of class `randomForest` also has a `predict()` method for making new predictions.
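For example, the sketch below (not from the text) asks the fitted forest for class probabilities on a few training rows; the same call with `new_data` set to a held\-out data set would produce out\-of\-sample predictions.
```
# Class probabilities from the fitted random forest for the first few
# training observations (the columns must match those used in `form`)
predict(mod_forest, new_data = head(train), type = "prob")
```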
#### 11\.1\.2\.1 Tuning parameters
Hastie, Tibshirani, and Friedman (2009\) recommend using \\(\\sqrt{p}\\) variables in each classification tree (and \\(p/3\\) for each regression tree), and this is the default behavior in **randomForest**.
However, this is a parameter that can be tuned for a particular application.
The number of trees is another parameter that can be tuned—we simply picked a reasonably large odd number.
### 11\.1\.3 Nearest neighbor
Thus far, we have focused on using data to build models that we can then use to predict outcomes on a new set of data. A slightly different approach is offered by [*lazy learners*](https://en.wikipedia.org/w/index.php?search=lazy%20learners), which seek to predict outcomes without constructing a “model.” A very simple, yet widely\-used approach is [*\\(k\\)\-nearest neighbor*](https://en.wikipedia.org/w/index.php?search=$k$-nearest%20neighbor).
Recall that data with \\(p\\) attributes (explanatory variables) are manifest as points in a \\(p\\)\-dimensional space.
The [*Euclidean distance*](https://en.wikipedia.org/w/index.php?search=Euclidean%20distance) between any two points in that space can be easily calculated in the usual way as the square root of the sum of the squared deviations.
Thus, it makes sense to talk about the *distance* between two points in this \\(p\\)\-dimensional space, and as a result, it makes sense to talk about the distance between two observations (rows of the data frame).
Nearest\-neighbor classifiers exploit this property by assuming that observations that are “close” to each other probably have similar outcomes.
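As a small illustration (not from the text), the Euclidean distance between two hypothetical observations with \\(p \= 3\\) attributes can be computed by hand or with the base **R** `dist()` function.
```
# Two hypothetical observations: age, years of education, hours per week
a <- c(39, 13, 40)
b <- c(31, 10, 40)
sqrt(sum((a - b)^2))  # square root of the sum of squared deviations
dist(rbind(a, b))     # same value via dist()
```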
Suppose we have a set of training data \\((\\mathbf{X}, y) \\in \\mathbb{R}^{n \\times p} \\times \\mathbb{R}^n\\). For some positive integer \\(k\\), a \\(k\\)\-nearest neighbor algorithm classifies a new observation \\(x^\*\\) by:
* Finding the \\(k\\) observations in the training data \\(\\mathbf{X}\\) that are closest to \\(x^\*\\), according to some distance metric (usually Euclidean). Let \\(D(x^\*) \\subseteq (\\mathbf{X}, y)\\) denote this set of observations.
* For some aggregate function \\(f\\), computing \\(f(y)\\) for the \\(k\\) values of \\(y\\) in \\(D(x^\*)\\) and assigning this value (\\(y^\*\\)) as the predicted value of the response associated with \\(x^\*\\). The logic is that since \\(x^\*\\) is similar to the \\(k\\) observations in \\(D(x^\*)\\), the response associated with \\(x^\*\\) is likely to be similar to the responses in \\(D(x^\*)\\). In practice, simply taking the value shared by the majority (or a plurality) of the \\(y\\)’s is enough.
Note that a \\(k\\)\-NN classifier does not need to process the training data before making new classifications—it can do this on the fly. A \\(k\\)\-NN classifier is provided by the `kknn()` function in the **kknn** package.
Note that since the distance metric only makes sense for quantitative variables, we have to restrict our data set to those first.
Setting the `scale` to `TRUE` rescales the explanatory variables to have the same [*standard deviation*](https://en.wikipedia.org/w/index.php?search=standard%20deviation).
We choose \\(k\=5\\) neighbors for reasons that we explain in the next section.
```
library(kknn)
# distance metric only works with quantitative variables
train_q <- train %>%
select(income, where(is.numeric), -fnlwgt)
mod_knn <- nearest_neighbor(neighbors = 5, mode = "classification") %>%
set_engine("kknn", scale = TRUE) %>%
fit(income ~ ., data = train_q)
pred <- pred %>%
bind_cols(
predict(mod_knn, new_data = train, type = "class")
) %>%
rename(income_knn = .pred_class)
pred %>%
conf_mat(income, income_knn)
```
```
Truth
Prediction <=50K >50K
<=50K 18088 2321
>50K 1675 3964
```
```
pred %>%
accuracy(income, income_knn)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.847
```
\\(k\\)\-NN classifiers are widely used in part because they are easy to understand and code. They also don’t require any pre\-processing time. However, predictions can be slow, since the data must be processed at that time.
The usefulness of \\(k\\)\-NN can depend importantly on the geometry of the data. Are the points clustered together? What is the distribution of the distances among each variable? A wider scale on one variable can dwarf a narrow scale on another variable.
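The sketch below (not from the text) illustrates this with two hypothetical observations: on the raw scale, a modest difference in `capital_gain` (measured in dollars) completely dominates the difference in `age`, which is why we set `scale = TRUE` above.
```
a <- c(age = 39, capital_gain = 0)
b <- c(age = 31, capital_gain = 5000)
sqrt(sum((a - b)^2))  # essentially 5000: driven almost entirely by capital_gain
```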
#### 11\.1\.3\.1 Tuning parameters
An appropriate choice of \\(k\\) will depend on the application and the data.
Cross\-validation can be used to optimize the choice of \\(k\\).
Here, we compute the accuracy for several values of \\(k\\).
```
knn_fit <- function(.data, k) {
nearest_neighbor(neighbors = k, mode = "classification") %>%
set_engine("kknn", scale = TRUE) %>%
fit(income ~ ., data = .data)
}
knn_accuracy <- function(mod, .new_data) {
mod %>%
predict(new_data = .new_data) %>%
mutate(income = .new_data$income) %>%
accuracy(income, .pred_class) %>%
pull(.estimate)
}
```
```
ks <- c(1:10, 15, 20, 30, 40, 50)
```
```
knn_tune <- tibble(
k = ks,
mod = map(k, knn_fit, .data = train_q),
train_accuracy = map_dbl(mod, knn_accuracy, .new_data = train_q)
)
knn_tune
```
```
# A tibble: 5 × 3
k mod train_accuracy
<dbl> <list> <dbl>
1 1 <fit[+]> 0.839
2 5 <fit[+]> 0.847
3 10 <fit[+]> 0.848
4 20 <fit[+]> 0.843
5 40 <fit[+]> 0.839
```
In Figure [11\.5](ch-learningI.html#fig:cval), we show how the accuracy changes with \\(k\\): it peaks at small values of \\(k\\) and then declines as \\(k\\) grows.
That is, if one seeks to maximize the accuracy rate *on this data set*, then a small value of \\(k\\) such as the \\(k\=5\\) used above is a sensible choice.[19](#fn19) We will see why this method of optimizing the value of the parameter \\(k\\) is not robust when we learn about [*cross\-validation*](https://en.wikipedia.org/w/index.php?search=cross-validation) below.
```
ggplot(data = knn_tune, aes(x = k, y = train_accuracy)) +
geom_point() +
geom_line() +
ylab("Accuracy rate")
```
Figure 11\.5: Performance of nearest\-neighbor classifier for different choices of \\(k\\) on census training data.
### 11\.1\.4 Naïve Bayes
Another relatively simple classifier is based on Bayes’ theorem.
Bayes’ theorem is a very useful result from probability that allows conditional probabilities to be calculated from other conditional probabilities. It states:
\\\[
\\Pr(y\|x) \= \\frac{\\Pr(x, y)}{\\Pr(x)} \= \\frac{\\Pr(x\|y) \\Pr(y)}{\\Pr(x)} \\,.
\\]
How does this relate to a naïve Bayes classifier?
Suppose that we have a binary response variable \\(y\\) and we want to classify a new observation \\(x^\*\\) (recall that \\(x\\) is a vector). Then if we can compute that the conditional probability \\(\\Pr(y \= 1 \| x^\*) \> \\Pr(y\=0 \| x^\*)\\), we have evidence that \\(y\=1\\) is a more likely outcome for \\(x^\*\\) than \\(y\=0\\). This is the crux of a naïve Bayes classifier.
In practice, the estimates of \\(\\Pr(y\=1\|x^\*)\\) are based on Bayes’ theorem and on conditional probabilities estimated from the training data \\((\\mathbf{X}, y)\\).
Consider the first person in the training data set. This is a 31\-year\-old married white male with some college education, working in a clerical role in the private sector. In reality, this person made less than $50,000\.
```
train %>%
as.data.frame() %>%
head(1)
```
```
age workclass fnlwgt education education_1 marital_status
1 31 Private 291052 Some-college 10 Married-civ-spouse
occupation relationship race sex capital_gain capital_loss
1 Adm-clerical Husband White Male 0 2051
hours_per_week native_country income
1 40 United-States <=50K
```
The naïve Bayes classifier would make a prediction for this person based on the probabilities observed in the data. For example, in this case the probability \\(\\Pr(\\text{male} \| \\text{\> 50k})\\) of being male given that you had high income is 0\.845, while the unconditional probability of being male is \\(\\Pr(\\text{male}) \= 0\.670\\). We know that the overall probability of having high income is \\(\\Pr(\\text{\> 50k}) \=\\) 0\.241\. Bayes’s rule tells us that the resulting probability of having high income given that one is male is:
\\\[
\\Pr(\\text{\> 50k} \| \\text{male}) \= \\frac{\\Pr(\\text{male} \| \\text{\> 50k}) \\cdot \\Pr(\\text{\> 50k})}{\\Pr(\\text{male})} \= \\frac{0\.845 \\cdot 0\.241}{0\.670} \\approx 0\.304 \\,.
\\\]
This simple example illustrates the case where we have a single explanatory variable (e.g., `sex`), but the naïve Bayes model extends to multiple variables by making the sometimes overly simplistic assumption that the explanatory variables are conditionally independent (hence the name “naïve”).
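The arithmetic above is easy to reproduce; the sketch below (not from the text) plugs the rounded probabilities quoted in the text into Bayes’ rule.
```
p_male_given_rich <- 0.845           # Pr(male | >50k)
p_rich <- 0.241                      # Pr(>50k)
p_male <- 0.670                      # Pr(male)
p_male_given_rich * p_rich / p_male  # Pr(>50k | male), about 0.30
```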
A naïve Bayes classifier is provided in **R** by the `naive_Bayes()` function from the **discrim** package. Note that like `lm()` and `glm()`, a `naive_Bayes()` object has a `predict()` method.
```
library(discrim)
mod_nb <- naive_Bayes(mode = "classification") %>%
set_engine("klaR") %>%
fit(form, data = train)
pred <- pred %>%
bind_cols(
predict(mod_nb, new_data = train, type = "class")
) %>%
rename(income_nb = .pred_class)
accuracy(pred, income, income_nb)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.824
```
### 11\.1\.5 Artificial neural networks
An [*artificial neural network*](https://en.wikipedia.org/w/index.php?search=artificial%20neural%20network) is yet another classifier. While the impetus for the artificial neural network comes from a biological understanding of the brain, the implementation here is entirely mathematical.
```
mod_nn <- mlp(mode = "classification", hidden_units = 5) %>%
set_engine("nnet") %>%
fit(form, data = train)
```
A neural network is a directed graph (see Chapter [20](ch-netsci.html#ch:netsci)) that proceeds in stages. First, there is one node for each input variable. In this case, because each factor level counts as its own variable, there are 57 input variables.
These are shown on the left in Figure [11\.6](ch-learningI.html#fig:plot-nnet). Next, there are a series of nodes specified as a [*hidden layer*](https://en.wikipedia.org/w/index.php?search=hidden%20layer).
In this case, we have specified five nodes for the hidden layer.
These are shown in the middle of Figure [11\.6](ch-learningI.html#fig:plot-nnet), and each of the input variables are connected to these hidden nodes. Each of the hidden nodes is connected to the single output variable. In addition, `nnet()` adds two control nodes, the first of which is connected to the five hidden nodes, and the latter is connected to the output node. The total number of edges is thus \\(pk \+ k \+ k \+ 1\\), where \\(k\\) is the number of hidden nodes. In this case, there are \\(57 \\cdot 5 \+ 5 \+ 5 \+ 1 \= 296\\) edges.
Figure 11\.6: Visualization of an artificial neural network. The 57 input variables are shown on the left, with the five hidden nodes in the middle, and the single output variable on the right.
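As a quick check (not from the text), the edge count just described can be reproduced directly:
```
p <- 57             # input variables (one per factor level plus the quantitative inputs)
k <- 5              # hidden nodes
p * k + k + k + 1   # 296 edges, including the two bias ("control") nodes
```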
The algorithm iteratively searches for the optimal set of weights for each edge. Once the weights are computed, the neural network can make predictions for new inputs by running these values through the network.
```
pred <- pred %>%
bind_cols(
predict(mod_nn, new_data = train, type = "class")
) %>%
rename(income_nn = .pred_class)
accuracy(pred, income, income_nn)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.844
```
### 11\.1\.6 Ensemble methods
The benefit of having multiple classifiers is that they can be easily combined into a single classifier. Note that there is a real probabilistic benefit to having multiple prediction systems, especially if they are independent. For example, if you have three independent classifiers with error rates \\(\\epsilon\_1, \\epsilon\_2\\), and \\(\\epsilon\_3\\), then the probability that all three are wrong is \\(\\prod\_{i\=1}^3 \\epsilon\_i\\). Since \\(\\epsilon\_i \< 1\\) for all \\(i\\), this probability is lower than any of the individual error rates. Moreover, the probability that at least one of the classifiers is correct is \\(1 \- \\prod\_{i\=1}^3 \\epsilon\_i\\), which will get closer to 1 as you add more classifiers—even if you have not improved the individual error rates!
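For instance, the sketch below (not from the text) works through this arithmetic for three hypothetical independent classifiers that each err 20% of the time.
```
eps <- c(0.2, 0.2, 0.2)
prod(eps)       # probability that all three are wrong: 0.008
1 - prod(eps)   # probability that at least one is correct: 0.992
```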
Consider combining the five classifiers that we have built previously. Suppose that we build an ensemble classifier by taking the majority vote from each. Does this ensemble classifier outperform any of the individual classifiers? We can use the `rowwise()` and `c_across()` functions to easily compute these values.
```
pred <- pred %>%
rowwise() %>%
mutate(
rich_votes = sum(c_across(contains("income_")) == ">50K"),
income_ensemble = factor(ifelse(rich_votes >= 3, ">50K", "<=50K"))
) %>%
ungroup()
pred %>%
select(-rich_votes) %>%
pivot_longer(
cols = -income,
names_to = "model",
values_to = "prediction"
) %>%
group_by(model) %>%
summarize(accuracy = accuracy_vec(income, prediction)) %>%
arrange(desc(accuracy))
```
```
# A tibble: 6 × 2
model accuracy
<chr> <dbl>
1 income_rf 0.927
2 income_ensemble 0.884
3 income_knn 0.847
4 income_dtree 0.845
5 income_nn 0.844
6 income_nb 0.824
```
In this case, the ensemble model achieves an 88\.4% accuracy rate, which is lower than the random forest (92\.7%) but higher than each of the other individual classifiers.
Thus, ensemble methods are a simple but effective way of hedging your bets.
### 11\.1\.1 Decision trees
A decision tree (also known as a classification and regression tree[16](#fn16) or “CART”) is a tree\-like flowchart that assigns class labels to individual observations.
Each branch of the tree separates the records in the data set into increasingly “pure” (i.e., homogeneous) subsets, in the sense that they are more likely to share the same class label.
How do we construct these trees?
First, note that the number of possible decision trees grows exponentially with respect to the number of variables \\(p\\).
In fact, it has been proven that an efficient algorithm to determine the optimal decision tree almost certainly does not exist (Hyafil and Rivest 1976\).[17](#fn17)
The lack of a globally optimal algorithm means that there are several competing heuristics for building decision trees that employ greedy (i.e., locally optimal) strategies.
While the differences among these algorithms can mean that they will return different results (even on the same data set), we will simplify our presentation by restricting our discussion to [*recursive partitioning*](https://en.wikipedia.org/w/index.php?search=recursive%20partitioning) decision trees.
One **R** package that builds these decision trees is called **rpart**, which works in conjunction with **tidymodels**.
The partitioning in a decision tree follows [*Hunt’s algorithm*](https://en.wikipedia.org/w/index.php?search=Hunt's%20algorithm), which is itself recursive.
Suppose that we are somewhere in the decision tree, and that \\(D\_t \= (y\_t, \\mathbf{X}\_t)\\) is the set of records that are associated with node \\(t\\) and that \\(\\{y\_1, y\_2\\}\\) are the available class labels for the response variable.[18](#fn18) Then:
* If all records in \\(D\_t\\) belong to a single class, say, \\(y\_1\\), then \\(t\\) is a leaf node labeled as \\(y\_1\\).
* Otherwise, split the records into at least two child nodes, in such a way that the [*purity*](https://en.wikipedia.org/w/index.php?search=purity) of the new set of nodes exceeds some threshold. That is, the records are separated more distinctly into groups corresponding to the response class. In practice, there are several competitive methods for optimizing the purity of the candidate child nodes, and—as noted above—we don’t know the optimal way of doing this.
A decision tree works by running Hunt’s algorithm on the full training data set.
What does it mean to say that a set of records is “purer” than another set? Two popular methods for measuring the purity of a set of candidate child nodes are the [*Gini coefficient*](https://en.wikipedia.org/w/index.php?search=Gini%20coefficient) and the [*information*](https://en.wikipedia.org/w/index.php?search=information) gain. Both are implemented in **rpart**, which uses the Gini measurement by default. If \\(w\_i(t)\\) is the fraction of records belonging to class \\(i\\) at node \\(t\\), then
\\\[
Gini(t) \= 1 \- \\sum\_{i\=1}^{2} (w\_i(t))^2 \\, , \\qquad Entropy(t) \= \- \\sum\_{i\=1}^2 w\_i(t) \\cdot \\log\_2 w\_i(t)
\\]
The information gain is the change in entropy. The following example should help to clarify how this works in practice.
```
mod_dtree <- decision_tree(mode = "classification") %>%
set_engine("rpart") %>%
fit(income ~ capital_gain, data = train)
split_val <- mod_dtree$fit$splits %>%
as_tibble() %>%
pull(index)
```
Let’s consider the optimal split for `income` using only the variable `capital_gain`, which measures the amount each person paid in capital gains taxes. According to our tree, the optimal split occurs for those paying more than $5,119 in capital gains.
```
mod_dtree
```
```
parsnip model object
Fit time: 86ms
n= 26048
node), split, n, loss, yval, (yprob)
* denotes terminal node
1) root 26048 6280 <=50K (0.759 0.241)
2) capital_gain< 5.12e+03 24785 5090 <=50K (0.795 0.205) *
3) capital_gain>=5.12e+03 1263 67 >50K (0.053 0.947) *
```
Although nearly 79% of those who paid less than $5,119 in capital gains tax made less than $50k, about 95% of those who paid more than $5,119 in capital gains tax made *more* than $50k. Thus, splitting (partitioning) the records according to this criterion helps to divide them into relatively purer subsets. We can see this distinction geometrically as we divide the training records in Figure [11\.1](ch-learningI.html#fig:census-rpart).
```
train_plus <- train %>%
mutate(hi_cap_gains = capital_gain >= split_val)
ggplot(data = train_plus, aes(x = capital_gain, y = income)) +
geom_count(
aes(color = hi_cap_gains),
position = position_jitter(width = 0, height = 0.1),
alpha = 0.5
) +
geom_vline(xintercept = split_val, color = "dodgerblue", lty = 2) +
scale_x_log10(labels = scales::dollar)
```
Figure 11\.1: A single partition of the `census` data set using the capital gain variable to determine the split. Color and the vertical line at $5,119 in capital gains tax indicate the split. If one paid more than this amount, one almost certainly made more than $50,000 in income. On the other hand, if one paid less than this amount in capital gains, one almost certainly made less than $50,000\.
Comparing Figure [11\.1](ch-learningI.html#fig:census-rpart) to Figure [10\.1](ch-modeling.html#fig:log-cap-gains) reveals how the non\-parametric decision tree models differs geometrically from the parametric logistic regression model. In this case, the perfectly vertical split achieved by the decision tree is a mathematical impossibility for the logistic regression model.
Thus, this decision tree uses a single variable (`capital_gain`) to partition the data set into two parts: those who paid more than $5,119 in capital gains, and those who did not. For the former—who make up 0\.952 of all observations—we get 79\.5% right by predicting that they made less than $50k. For the latter, we get 94\.7% right by predicting that they made more than $50k. Thus, our overall accuracy jumps to 80\.2%, easily besting the 75\.9% in the null model. Note that this performance is comparable to the performance of the single variable logistic regression model from Chapter [10](ch-modeling.html#ch:modeling).
How did the algorithm know to pick $5,119 as the threshold value? It tried all of the sensible values, and this was the one that lowered the [*Gini coefficient*](https://en.wikipedia.org/w/index.php?search=Gini%20coefficient) the most. This can be done efficiently, since thresholds will always be between actual values of the splitting variable, and thus there are only \\(O(n)\\) possible splits to consider.
(We use [*Big O notation*](https://en.wikipedia.org/w/index.php?search=Big%20O%20notation) to denote the complexity of an algorithm, where \\(O(n)\\) means that the number of calculations scales with the sample size.)
So far, we have only used one variable, but we can build a decision tree for `income` in terms of all of the other variables in the data set. (We have left out `native_country` because it is a categorical variable with many levels, which can make some learning models computationally infeasible.)
```
form <- as.formula(
"income ~ age + workclass + education + marital_status +
occupation + relationship + race + sex +
capital_gain + capital_loss + hours_per_week"
)
```
```
mod_tree <- decision_tree(mode = "classification") %>%
set_engine("rpart") %>%
fit(form, data = train)
mod_tree
```
```
parsnip model object
Fit time: 1.2s
n= 26048
node), split, n, loss, yval, (yprob)
* denotes terminal node
1) root 26048 6280 <=50K (0.7587 0.2413)
2) relationship=Not-in-family,Other-relative,Own-child,Unmarried 14231 941 <=50K (0.9339 0.0661)
4) capital_gain< 7.07e+03 13975 696 <=50K (0.9502 0.0498) *
5) capital_gain>=7.07e+03 256 11 >50K (0.0430 0.9570) *
3) relationship=Husband,Wife 11817 5340 <=50K (0.5478 0.4522)
6) education=10th,11th,12th,1st-4th,5th-6th,7th-8th,9th,Assoc-acdm,Assoc-voc,HS-grad,Preschool,Some-college 8294 2770 <=50K (0.6655 0.3345)
12) capital_gain< 5.1e+03 7875 2360 <=50K (0.6998 0.3002) *
13) capital_gain>=5.1e+03 419 9 >50K (0.0215 0.9785) *
7) education=Bachelors,Doctorate,Masters,Prof-school 3523 953 >50K (0.2705 0.7295) *
```
In this more complicated tree, the optimal first split now does not involve `capital_gain`, but rather `relationship`.
A plot (shown in Figure [11\.2](ch-learningI.html#fig:maptree)) that is more informative is available through the **partykit** package, which contains a series of functions for working with decision trees.
```
library(rpart)
library(partykit)
plot(as.party(mod_tree$fit))
```
Figure 11\.2: Decision tree for income using the `census` data.
Figure [11\.2](ch-learningI.html#fig:maptree) shows the decision tree itself, while Figure [11\.3](ch-learningI.html#fig:census-rpart2) shows how the tree recursively partitions the original data. Here, the first question is whether `relationship` status is `Husband` or `Wife`. If not, then a capital gains threshold of $7,073\.50 is used to determine one’s income. 95\.7% of those who paid more than the threshold earned more than $50k, but 95% of those who paid less than the threshold did not. For those whose `relationship` status was `Husband` or `Wife`, the next question was whether you had a college degree. If so, then the model predicts with 72\.9% accuracy that you made more than $50k. If not, then again we ask about capital gains tax paid, but this time the threshold is $5,095\.50\. 97\.9% of those who were neither a husband nor a wife, and had no college degree, but paid more than that amount in capital gains tax, made more than $50k. On the other hand, 70% of those who paid below the threshold made less than $50k.
```
train_plus <- train_plus %>%
mutate(
husband_or_wife = relationship %in% c("Husband", "Wife"),
college_degree = husband_or_wife & education %in%
c("Bachelors", "Doctorate", "Masters", "Prof-school")
) %>%
bind_cols(
predict(mod_tree, new_data = train, type = "class")
) %>%
rename(income_dtree = .pred_class)
cg_splits <- tribble(
~husband_or_wife, ~vals,
TRUE, 5095.5,
FALSE, 7073.5
)
```
```
ggplot(data = train_plus, aes(x = capital_gain, y = income)) +
geom_count(
aes(color = income_dtree, shape = college_degree),
position = position_jitter(width = 0, height = 0.1),
alpha = 0.5
) +
facet_wrap(~ husband_or_wife) +
geom_vline(
data = cg_splits, aes(xintercept = vals),
color = "dodgerblue", lty = 2
) +
scale_x_log10()
```
Figure 11\.3: Graphical depiction of the full recursive partitioning decision tree classifier. On the left, those whose relationship status is neither ‘Husband’ nor ‘Wife’ are classified based on their capital gains paid. On the right, not only is the capital gains threshold different, but the decision is also predicated on whether the person has a college degree.
Since there are exponentially many trees, how did the algorithm know to pick this one? The [*complexity parameter*](https://en.wikipedia.org/w/index.php?search=complexity%20parameter) controls whether to keep or prune possible splits. That is, the algorithm considers many possible splits (i.e., new branches on the tree), but prunes them if they do not sufficiently improve the predictive power of the model (i.e., bear fruit). By default, each split has to decrease the error by a factor of 1%. This will help to avoid *overfitting* (more on that later).
Note that as we add more splits to our model, the relative error decreases.
```
printcp(mod_tree$fit)
```
```
Classification tree:
`rpart::rpart`(data = train)
Variables actually used in tree construction:
[1] capital_gain education relationship
Root node error: 6285/26048 = 0.241
n= 26048
CP nsplit rel error xerror xstd
1 0.1286 0 1.000 1.000 0.01099
2 0.0638 2 0.743 0.743 0.00985
3 0.0372 3 0.679 0.679 0.00950
4 0.0100 4 0.642 0.642 0.00929
```
We can also use the model evaluation metrics we developed in Chapter [10](ch-modeling.html#ch:modeling). Namely, the confusion matrix and the accuracy.
```
library(yardstick)
pred <- train %>%
select(income) %>%
bind_cols(
predict(mod_tree, new_data = train, type = "class")
) %>%
rename(income_dtree = .pred_class)
confusion <- pred %>%
conf_mat(truth = income, estimate = income_dtree)
confusion
```
```
Truth
Prediction <=50K >50K
<=50K 18790 3060
>50K 973 3225
```
```
accuracy(pred, income, income_dtree)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.845
```
In this case, the accuracy of the decision tree classifier is now 84\.5%, a considerable improvement over the null model. Again, this is comparable to the analogous logistic regression model we build using this same set of variables in Chapter [10](ch-modeling.html#ch:modeling).
Figure [11\.4](ch-learningI.html#fig:autoplot-confusion) displays the confusion matrix for this model.
```
autoplot(confusion) +
geom_label(
aes(
x = (xmax + xmin) / 2,
y = (ymax + ymin) / 2,
label = c("TN", "FP", "FN", "TP")
)
)
```
Figure 11\.4: Visual summary of the predictive accuracy of our decision tree model. The largest rectangle represents the cases that are true negatives.
#### 11\.1\.1\.1 Tuning parameters
The decision tree that we built previously was based on the default parameters. Most notably, our tree was pruned so that only splits that decreased the overall lack of fit by 1% were retained. If we lower this threshold to 0\.2%, then we get a more complex tree.
```
mod_tree2 <- decision_tree(mode = "classification") %>%
set_engine("rpart", control = rpart.control(cp = 0.002)) %>%
fit(form, data = train)
```
Can you find the accuracy of this more complex tree. Is it more or less accurate than our original tree?
#### 11\.1\.1\.1 Tuning parameters
The decision tree that we built previously was based on the default parameters. Most notably, our tree was pruned so that only splits that decreased the overall lack of fit by 1% were retained. If we lower this threshold to 0\.2%, then we get a more complex tree.
```
mod_tree2 <- decision_tree(mode = "classification") %>%
set_engine("rpart", control = rpart.control(cp = 0.002)) %>%
fit(form, data = train)
```
Can you find the accuracy of this more complex tree. Is it more or less accurate than our original tree?
### 11\.1\.2 Random forests
A natural extension of a decision tree is a [*random forest*](https://en.wikipedia.org/w/index.php?search=random%20forest). A random forest is collection of decision trees that are aggregated by majority rule. In a sense, a random forest is like a collection of bootstrapped (see Chapter [9](ch-foundations.html#ch:foundations)) decision trees. A random forest is constructed by:
* Choosing the number of decision trees to grow (controlled by the `trees` argument) and the number of variables to consider in each tree (`mtry`)
* Randomly selecting the rows of the data frame [*with replacement*](https://en.wikipedia.org/w/index.php?search=with%20replacement)
* Randomly selecting `mtry` variables from the data frame
* Building a decision tree on the resulting data set
* Repeating this procedure `trees` times
A prediction for a new observation is made by taking the majority rule from all of the decision trees in the forest.
Random forests are available in **R** via the **randomForest** package. They can be very effective but are sometimes computationally expensive.
```
mod_forest <- rand_forest(
mode = "classification",
mtry = 3,
trees = 201
) %>%
set_engine("randomForest") %>%
fit(form, data = train)
pred <- pred %>%
bind_cols(
predict(mod_forest, new_data = train, type = "class")
) %>%
rename(income_rf = .pred_class)
pred %>%
conf_mat(income, income_rf)
```
```
Truth
Prediction <=50K >50K
<=50K 19199 1325
>50K 564 4960
```
```
pred %>%
accuracy(income, income_rf)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.927
```
Because each tree in a random forest uses a different set of variables, it is possible to keep track of which variables seem to be the most consistently influential. This is captured by the notion of [*importance*](https://en.wikipedia.org/w/index.php?search=importance). While—unlike p\-values in a regression model—there is no formal statistical inference here, importance plays an analogous role in that it may help to generate hypotheses. Here, we see that `capital_gain` and `age` seem to be influential, while `race` and `sex` do not.
```
randomForest::importance(mod_forest$fit) %>%
as_tibble(rownames = "variable") %>%
arrange(desc(MeanDecreaseGini))
```
```
# A tibble: 11 × 2
variable MeanDecreaseGini
<chr> <dbl>
1 capital_gain 1178.
2 age 1108.
3 relationship 1009.
4 education 780.
5 marital_status 671.
6 hours_per_week 667.
7 occupation 625.
8 capital_loss 394.
9 workclass 311.
10 race 131.
11 sex 99.3
```
The results are put into a `tibble` (simple data frame) to facilitate further wrangling.
A model object of class `randomForest` also has a `predict()` method for making new predictions.
#### 11\.1\.2\.1 Tuning parameters
Hastie, Tibshirani, and Friedman (2009\) recommend using \\(\\sqrt{p}\\) variables in each classification tree (and \\(p/3\\) for each regression tree), and this is the default behavior in **randomForest**.
However, this is a parameter that can be tuned for a particular application.
The number of trees is another parameter that can be tuned—we simply picked a reasonably large odd number.
#### 11\.1\.2\.1 Tuning parameters
Hastie, Tibshirani, and Friedman (2009\) recommend using \\(\\sqrt{p}\\) variables in each classification tree (and \\(p/3\\) for each regression tree), and this is the default behavior in **randomForest**.
However, this is a parameter that can be tuned for a particular application.
The number of trees is another parameter that can be tuned—we simply picked a reasonably large odd number.
### 11\.1\.3 Nearest neighbor
Thus far, we have focused on using data to build models that we can then use to predict outcomes on a new set of data. A slightly different approach is offered by [*lazy learners*](https://en.wikipedia.org/w/index.php?search=lazy%20learners), which seek to predict outcomes without constructing a “model.” A very simple, yet widely\-used approach is [*\\(k\\)\-nearest neighbor*](https://en.wikipedia.org/w/index.php?search=$k$-nearest%20neighbor).
Recall that data with \\(p\\) attributes (explanatory variables) are manifest as points in a \\(p\\)\-dimensional space.
The [*Euclidean distance*](https://en.wikipedia.org/w/index.php?search=Euclidean%20distance) between any two points in that space can be easily calculated in the usual way as the square root of the sum of the squared deviations.
Thus, it makes sense to talk about the *distance* between two points in this \\(p\\)\-dimensional space, and as a result, it makes sense to talk about the distance between two observations (rows of the data frame).
Nearest\-neighbor classifiers exploit this property by assuming that observations that are “close” to each other probably have similar outcomes.
Suppose we have a set of training data \\((\\mathbf{X}, y) \\in \\mathbb{R}^{n \\times p} \\times \\mathbb{R}^n\\). For some positive integer \\(k\\), a \\(k\\)\-nearest neighbor algorithm classifies a new observation \\(x^\*\\) by:
* Finding the \\(k\\) observations in the training data \\(\\mathbf{X}\\) that are closest to \\(x^\*\\), according to some distance metric (usually Euclidean). Let \\(D(x^\*) \\subseteq (\\mathbf{X}, y)\\) denote this set of observations.
* For some aggregate function \\(f\\), computing \\(f(y)\\) for the \\(k\\) values of \\(y\\) in \\(D(x^\*)\\) and assigning this value (\\(y^\*\\)) as the predicted value of the response associated with \\(x^\*\\). The logic is that since \\(x^\*\\) is similar to the \\(k\\) observations in \\(D(x^\*)\\), the response associated with \\(x^\*\\) is likely to be similar to the responses in \\(D(x^\*)\\). In practice, simply taking the value shared by the majority (or a plurality) of the \\(y\\)’s is enough.
Note that a \\(k\\)\-NN classifier does not need to process the training data before making new classifications—it can do this on the fly. A \\(k\\)\-NN classifier is provided by the `kknn()` function in the **kknn** package.
Note that since the distance metric only makes sense for quantitative variables, we have to restrict our data set to those first.
Setting the `scale` to `TRUE` rescales the explanatory variables to have the same [*standard deviation*](https://en.wikipedia.org/w/index.php?search=standard%20deviation).
We choose \\(k\=5\\) neighbors for reasons that we explain in the next section.
```
library(kknn)
# distance metric only works with quantitative variables
train_q <- train %>%
select(income, where(is.numeric), -fnlwgt)
mod_knn <- nearest_neighbor(neighbors = 5, mode = "classification") %>%
set_engine("kknn", scale = TRUE) %>%
fit(income ~ ., data = train_q)
pred <- pred %>%
bind_cols(
predict(mod_knn, new_data = train, type = "class")
) %>%
rename(income_knn = .pred_class)
pred %>%
conf_mat(income, income_knn)
```
```
Truth
Prediction <=50K >50K
<=50K 18088 2321
>50K 1675 3964
```
```
pred %>%
accuracy(income, income_knn)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.847
```
\\(k\\)\-NN classifiers are widely used in part because they are easy to understand and code. They also don’t require any pre\-processing time. However, predictions can be slow, since the data must be processed at that time.
The usefulness of \\(k\\)\-NN can depend importantly on the geometry of the data. Are the points clustered together? What is the distribution of the distances among each variable? A wider scale on one variable can dwarf a narrow scale on another variable.
#### 11\.1\.3\.1 Tuning parameters
An appropriate choice of \\(k\\) will depend on the application and the data.
Cross\-validation can be used to optimize the choice of \\(k\\).
Here, we compute the accuracy for several values of \\(k\\).
```
knn_fit <- function(.data, k) {
nearest_neighbor(neighbors = k, mode = "classification") %>%
set_engine("kknn", scale = TRUE) %>%
fit(income ~ ., data = .data)
}
knn_accuracy <- function(mod, .new_data) {
mod %>%
predict(new_data = .new_data) %>%
mutate(income = .new_data$income) %>%
accuracy(income, .pred_class) %>%
pull(.estimate)
}
```
```
ks <- c(1:10, 15, 20, 30, 40, 50)
```
```
knn_tune <- tibble(
k = ks,
mod = map(k, knn_fit, .data = train_q),
train_accuracy = map_dbl(mod, knn_accuracy, .new_data = train_q)
)
knn_tune
```
```
# A tibble: 5 × 3
k mod train_accuracy
<dbl> <list> <dbl>
1 1 <fit[+]> 0.839
2 5 <fit[+]> 0.847
3 10 <fit[+]> 0.848
4 20 <fit[+]> 0.843
5 40 <fit[+]> 0.839
```
In Figure [11\.5](ch-learningI.html#fig:cval), we show how the accuracy decreases as \\(k\\) increases.
That is, if one seeks to maximize the accuracy rate *on this data set*, then the optimal value of \\(k\\) is 5\.[19](#fn19) We will see why this method of optimizing the value of the parameter \\(k\\) is not robust when we learn about [*cross\-validation*](https://en.wikipedia.org/w/index.php?search=cross-validation) below.
```
ggplot(data = knn_tune, aes(x = k, y = train_accuracy)) +
geom_point() +
geom_line() +
ylab("Accuracy rate")
```
Figure 11\.5: Performance of nearest\-neighbor classifier for different choices of \\(k\\) on census training data.
### 11\.1\.4 Naïve Bayes
Another relatively simple classifier is based on Bayes Theorem.
Bayes theorem is a very useful result from probability that allows conditional probabilities to be calculated from other conditional probabilities. It states:
\\\[
\\Pr(y\|x) \= \\frac{\\Pr(x, y)}{\\Pr(x)} \= \\frac{\\Pr(x\|y) \\Pr(y)}{\\Pr(x)} \\,.
\\]
How does this relate to a naïve Bayes classifier?
Suppose that we have a binary response variable \\(y\\) and we want to classify a new observation \\(x^\*\\) (recall that \\(x\\) is a vector). If we can show that the conditional probability \\(\\Pr(y \= 1 \| x^\*) \> \\Pr(y\=0 \| x^\*)\\), then we have evidence that \\(y\=1\\) is a more likely outcome for \\(x^\*\\) than \\(y\=0\\). This is the crux of a naïve Bayes classifier.
In practice, the estimate of \\(\\Pr(y\=1\|x^\*)\\) is based on Bayes theorem and on estimates of conditional probabilities derived from the training data \\((\\mathbf{X}, y)\\).
Consider the first person in the training data set, shown below. This is a 31\-year\-old white male with some college education, working in the private sector in a clerical role. In reality, this person made less than $50,000\.
```
train %>%
as.data.frame() %>%
head(1)
```
```
age workclass fnlwgt education education_1 marital_status
1 31 Private 291052 Some-college 10 Married-civ-spouse
occupation relationship race sex capital_gain capital_loss
1 Adm-clerical Husband White Male 0 2051
hours_per_week native_country income
1 40 United-States <=50K
```
The naïve Bayes classifier would make a prediction for this person based on the probabilities observed in the data. For example, in this case the probability \\(\\Pr(\\text{male} \| \\text{\> 50k})\\) of being male given that you had high income is 0\.845, while the unconditional probability of being male is \\(\\Pr(\\text{male}) \= 0\.670\\). We know that the overall probability of having high income is \\(\\Pr(\\text{\> 50k}) \=\\) 0\.241\. Bayes’s rule tells us that the resulting probability of having high income given that one is male is:
\\\[
\\Pr(\\text{\> 50k} \| \\text{male}) \= \\frac{\\Pr(\\text{male} \| \\text{\> 50k}) \\cdot \\Pr(\\text{\> 50k})}{\\Pr(\\text{male})} \= \\frac{0\.845 \\cdot 0\.241}{0\.670} \= 0\.304 \\,.
\\]
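As a sanity check on the arithmetic above, each ingredient of Bayes's rule can be estimated directly from the training data. The sketch below assumes the `train` data frame from earlier in the chapter, and the exact values will depend on the particular training split; the final column should match the hand calculation above up to rounding.

```
# a rough sketch: estimate each piece of Bayes's rule from the training data
train %>%
  summarize(
    p_rich = mean(income == ">50K"),
    p_male = mean(sex == "Male"),
    p_male_given_rich = mean(sex[income == ">50K"] == "Male")
  ) %>%
  mutate(p_rich_given_male = p_male_given_rich * p_rich / p_male)
```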
This simple example illustrates the case where we have a single explanatory variable (e.g., `sex`), but the naïve Bayes model extends to multiple variables by making the sometimes overly simplistic assumption that the explanatory variables are conditionally independent (hence the name “naïve”).
A naïve Bayes classifier is provided in **R** by the `naive_Bayes()` function from the **discrim** package. Note that like `lm()` and `glm()`, a `naive_Bayes()` object has a `predict()` method.
```
library(discrim)
mod_nb <- naive_Bayes(mode = "classification") %>%
set_engine("klaR") %>%
fit(form, data = train)
pred <- pred %>%
bind_cols(
predict(mod_nb, new_data = train, type = "class")
) %>%
rename(income_nb = .pred_class)
accuracy(pred, income, income_nb)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.824
```
### 11\.1\.5 Artificial neural networks
An [*artificial neural network*](https://en.wikipedia.org/w/index.php?search=artificial%20neural%20network) is yet another classifier. While the impetus for the artificial neural network comes from a biological understanding of the brain, the implementation here is entirely mathematical.
```
mod_nn <- mlp(mode = "classification", hidden_units = 5) %>%
set_engine("nnet") %>%
fit(form, data = train)
```
A neural network is a directed graph (see Chapter [20](ch-netsci.html#ch:netsci)) that proceeds in stages. First, there is one node for each input variable. In this case, because each factor level counts as its own variable, there are 57 input variables.
These are shown on the left in Figure [11\.6](ch-learningI.html#fig:plot-nnet). Next, there are a series of nodes specified as a [*hidden layer*](https://en.wikipedia.org/w/index.php?search=hidden%20layer).
In this case, we have specified five nodes for the hidden layer.
These are shown in the middle of Figure [11\.6](ch-learningI.html#fig:plot-nnet), and each of the input variables are connected to these hidden nodes. Each of the hidden nodes is connected to the single output variable. In addition, `nnet()` adds two control nodes, the first of which is connected to the five hidden nodes, and the latter is connected to the output node. The total number of edges is thus \\(pk \+ k \+ k \+ 1\\), where \\(k\\) is the number of hidden nodes. In this case, there are \\(57 \\cdot 5 \+ 5 \+ 5 \+ 1 \= 296\\) edges.
Figure 11\.6: Visualization of an artificial neural network. The 57 input variables are shown on the left, with the five hidden nodes in the middle, and the single output variable on the right.
The algorithm iteratively searches for the optimal set of weights for each edge. Once the weights are computed, the neural network can make predictions for new inputs by running these values through the network.
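As a quick check on the edge count above, the length of the fitted weight vector can be inspected. This sketch assumes that the underlying **nnet** object (stored in `mod_nn$fit`) keeps one weight per edge in its `wts` component.

```
# one weight per edge: expect p*k + k + k + 1 = 57*5 + 5 + 5 + 1 = 296 weights
length(mod_nn$fit$wts)
```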
```
pred <- pred %>%
bind_cols(
predict(mod_nn, new_data = train, type = "class")
) %>%
rename(income_nn = .pred_class)
accuracy(pred, income, income_nn)
```
```
# A tibble: 1 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 accuracy binary 0.844
```
### 11\.1\.6 Ensemble methods
The benefit of having multiple classifiers is that they can be easily combined into a single classifier. Note that there is a real probabilistic benefit to having multiple prediction systems, especially if they are independent. For example, if you have three independent classifiers with error rates \\(\\epsilon\_1, \\epsilon\_2\\), and \\(\\epsilon\_3\\), then the probability that all three are wrong is \\(\\prod\_{i\=1}^3 \\epsilon\_i\\). Since \\(\\epsilon\_i \< 1\\) for all \\(i\\), this probability is lower than any of the individual error rates. Moreover, the probability that at least one of the classifiers is correct is \\(1 \- \\prod\_{i\=1}^3 \\epsilon\_i\\), which will get closer to 1 as you add more classifiers—even if you have not improved the individual error rates!
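A quick numerical illustration of this point, using made-up error rates for three hypothetical independent classifiers:

```
# made-up error rates for three independent classifiers
eps <- c(0.15, 0.16, 0.18)
prod(eps)      # probability that all three are wrong (about 0.004)
1 - prod(eps)  # probability that at least one is correct (about 0.996)
```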
Consider combining the five classifiers that we have built previously. Suppose that we build an ensemble classifier by taking the majority vote from each. Does this ensemble classifier outperform any of the individual classifiers? We can use the `rowwise()` and `c_across()` functions to easily compute these values.
```
pred <- pred %>%
rowwise() %>%
mutate(
rich_votes = sum(c_across(contains("income_")) == ">50K"),
income_ensemble = factor(ifelse(rich_votes >= 3, ">50K", "<=50K"))
) %>%
ungroup()
pred %>%
select(-rich_votes) %>%
pivot_longer(
cols = -income,
names_to = "model",
values_to = "prediction"
) %>%
group_by(model) %>%
summarize(accuracy = accuracy_vec(income, prediction)) %>%
arrange(desc(accuracy))
```
```
# A tibble: 6 × 2
model accuracy
<chr> <dbl>
1 income_rf 0.927
2 income_ensemble 0.884
3 income_knn 0.847
4 income_dtree 0.845
5 income_nn 0.844
6 income_nb 0.824
```
In this case, the ensemble model achieves an 88\.4% accuracy rate on the training data, which is lower than the random forest but higher than each of the other individual classifiers.
Thus, ensemble methods are a simple but effective way of hedging your bets.
11\.2 Parameter tuning
----------------------
In Section [11\.1\.3](ch-learningI.html#sec:knn), we showed how, after a certain point, the accuracy rate *on the training data* of the \\(k\\)\-NN model decreased as \\(k\\) increased.
That is, as information from more neighbors—who are necessarily farther away from the target observation—was incorporated into the prediction for any given observation, those predictions got worse.
This is not surprising, since the target observation is itself in the training data set and necessarily has distance 0 from itself.
The error rate is not exactly zero even for \\(k\=1\\), most likely because many observations share exactly the same coordinates in this five\-dimensional space.
However, as seen in Figure [11\.7](ch-learningI.html#fig:knn-bias-var), the story is different when evaluating the \\(k\\)\-NN model *on the testing set*.
Here, the truth is *not* in the training set, and so pooling information across more observations leads to *better* predictions—at least for a while.
Again, this should not be surprising—we saw in Chapter [9](ch-foundations.html#ch:foundations) how means are less variable than individual observations.
Generally, one hopes to minimize the misclassification rate on data that the model has not seen (i.e., the testing data) without introducing too much bias.
In this case, that point occurs somewhere between \\(k\=5\\) and \\(k\=10\\).
We can see this in Figure [11\.7](ch-learningI.html#fig:knn-bias-var), since the accuracy on the testing data set improves rapidly up to \\(k\=5\\), but then very slowly for larger values of \\(k\\).
```
test_q <- test %>%
select(income, where(is.numeric), -fnlwgt)
knn_tune <- knn_tune %>%
mutate(test_accuracy = map_dbl(mod, knn_accuracy, .new_data = test_q))
knn_tune %>%
select(-mod) %>%
pivot_longer(-k, names_to = "type", values_to = "accuracy") %>%
ggplot(aes(x = k, y = accuracy, color = factor(type))) +
geom_point() +
geom_line() +
ylab("Accuracy") +
scale_color_discrete("Set")
```
Figure 11\.7: Performance of nearest\-neighbor classifier for different choices of \\(k\\) on census training and testing data.
11\.3 Example: Evaluation of income models redux
------------------------------------------------
Just as we did in Section [10\.3\.5](ch-modeling.html#sec:evaluate), we should evaluate these new models on both the training and testing sets.
First, we build the null model, which assigns the same probability of earning more than $50,000 to every person, regardless of the explanatory variables. (See Appendix [E](ch-regression.html#ch:regression) for an introduction to logistic regression.)
We’ll add this to the list of models that we built previously in this chapter.
```
mod_null <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(income ~ 1, data = train)
mod_log_all <- logistic_reg(mode = "classification") %>%
set_engine("glm") %>%
fit(form, data = train)
mods <- tibble(
type = c(
"null", "log_all", "tree", "forest",
"knn", "neural_net", "naive_bayes"
),
mod = list(
mod_null, mod_log_all, mod_tree, mod_forest,
mod_knn, mod_nn, mod_nb
)
)
```
While each of the models we have fit has a different class in **R** (see [B\.3\.6](ch-R.html#appR:attr)), each of those classes has a `predict()` method that will generate predictions.
```
map(mods$mod, class)
```
```
[[1]]
[1] "_glm" "model_fit"
[[2]]
[1] "_glm" "model_fit"
[[3]]
[1] "_rpart" "model_fit"
[[4]]
[1] "_randomForest" "model_fit"
[[5]]
[1] "_train.kknn" "model_fit"
[[6]]
[1] "_nnet.formula" "model_fit"
[[7]]
[1] "_NaiveBayes" "model_fit"
```
Thus, we can iterate through the list of models and apply the appropriate `predict()` method to each object.
```
mods <- mods %>%
mutate(
y_train = list(pull(train, income)),
y_test = list(pull(test, income)),
y_hat_train = map(
mod,
~pull(predict(.x, new_data = train, type = "class"), .pred_class)
),
y_hat_test = map(
mod,
~pull(predict(.x, new_data = test, type = "class"), .pred_class)
)
)
mods
```
```
# A tibble: 7 × 6
type mod y_train y_test y_hat_train y_hat_test
<chr> <list> <list> <list> <list> <list>
1 null <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
2 log_all <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
3 tree <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
4 forest <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
5 knn <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
6 neural_net <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
7 naive_bayes <fit[+]> <fct [26,048… <fct [6,513… <fct [26,048… <fct [6,513…
```
We can also add our majority rule ensemble classifier.
First, we write a function that will compute the majority vote when given a list of predictions.
```
predict_ensemble <- function(x) {
majority <- ceiling(length(x) / 2)
x %>%
data.frame() %>%
rowwise() %>%
mutate(
      rich_votes = sum(c_across(everything()) == ">50K"),
      .pred_class = factor(ifelse(rich_votes >= majority, ">50K", "<=50K"))
) %>%
pull(.pred_class) %>%
fct_relevel("<=50K")
}
```
Next, we use `bind_rows()` to add an additional row to our models data frame with the relevant information for the ensemble classifier.
```
ensemble <- tibble(
type = "ensemble",
mod = NA,
y_train = list(predict_ensemble(pull(mods, y_train))),
y_test = list(predict_ensemble(pull(mods, y_test))),
y_hat_train = list(predict_ensemble(pull(mods, y_hat_train))),
y_hat_test = list(predict_ensemble(pull(mods, y_hat_test))),
)
mods <- mods %>%
bind_rows(ensemble)
```
Now that we have the predictions for each model, we just need to compare them to the truth (`y`), and tally the results. We can do this using the `map2_dbl()` function from the **purrr** package.
```
mods <- mods %>%
mutate(
accuracy_train = map2_dbl(y_train, y_hat_train, accuracy_vec),
accuracy_test = map2_dbl(y_test, y_hat_test, accuracy_vec),
sens_test = map2_dbl(
y_test,
y_hat_test,
sens_vec,
event_level = "second"
),
spec_test = map2_dbl(y_test,
y_hat_test,
spec_vec,
event_level = "second"
)
)
```
```
mods %>%
select(-mod, -matches("^y")) %>%
arrange(desc(accuracy_test))
```
```
# A tibble: 8 × 5
type accuracy_train accuracy_test sens_test spec_test
<chr> <dbl> <dbl> <dbl> <dbl>
1 forest 0.927 0.866 0.628 0.941
2 ensemble 0.875 0.855 0.509 0.963
3 log_all 0.852 0.849 0.598 0.928
4 neural_net 0.844 0.843 0.640 0.906
5 tree 0.845 0.842 0.514 0.945
6 naive_bayes 0.824 0.824 0.328 0.980
7 knn 0.847 0.788 0.526 0.869
8 null 0.759 0.761 0 1
```
While the random forest performed notably better than the other models on the training set, its accuracy dropped the most on the testing set.
We note that even though the \\(k\\)\-NN model slightly outperformed the decision tree on the training set, the decision tree performed better on the testing set.
The ensemble model and the logistic regression model performed quite well.
In this case, however, the accuracy rates of all models were in the same ballpark on the testing set.
In Figure [11\.8](ch-learningI.html#fig:roc-compare), we compare the ROC curves for all census models on the testing data set.
```
mods <- mods %>%
filter(type != "ensemble") %>%
mutate(
y_hat_prob_test = map(
mod,
~pull(predict(.x, new_data = test, type = "prob"), `.pred_>50K`)
),
type = fct_reorder(type, sens_test, .desc = TRUE)
)
```
```
mods %>%
select(type, y_test, y_hat_prob_test) %>%
unnest(cols = c(y_test, y_hat_prob_test)) %>%
group_by(type) %>%
roc_curve(truth = y_test, y_hat_prob_test, event_level = "second") %>%
autoplot() +
geom_point(
data = mods,
aes(x = 1 - spec_test, y = sens_test, color = type),
size = 3
)
```
Figure 11\.8: Comparison of ROC curves across five models on the Census testing data. The null model has a true positive rate of zero and lies along the diagonal. The naïve Bayes model has a lower true positive rate than the other models. The random forest may be the best overall performer, as its curve lies furthest from the diagonal.
11\.4 Extended example: Who has diabetes this time?
---------------------------------------------------
Recall the example about diabetes in Section [10\.4](ch-modeling.html#sec:diabetes).
```
library(NHANES)
people <- NHANES %>%
select(Age, Gender, Diabetes, BMI, HHIncome, PhysActive) %>%
drop_na()
glimpse(people)
```
```
Rows: 7,555
Columns: 6
$ Age <int> 34, 34, 34, 49, 45, 45, 45, 66, 58, 54, 58, 50, 33, 60,…
$ Gender <fct> male, male, male, female, female, female, female, male,…
$ Diabetes <fct> No, No, No, No, No, No, No, No, No, No, No, No, No, No,…
$ BMI <dbl> 32.22, 32.22, 32.22, 30.57, 27.24, 27.24, 27.24, 23.67,…
$ HHIncome <fct> 25000-34999, 25000-34999, 25000-34999, 35000-44999, 750…
$ PhysActive <fct> No, No, No, No, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes,…
```
```
people %>%
group_by(Diabetes) %>%
count() %>%
mutate(pct = n / nrow(people))
```
```
# A tibble: 2 × 3
# Groups: Diabetes [2]
Diabetes n pct
<fct> <int> <dbl>
1 No 6871 0.909
2 Yes 684 0.0905
```
We illustrate the use of a decision tree using all of the variables except for household income in Figure [11\.9](ch-learningI.html#fig:diabetes-rpart). From the original data shown in Figure [11\.10](ch-learningI.html#fig:diabetes), it appears that older people, and those with higher BMIs, are more likely to have diabetes.
```
mod_diabetes <- decision_tree(mode = "classification") %>%
set_engine(
"rpart",
control = rpart.control(cp = 0.005, minbucket = 30)
) %>%
fit(Diabetes ~ Age + BMI + Gender + PhysActive, data = people)
mod_diabetes
```
```
parsnip model object
Fit time: 80ms
n= 7555
node), split, n, loss, yval, (yprob)
* denotes terminal node
1) root 7555 684 No (0.909464 0.090536)
2) Age< 52.5 5092 188 No (0.963079 0.036921) *
3) Age>=52.5 2463 496 No (0.798620 0.201380)
6) BMI< 39.985 2301 416 No (0.819209 0.180791) *
7) BMI>=39.985 162 80 No (0.506173 0.493827)
14) Age>=67.5 50 18 No (0.640000 0.360000) *
15) Age< 67.5 112 50 Yes (0.446429 0.553571)
30) Age< 60.5 71 30 No (0.577465 0.422535) *
31) Age>=60.5 41 9 Yes (0.219512 0.780488) *
```
```
plot(as.party(mod_diabetes$fit))
```
Figure 11\.9: Illustration of decision tree for diabetes.
If you are 52 or younger, then you very likely do not have diabetes. However, if you are 53 or older, your risk is higher. If your BMI is above 40—indicating obesity—then your risk increases again. Strangely—and this may be evidence of overfitting—your risk is highest if you are between 61 and 67 years old. This partition of the data is overlaid on Figure [11\.10](ch-learningI.html#fig:diabetes).
```
segments <- tribble(
~Age, ~xend, ~BMI, ~yend,
52.5, 100, 39.985, 39.985,
67.5, 67.5, 39.985, Inf,
60.5, 60.5, 39.985, Inf
)
ggplot(data = people, aes(x = Age, y = BMI)) +
geom_count(aes(color = Diabetes), alpha = 0.5) +
geom_vline(xintercept = 52.5) +
geom_segment(
data = segments,
aes(xend = xend, yend = yend)
) +
scale_fill_gradient(low = "white", high = "red") +
scale_color_manual(values = c("gold", "black")) +
annotate(
"rect", fill = "blue", alpha = 0.1,
xmin = 60.5, xmax = 67.5, ymin = 39.985, ymax = Inf
)
```
Figure 11\.10: Scatterplot of age against BMI for individuals in the NHANES data set. The black dots represent a collection of people with diabetes, while the gold dots represent those without diabetes.
Figure [11\.10](ch-learningI.html#fig:diabetes) is a nice way to visualize a complex model. We have plotted our data in two quantitative dimensions (`Age` and `BMI`) while using color to represent our binary response variable (`Diabetes`). The decision tree simply partitions this two\-dimensional space into axis\-parallel rectangles. The model makes the same prediction for all observations within each rectangle. It is not hard to imagine—although it is hard to draw—how this recursive partitioning will scale to higher dimensions.
Note, however, that Figure [11\.10](ch-learningI.html#fig:diabetes) provides a clear illustration of the strengths and weaknesses of models based on recursive partitioning. These types of models can *only* produce axis\-parallel rectangles in which all points in each rectangle receive the same prediction. This makes these models relatively easy to understand and apply, but it is not hard to imagine a situation in which they might perform miserably (e.g., what if the relationship was non\-linear?). Here again, this underscores the importance of visualizing your model *in the data space* (Hadley Wickham, Cook, and Hofmann 2015\) as demonstrated in Figure [11\.10](ch-learningI.html#fig:diabetes).
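To make this weakness concrete, consider a small simulation (purely illustrative, and not part of the census example): when the true class boundary is a circle, a tree can only approximate it with a staircase of axis-parallel splits.

```
# simulated data with a circular class boundary
set.seed(1)
sim <- tibble(
  x1 = runif(1000, -1, 1),
  x2 = runif(1000, -1, 1),
  class = factor(ifelse(x1^2 + x2^2 < 0.5, "inside", "outside"))
)
mod_circle <- decision_tree(mode = "classification") %>%
  set_engine("rpart") %>%
  fit(class ~ x1 + x2, data = sim)
# every split is of the form x1 < c or x2 < c, so plotting the predictions over
# a grid (as in Figure 11.11) would reveal a rectangular approximation to the circle
mod_circle
```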
### 11\.4\.1 Comparing all models
We close the loop by extending this model visualization exercise to all of our models.
Once again, we tile the \\((Age, BMI)\\)\-plane with a fine grid of 10,000 points.
```
library(modelr)
fake_grid <- data_grid(
people,
Age = seq_range(Age, 100),
BMI = seq_range(BMI, 100)
)
```
Next, we evaluate each of our six models on each grid point, taking care to retrieve not the classification itself, but the probability of having diabetes.
```
form <- as.formula("Diabetes ~ Age + BMI")
dmod_null <- logistic_reg(mode = "classification") %>%
set_engine("glm")
dmod_tree <- decision_tree(mode = "classification") %>%
set_engine("rpart", control = rpart.control(cp = 0.005, minbucket = 30))
dmod_forest <- rand_forest(
mode = "classification",
trees = 201,
mtry = 2
) %>%
set_engine("randomForest")
dmod_knn <- nearest_neighbor(mode = "classification", neighbors = 5) %>%
set_engine("kknn", scale = TRUE)
dmod_nnet <- mlp(mode = "classification", hidden_units = 6) %>%
set_engine("nnet")
dmod_nb <- naive_Bayes() %>%
set_engine("klaR")
bmi_mods <- tibble(
type = c(
"Logistic Regression", "Decision Tree", "Random Forest",
"k-Nearest-Neighbor", "Neural Network", "Naive Bayes"
),
spec = list(
dmod_null, dmod_tree, dmod_forest, dmod_knn, dmod_nnet, dmod_nb
),
mod = map(spec, fit, form, data = people),
y_hat = map(mod, predict, new_data = fake_grid, type = "prob")
)
bmi_mods <- bmi_mods %>%
mutate(
X = list(fake_grid),
yX = map2(y_hat, X, bind_cols)
)
```
```
res <- bmi_mods %>%
select(type, yX) %>%
unnest(cols = yX)
res
```
```
# A tibble: 60,000 × 5
type .pred_No .pred_Yes Age BMI
<chr> <dbl> <dbl> <dbl> <dbl>
1 Logistic Regression 0.998 0.00234 12 13.3
2 Logistic Regression 0.998 0.00249 12 14.0
3 Logistic Regression 0.997 0.00265 12 14.7
4 Logistic Regression 0.997 0.00282 12 15.4
5 Logistic Regression 0.997 0.00300 12 16.0
6 Logistic Regression 0.997 0.00319 12 16.7
7 Logistic Regression 0.997 0.00340 12 17.4
8 Logistic Regression 0.996 0.00361 12 18.1
9 Logistic Regression 0.996 0.00384 12 18.8
10 Logistic Regression 0.996 0.00409 12 19.5
# … with 59,990 more rows
```
Figure [11\.11](ch-learningI.html#fig:mod-compare) illustrates each model in the data space. The differences between the models are striking. The rigidity of the decision tree is apparent, especially relative to the flexibility of the \\(k\\)\-NN model.
The \\(k\\)\-NN model and the random forest have similar flexibility, but regions in the former are based on polygons, while regions in the latter are based on rectangles.
Making \\(k\\) larger would result in smoother \\(k\\)\-NN predictions, while making \\(k\\) smaller would make the predictions bolder.
The logistic regression model makes predictions with a smooth grade, while the naïve Bayes model produces a non\-linear horizon. The neural network has made relatively uniform predictions in this case.
```
ggplot(data = res, aes(x = Age, y = BMI)) +
geom_tile(aes(fill = .pred_Yes), color = NA) +
geom_count(
data = people,
aes(color = Diabetes), alpha = 0.4
) +
scale_fill_gradient("Prob of\nDiabetes", low = "white", high = "red") +
scale_color_manual(values = c("gold", "black")) +
scale_size(range = c(0, 2)) +
scale_x_continuous(expand = c(0.02,0)) +
scale_y_continuous(expand = c(0.02,0)) +
facet_wrap(~type, ncol = 2)
```
Figure 11\.11: Comparison of predictive models in the data space. Note the rigidity of the decision tree, the flexibility of \\(k\\)\-NN and the random forest, and the bold predictions of \\(k\\)\-NN.
11\.5 Regularization
--------------------
Regularization is a technique where constraints are added to a regression
model to prevent overfitting.
Two techniques for [*regularization*](https://en.wikipedia.org/w/index.php?search=regularization)
include [*ridge regression*](https://en.wikipedia.org/w/index.php?search=ridge%20regression) and the [*LASSO*](https://en.wikipedia.org/w/index.php?search=LASSO) (least absolute
shrinkage and selection operator).
Instead of fitting a model that
minimizes \\(\\sum\_{i\=1}^n (y\_i \- \\hat{y}\_i)^2\\), where \\(\\hat{y}\_i \= \\mathbf{x}\_i^\\top \\beta\\),
ridge regression adds the constraint that \\(\\sum\_{j\=1}^p \\beta\_j^2 \\leq c\_1\\),
while the LASSO imposes the constraint that \\(\\sum\_{j\=1}^p \|\\beta\_j\| \\leq c\_2\\),
for some constants \\(c\_1\\) and \\(c\_2\\).
These methods are considered part of statistical or machine learning
because they automate part of the model selection process: ridge regression shrinks all of the coefficients toward zero, while the LASSO can shrink some coefficients exactly to zero, effectively selecting a subset of the predictors.
Such [*shrinkage*](https://en.wikipedia.org/w/index.php?search=shrinkage) may induce bias but decrease variability.
These regularization methods are particularly helpful when the set of predictors is large.
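In the **tidymodels** interface used below, both methods are specified through `linear_reg()`: the `mixture` argument selects the type of penalty (0 for ridge regression, 1 for the LASSO, intermediate values for an elastic net), while `penalty` controls the strength of the constraint. A minimal sketch:

```
# penalty controls the amount of shrinkage; mixture selects the type of penalty
ridge_spec <- linear_reg(penalty = 0.01, mixture = 0) %>%
  set_engine("glmnet")
lasso_spec <- linear_reg(penalty = 0.01, mixture = 1) %>%
  set_engine("glmnet")
```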
To help illustrate this process we consider a model for the flight delays example introduced in Chapter [9](ch-foundations.html#ch:foundations).
Here we are interested in arrival delays for flights from the two New York City airports that service California (EWR and JFK) to four California airports.
```
library(nycflights13)
California <- flights %>%
filter(
dest %in% c("LAX", "SFO", "OAK", "SJC"),
!is.na(arr_delay)
) %>%
mutate(
day = as.Date(time_hour),
dow = as.character(lubridate::wday(day, label = TRUE)),
month = as.factor(month),
hour = as.factor(hour)
)
dim(California)
```
```
[1] 29836 20
```
We begin by splitting the data into a training set (70%) and testing set (30%).
```
library(broom)
set.seed(386)
California_split <- initial_split(California, prop = 0.7)
California_train <- training(California_split)
California_test <- testing(California_split)
```
Now we can build a model that includes the variables we want to use to explain arrival delay: hour of day, originating airport, arrival airport, carrier, month of the year, and day of week.
```
flight_model <- formula(
"arr_delay ~ origin + dest + hour + carrier + month + dow")
mod_reg <- linear_reg() %>%
set_engine("lm") %>%
fit(flight_model, data = California_train)
tidy(mod_reg) %>%
head(4)
```
```
# A tibble: 4 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -16.0 6.12 -2.61 0.00905
2 originJFK 2.88 0.783 3.67 0.000239
3 destOAK -4.61 3.10 -1.49 0.136
4 destSFO 1.89 0.620 3.05 0.00227
```
Our regression coefficient for `originJFK` indicates that, controlling for other factors, we would anticipate an additional delay of about 2\.9 minutes when flying from JFK compared to EWR (Newark), the reference airport.
```
California_test %>%
select(arr_delay) %>%
bind_cols(predict(mod_reg, new_data = California_test)) %>%
metrics(truth = arr_delay, estimate = .pred)
```
```
# A tibble: 3 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 rmse standard 42.6
2 rsq standard 0.0870
3 mae standard 26.1
```
Next we fit a LASSO model to the same data.
```
mod_lasso <- linear_reg(penalty = 0.01, mixture = 1) %>%
set_engine("glmnet") %>%
fit(flight_model, data = California_train)
tidy(mod_lasso) %>%
head(4)
```
```
# A tibble: 4 × 3
term estimate penalty
<chr> <dbl> <dbl>
1 (Intercept) -11.4 0.01
2 originJFK 2.78 0.01
3 destOAK -4.40 0.01
4 destSFO 1.88 0.01
```
We see that the coefficients for the LASSO tend to be attenuated slightly towards 0 (e.g., `originJFK` has shifted from 2\.88 to 2\.78).
```
California_test %>%
select(arr_delay) %>%
bind_cols(predict(mod_lasso, new_data = California_test)) %>%
metrics(truth = arr_delay, estimate = .pred)
```
```
# A tibble: 3 × 3
.metric .estimator .estimate
<chr> <chr> <dbl>
1 rmse standard 42.6
2 rsq standard 0.0871
3 mae standard 26.1
```
In this example, the LASSO hasn’t improved the performance of our model on the test data.
In situations where there are many more predictors and the model may be overfit, it will tend to do better.
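One way to explore this, sketched below with an arbitrary handful of penalty values, is to refit the LASSO over a grid of penalties and compare the resulting test-set RMSE. In practice this choice would be tuned more carefully, for example with cross-validation.

```
# a rough sketch: test-set RMSE for several arbitrary penalty values
penalties <- c(0.001, 0.01, 0.1, 1, 10)
lasso_rmse <- map_dbl(penalties, function(p) {
  mod <- linear_reg(penalty = p, mixture = 1) %>%
    set_engine("glmnet") %>%
    fit(flight_model, data = California_train)
  California_test %>%
    select(arr_delay) %>%
    bind_cols(predict(mod, new_data = California_test)) %>%
    rmse(truth = arr_delay, estimate = .pred) %>%
    pull(.estimate)
})
tibble(penalty = penalties, rmse = lasso_rmse)
```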
11\.6 Further resources
-----------------------
G. James et al. (2013\) provides an accessible introduction to these topics (see [http://www\-bcf.usc.edu/\~gareth/ISL](http://www-bcf.usc.edu/~gareth/ISL)).
A graduate\-level version of Hastie, Tibshirani, and Friedman (2009\) is freely downloadable at [http://www\-stat.stanford.edu/\~tibs/ElemStatLearn](http://www-stat.stanford.edu/~tibs/ElemStatLearn).
Another helpful source is Tan, Steinbach, and Kumar (2006\), which has more of a computer science flavor.
Breiman (2001\) is a classic paper that describes two cultures in statistics: prediction and modeling.
Bradley Efron (2020\) offers a more recent perspective.
The `ctree()` function from the **partykit** package builds a recursive partitioning model using conditional inference trees.
The functionality is similar to `rpart()` but uses different criteria to determine the splits. The **partykit** package also includes a `cforest()` function.
The **caret** package provides a number of useful functions for training and plotting classification and regression models.
The **glmnet** and **lars** packages include support for regularization methods.
The **RWeka** package provides an **R** interface to the comprehensive [Weka](http://www.cs.waikato.ac.nz/ml/weka/) machine learning library, which is written in Java.
11\.7 Exercises
---------------
**Problem 1 (Easy)**: Use the `HELPrct` data from the `mosaicData` to fit a tree model to the following predictors: `age`, `sex`, `cesd`, and `substance`.
1. Plot the resulting tree and interpret the results.
2. What is the accuracy of your decision tree?
**Problem 2 (Medium)**: Fit a series of supervised learning models to predict arrival delays for flights from
New York to `SFO` using the `nycflights13` package. How do the conclusions change from
the multiple regression model presented in the Statistical Foundations chapter?
**Problem 3 (Medium)**: Use the College Scorecard Data from the `CollegeScorecard` package to model student debt as a function of institutional characteristics using the techniques described in this chapter. Compare and contrast results from at least three methods.
```
# remotes::install_github("Amherst-Statistics/CollegeScorecard")
library(CollegeScorecard)
```
**Problem 4 (Medium)**: The `nasaweather` package contains data about tropical `storms` from 1995–2005\. Consider the scatterplot between the `wind` speed and `pressure` of these `storms` shown below.
```
library(mdsr)
library(nasaweather)
ggplot(data = storms, aes(x = pressure, y = wind, color = type)) +
geom_point(alpha = 0.5)
```
The `type` of storm is present in the data, and four types are given: extratropical, hurricane, tropical depression, and tropical storm. There are [complicated and not terribly precise definitions](https://en.wikipedia.org/wiki/Tropical_cyclone#Classifications.2C_terminology.2C_and_naming) for storm type. Build a classifier for the `type` of each storm as a function of its `wind` speed and `pressure`.
Why would a decision tree make a particularly good classifier for these data?
Visualize your classifier in the data space.
**Problem 5 (Medium)**: Pre\-natal care has been shown to be associated with better health of babies and mothers. Use the `NHANES` data set in the `NHANES` package to develop a predictive model for the `PregnantNow` variable. What did you learn about who is pregnant?
**Problem 6 (Hard)**: The ability to get a good night’s sleep is correlated with many positive health outcomes. The `NHANES` data set contains a binary variable `SleepTrouble` that indicates whether each person has trouble sleeping.
1. For each of the following models:
* Build a classifier for SleepTrouble
* Report its effectiveness on the NHANES training data
* Make an appropriate visualization of the model
* Interpret the results. What have you learned about people’s sleeping habits?
You may use whatever variable you like, except for `SleepHrsNight`.
* Null model
* Logistic regression
* Decision tree
* Random forest
* Neural network
* Naive Bayes
* \\(k\\)\-NN
2. Repeat the previous exercise, but now use the quantitative response variable `SleepHrsNight`. Build and interpret the following models:
* Null model
* Multiple regression
* Regression tree
* Random forest
* Ridge regression
* LASSO
3. Repeat either of the previous exercises, but this time first separate the `NHANES` data set uniformly at random into 75% training and 25% testing sets. Compare the effectiveness of each model on training vs. testing data.
4. Repeat the first exercise in part (a), but for the variable `PregnantNow`. What did you learn about who is pregnant?
11\.8 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-learningI.html\#learningI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-learningI.html#learningI-online-exercises)
Chapter 12 Unsupervised learning
================================
In the previous chapter, we explored models for learning about a response variable \\(y\\) from a set of explanatory variables \\(\\mathbf{X}\\). This process is called [*supervised learning*](https://en.wikipedia.org/w/index.php?search=supervised%20learning) because the response variable provides not just a clear goal for the modeling (to improve predictions about future \\(y\\)’s), but also a guide (sometimes called the “ground truth”). In this chapter, we explore techniques in [*unsupervised learning*](https://en.wikipedia.org/w/index.php?search=unsupervised%20learning), where there is no response variable \\(y\\). Here, we simply have a set of observations \\(\\mathbf{X}\\), and we want to understand the relationships among them.
12\.1 Clustering
----------------
Figure [12\.1](ch-learningII.html#fig:evolution-mammals) shows an evolutionary tree of mammals.
We humans ([hominidae](http://en.wikipedia.org/wiki/Hominidae)) are on the far left of the tree.
The numbers at the branch points are estimates of how long ago—in millions of years—the branches separated.
According to the diagram, rodents and primates diverged about 90 million years ago.
Figure 12\.1: An evolutionary tree for mammals. Reprinted with permission under [Creative Commons Attribution 2\.0 Generic](https://creativecommons.org/licenses/by/2.0/). No changes were made to this image. Source: Graphodatsky, Trifonov, and Stanyon (2011\)
How do evolutionary biologists construct a tree like this? They study various traits of different kinds of mammals. Two mammals that have similar traits are deemed closely related. Animals with dissimilar traits are distantly related. By combining all of this information about the proximity of species, biologists can propose these kinds of evolutionary trees.
A tree—sometimes called a [*dendrogram*](https://en.wikipedia.org/w/index.php?search=dendrogram)—is an attractive organizing structure for relationships. Evolutionary biologists imagine that at each branch point there was an actual animal whose descendants split into groups that developed in different directions. In evolutionary biology, the inferences about branches come from comparing existing creatures to one another (as well as creatures from the fossil record). Creatures with similar traits are in nearby branches while creatures with different traits are in faraway branches. It takes considerable expertise in anatomy and morphology to know which similarities and differences are important. Note, however, that there is no outcome variable—just a construction of what is closely related or distantly related.
Trees can describe degrees of similarity between different things, regardless of how those relationships came to be. If you have a set of objects or cases, and you can measure how similar any two of the objects are, you can construct a tree. The tree may or may not reflect some deeper relationship among the objects, but it often provides a simple way to visualize relationships.
### 12\.1\.1 Hierarchical clustering
When the description of an object consists of a set of numerical variables (none of which is a [*response variable*](https://en.wikipedia.org/w/index.php?search=response%20variable)), there are two main steps in constructing a tree to describe the relationship among the cases in the data:
1. Represent each case as a point in a Cartesian space.
2. Make branching decisions based on how close together points or clouds of points are.
To illustrate, consider the unsupervised learning process of identifying different types of cars. The [*United States Department of Energy*](https://en.wikipedia.org/w/index.php?search=United%20States%20Department%20of%20Energy) maintains [automobile characteristics for thousands of cars](https://www.fueleconomy.gov/feg/download.shtml): miles per gallon, engine size, number of cylinders, number of gears, etc. Please see [their guide](https://www.fueleconomy.gov/feg/pdfs/guides/FEG2016.pdf) for more information.[20](#fn20) Here, we download a ZIP file from their website that contains fuel economy rating for the 2016 model year.
```
src <- "https://www.fueleconomy.gov/feg/epadata/16data.zip"
lcl <- usethis::use_zip(src)
```
Next, we use the **readxl** package to read this file into **R**, clean up some of the resulting variable names, select a small subset of the variables, and filter for distinct models of [*Toyota*](https://en.wikipedia.org/w/index.php?search=Toyota) vehicles. The resulting data set contains information about 75 different models that Toyota produces.
```
library(tidyverse)
library(mdsr)
library(readxl)
filename <- fs::dir_ls("data", regexp = "public\\.xlsx") %>%
head(1)
cars <- read_excel(filename) %>%
janitor::clean_names() %>%
select(
make = mfr_name,
model = carline,
displacement = eng_displ,
number_cyl,
number_gears,
city_mpg = city_fe_guide_conventional_fuel,
hwy_mpg = hwy_fe_guide_conventional_fuel
) %>%
distinct(model, .keep_all = TRUE) %>%
filter(make == "Toyota")
glimpse(cars)
```
```
Rows: 75
Columns: 7
$ make <chr> "Toyota", "Toyota", "Toyota", "Toyota", "Toyota", "To…
$ model <chr> "FR-S", "RC 200t", "RC 300 AWD", "RC 350", "RC 350 AW…
$ displacement <dbl> 2.0, 2.0, 3.5, 3.5, 3.5, 5.0, 1.5, 1.8, 5.0, 2.0, 3.5…
$ number_cyl <dbl> 4, 4, 6, 6, 6, 8, 4, 4, 8, 4, 6, 6, 6, 4, 4, 4, 4, 6,…
$ number_gears <dbl> 6, 8, 6, 8, 6, 8, 6, 1, 8, 8, 6, 8, 6, 6, 1, 4, 6, 6,…
$ city_mpg <dbl> 25, 22, 19, 19, 19, 16, 33, 43, 16, 22, 19, 19, 19, 2…
$ hwy_mpg <dbl> 34, 32, 26, 28, 26, 25, 42, 40, 24, 33, 26, 28, 26, 3…
```
As a large automaker, Toyota has a diverse lineup of cars, trucks, SUVs, and hybrid vehicles. Can we use unsupervised learning to categorize these vehicles in a sensible way with only the data we have been given?
For an individual quantitative variable, it is easy to measure how far apart any two cars are: Take the difference between the numerical values. The different variables are, however, on different scales and in different units. For example, `gears` ranges only from 1 to 8, while `city_mpg` goes from 13 to 58\. This means that some decision needs to be made about rescaling the variables so that the differences along each variable reasonably reflect how different the respective cars are. There is more than one way to do this, and in fact, there is no universally “best” solution—the best solution will always depend on the data and your domain expertise. The `dist()` function takes a simple and pragmatic point of view: Each variable is equally important.[21](#fn21)
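One such rescaling, sketched below as an alternative to the raw units used in the text, standardizes each quantitative variable before computing the distances so that no single variable dominates.

```
# an alternative: standardize each quantitative variable before computing distances
car_diffs_std <- cars %>%
  column_to_rownames(var = "model") %>%
  select(where(is.numeric)) %>%
  scale() %>%
  dist()
```

A dendrogram built from these standardized distances may group some cars differently than the one shown below.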
The output of `dist()` gives the [*distance*](https://en.wikipedia.org/w/index.php?search=distance) from each individual car to every other car.
```
car_diffs <- cars %>%
column_to_rownames(var = "model") %>%
dist()
str(car_diffs)
```
```
'dist' num [1:2775] 4.52 11.29 9.93 11.29 15.14 ...
- attr(*, "Size")= int 75
- attr(*, "Labels")= chr [1:75] "FR-S" "RC 200t" "RC 300 AWD" "RC 350" ...
- attr(*, "Diag")= logi FALSE
- attr(*, "Upper")= logi FALSE
- attr(*, "method")= chr "euclidean"
- attr(*, "call")= language dist(x = .)
```
```
car_mat <- car_diffs %>%
as.matrix()
car_mat[1:6, 1:6] %>%
round(digits = 2)
```
```
FR-S RC 200t RC 300 AWD RC 350 RC 350 AWD RC F
FR-S 0.00 4.52 11.29 9.93 11.29 15.14
RC 200t 4.52 0.00 8.14 6.12 8.14 11.49
RC 300 AWD 11.29 8.14 0.00 3.10 0.00 4.93
RC 350 9.93 6.12 3.10 0.00 3.10 5.39
RC 350 AWD 11.29 8.14 0.00 3.10 0.00 4.93
RC F 15.14 11.49 4.93 5.39 4.93 0.00
```
This point\-to\-point distance matrix is analogous to the tables that used to be printed on road maps giving the distance from one city to another, like Figure [12\.2](ch-learningII.html#fig:city-distances), which states that it is 1,095 miles from Atlanta to Boston, or 715 miles from Atlanta to Chicago.
Notice that the distances are symmetric: It is the same distance from Boston to Los Angeles as from Los Angeles to Boston (3,036 miles, according to the table).
Figure 12\.2: Distances between some U.S. cities.
Knowing the distances between the cities is not the same thing as knowing their locations. But the set of mutual distances is enough information to reconstruct the relative positions of the cities.
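The same idea can be made concrete with the car-to-car distances computed above: classical multidimensional scaling, via `cmdscale()`, reconstructs a set of coordinates whose mutual distances approximate the given ones. This is only a sketch; the recovered axes have no intrinsic meaning.

```
# reconstruct two coordinates per car from the distance matrix alone
car_xy <- car_diffs %>%
  cmdscale(k = 2) %>%
  as_tibble(rownames = "model")   # columns V1 and V2 hold the recovered positions
head(car_xy)
```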
Cities, of course, lie on the surface of the earth. That need not be true for the “distance” between automobile types. Even so, the set of mutual distances provides information equivalent to knowing the relative positions of these cars in a \\(p\\)\-dimensional space. This can be used to construct branches between nearby items, then to connect those branches, and so on until an entire tree has been constructed. The process is called [*hierarchical clustering*](https://en.wikipedia.org/w/index.php?search=hierarchical%20clustering). Figure [12\.3](ch-learningII.html#fig:cars-tree) shows a tree constructed by hierarchical clustering that relates Toyota car models to one another.
```
library(ape)
car_diffs %>%
hclust() %>%
as.phylo() %>%
plot(cex = 0.8, label.offset = 1)
```
Figure 12\.3: A dendrogram constructed by hierarchical clustering from car\-to\-car distances implied by the Toyota fuel economy data.
There are many ways to graph such trees, but here we have borrowed from biology by graphing these cars as a [phylogenetic tree](https://en.wikipedia.org/wiki/Phylogenetic_tree), similar to Figure [12\.1](ch-learningII.html#fig:evolution-mammals).
Careful inspection of Figure [12\.3](ch-learningII.html#fig:cars-tree) reveals some interesting insights. The first branch in the tree is evidently between hybrid vehicles and all others. This makes sense, since hybrid vehicles use a fundamentally different type of power to achieve considerably better fuel economy. Moreover, the first branch among conventional cars divides large trucks and SUVs (e.g., Sienna, Tacoma, Sequoia, Tundra, Land Cruiser) from smaller cars and cross\-over SUVs (e.g., Camry, Corolla, Yaris, RAV4\). We are confident that the gearheads in the readership will identify even more subtle logic to this clustering. One could imagine that this type of analysis might help a car\-buyer or marketing executive quickly decipher what might otherwise be a bewildering product line.
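If a flat grouping is needed rather than a full tree, the dendrogram can be cut at a chosen depth with `cutree()`. The sketch below asks for two groups; if the first split behaves as described above, these should roughly separate the hybrids from the conventional vehicles.

```
# cut the dendrogram into a fixed number of groups
car_groups <- car_diffs %>%
  hclust() %>%
  cutree(k = 2)
table(car_groups)
```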
### 12\.1\.2 \\(k\\)\-means
Another way to group similar cases is to assign each case to one of several distinct groups, but without constructing a hierarchy. The output is not a tree but a choice of group to which each case belongs. (There can be more detail than this; for instance, a probability for each specific case that it belongs to each group.) This is like classification except that here there is no response variable. Thus, the definition of the groups must be inferred implicitly from the data.
As an example, consider the cities of the world (in `world_cities`). Cities can be different and similar in many ways: population, age structure, public transportation and roads, building space per person, etc. The choice of [*features*](https://en.wikipedia.org/w/index.php?search=features) (or variables) depends on the purpose you have for making the grouping.
Our purpose is to show you that clustering via machine learning can actually identify genuine patterns in the data. We will choose features that are utterly familiar: the latitude and longitude of each city.
You already know about the location of cities. They are on land. And you know about the organization of land on earth: most land falls in one of the large clusters called continents. But the `world_cities` data doesn’t have any notion of continents. Perhaps it is possible that this feature, which you long ago internalized, can be learned by a computer that has never even taken grade\-school geography.
For simplicity, consider the 4,000 biggest cities in the world and their longitudes and latitudes.
```
big_cities <- world_cities %>%
arrange(desc(population)) %>%
head(4000) %>%
select(longitude, latitude)
glimpse(big_cities)
```
```
Rows: 4,000
Columns: 2
$ longitude <dbl> 121.46, 28.95, -58.38, 72.88, -99.13, 116.40, 67.01, 117…
$ latitude <dbl> 31.22, 41.01, -34.61, 19.07, 19.43, 39.91, 24.86, 39.14,…
```
Note that in these data, there is no ancillary information—not even the name of the city. However, the *\\(k\\)\-means* clustering algorithm will separate these 4,000 points—each of which is located in a two\-dimensional plane—into \\(k\\) clusters based on their locations alone.
```
set.seed(15)
library(mclust)
city_clusts <- big_cities %>%
kmeans(centers = 6) %>%
fitted("classes") %>%
as.character()
big_cities <- big_cities %>%
mutate(cluster = city_clusts)
big_cities %>%
ggplot(aes(x = longitude, y = latitude)) +
geom_point(aes(color = cluster), alpha = 0.5) +
scale_color_brewer(palette = "Set2")
```
Figure 12\.4: The world’s 4,000 largest cities, clustered by the 6\-means clustering algorithm.
As shown in Figure [12\.4](ch-learningII.html#fig:cluster-cities), the clustering algorithm seems to have identified the continents. North and South America are clearly distinguished, as is most of Africa. The cities in North Africa are matched to Europe, but this is consistent with history, given the European influence in places like Morocco, Tunisia, and Egypt.
Similarly, while the cluster for Europe extends into what is called Asia, the distinction between Europe and Asia is essentially historic, not geographic.
Note that to the algorithm, there is little difference between oceans and deserts—both represent large areas where no big cities exist.
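The choice of six clusters above was made in advance. A common heuristic, sketched below and not part of the original analysis, is to compare the total within-cluster sum of squares across several values of \\(k\\) and look for an “elbow” beyond which additional clusters stop helping much.

```
# total within-cluster sum of squares for several choices of k
set.seed(15)
tibble(k = 2:10) %>%
  mutate(
    km = map(k, ~kmeans(select(big_cities, longitude, latitude), centers = .x, nstart = 20)),
    tot_withinss = map_dbl(km, "tot.withinss")
  ) %>%
  select(k, tot_withinss)
```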
12\.2 Dimension reduction
-------------------------
Often, a variable carries little information that is relevant to the task at hand. Even for variables that are informative, there can be redundancy or near duplication of variables. That is, two or more variables are giving essentially the same information—they have similar patterns across the cases.
Such irrelevant or redundant variables make it harder to learn from data. The irrelevant variables are simply noise that obscures actual patterns. Similarly, when two or more variables are redundant, the differences between them may represent random noise. Furthermore, for some machine learning algorithms, a large number of variables \\(p\\) will present computational challenges.
It is usually helpful to remove irrelevant or redundant variables so that they—and the noise they carry—don’t obscure the patterns that machine learning algorithms could identify.
For example, consider votes in a parliament or congress.
Specifically, we will explore the Scottish Parliament in 2008\.[22](#fn22)
Legislators often vote together in pre\-organized blocs, and thus the pattern of “ayes” and “nays” on particular ballots may indicate which members are affiliated (i.e., members of the same political party). To test this idea, you might try clustering the members by their voting record.
Table 12\.1: Sample voting records data from the Scottish Parliament.
| name | S1M\-1 | S1M\-4\.1 | S1M\-4\.3 | S1M\-4 |
| --- | --- | --- | --- | --- |
| Canavan, Dennis | 1 | 1 | 1 | \-1 |
| Aitken, Bill | 1 | 1 | 0 | \-1 |
| Davidson, Mr David | 1 | 1 | 0 | 0 |
| Douglas Hamilton, Lord James | 1 | 1 | 0 | 0 |
| Fergusson, Alex | 1 | 1 | 0 | 0 |
| Fraser, Murdo | 0 | 0 | 0 | 0 |
| Gallie, Phil | 1 | 1 | 0 | \-1 |
| Goldie, Annabel | 1 | 1 | 0 | 0 |
| Harding, Mr Keith | 1 | 1 | 0 | 0 |
| Johnston, Nick | 0 | 1 | 0 | \-1 |
Table [12\.1](ch-learningII.html#tab:scot-votes-small) shows a small part of the voting record. The names of the members of parliament are the cases. Each ballot—identified by a file number such as S1M\-4\.3—is a variable. A `1` means an “aye” vote, `-1` is “nay,” and `0` is an abstention. There are \\(n\=134\\) members and \\(p\=773\\) ballots—note that in this data set \\(p\\) far exceeds \\(n\\). It is impractical to show all of the more than 100,000 votes in a table, but there are only 3 possible votes, so displaying the table as an image (as in Figure [12\.5](ch-learningII.html#fig:ballot-grid)) works well.
```
Votes %>%
mutate(Vote = factor(vote, labels = c("Nay", "Abstain", "Aye"))) %>%
ggplot(aes(x = bill, y = name, fill = Vote)) +
geom_tile() +
xlab("Ballot") +
ylab("Member of Parliament") +
scale_fill_manual(values = c("darkgray", "white", "goldenrod")) +
scale_x_discrete(breaks = NULL, labels = NULL) +
scale_y_discrete(breaks = NULL, labels = NULL)
```
Figure 12\.5: Visualization of the Scottish Parliament votes.
Figure [12\.5](ch-learningII.html#fig:ballot-grid) is a \\(134 \\times 773\\) grid in which each cell is color\-coded based on one member of Parliament’s vote on one ballot. It is hard to see much of a pattern here, although you may notice the Scottish [tartan](https://en.wikipedia.org/wiki/Tartan) structure. The tartan pattern provides an indication to experts that the matrix can be approximated by a matrix of low\-rank.
### 12\.2\.1 Intuitive approaches
As a start, Figure [12\.6](ch-learningII.html#fig:two-ballots) shows the ballot values for all of the members of parliament for just two arbitrarily selected ballots. To give a better idea of the point count
at each position, the values are jittered by adding some random noise. The red dots are the actual positions. Each point is one member of parliament. Similarly aligned members are grouped together at one of the nine possibilities marked in red: (Aye, Nay), (Aye, Abstain), (Aye, Aye), and so on through to (Nay, Nay). In these two ballots, eight of the nine possibilities are populated. Does this mean that there are eight clusters of members?
```
Votes %>%
filter(bill %in% c("S1M-240.2", "S1M-639.1")) %>%
pivot_wider(names_from = bill, values_from = vote) %>%
ggplot(aes(x = `S1M-240.2`, y = `S1M-639.1`)) +
geom_point(
alpha = 0.7,
position = position_jitter(width = 0.1, height = 0.1)
) +
geom_point(alpha = 0.01, size = 10, color = "red" )
```
Figure 12\.6: Scottish Parliament votes for two ballots.
Intuition suggests that it would be better to use *all* of the ballots, rather than just two. In Figure [12\.7](ch-learningII.html#fig:many-ballots), the first 387 ballots (half) have been added together, as have the remaining ballots. Figure [12\.7](ch-learningII.html#fig:many-ballots) suggests that there might be two clusters of members who are aligned with each other. Using all of the data seems to give more information than using just two ballots.
```
Votes %>%
mutate(
set_num = as.numeric(factor(bill)),
set = ifelse(
set_num < max(set_num) / 2, "First_Half", "Second_Half"
)
) %>%
group_by(name, set) %>%
summarize(Ayes = sum(vote)) %>%
pivot_wider(names_from = set, values_from = Ayes) %>%
ggplot(aes(x = First_Half, y = Second_Half)) +
geom_point(alpha = 0.7, size = 5)
```
Figure 12\.7: Scatterplot showing the correlation between Scottish Parliament votes in two arbitrary collections of ballots.
### 12\.2\.2 Singular value decomposition
You may ask why the choice was made to add up the first half of the ballots as \\(x\\) and the remaining ballots as \\(y\\). Perhaps there is a better choice that would display the underlying patterns more clearly; perhaps the ballots can be combined in a more meaningful way.
In fact, there is a mathematical approach to finding the *best* approximation to the ballot–voter matrix using simple matrices, called [*singular value decomposition*](https://en.wikipedia.org/w/index.php?search=singular%20value%20decomposition) (SVD). (The statistical dimension reduction technique of [*principal component analysis*](https://en.wikipedia.org/w/index.php?search=principal%20component%20analysis) (PCA) can be accomplished using SVD.) The mathematics of SVD draw on a knowledge of matrix algebra, but the operation itself is accessible to anyone.
Geometrically, SVD (or PCA) amounts to a rotation of the coordinate axes so that more of the variability can be explained using just a few variables.
Figure [12\.8](ch-learningII.html#fig:ballot-PCA) shows the position of each member on the two principal components that explain the most variability.
```
Votes_wide <- Votes %>%
pivot_wider(names_from = bill, values_from = vote)
vote_svd <- Votes_wide %>%
select(-name) %>%
svd()
num_clusters <- 5 # desired number of clusters
library(broom)
vote_svd_tidy <- vote_svd %>%
tidy(matrix = "u") %>%
filter(PC < num_clusters) %>%
mutate(PC = paste0("pc_", PC)) %>%
pivot_wider(names_from = PC, values_from = value) %>%
select(-row)
clusts <- vote_svd_tidy %>%
kmeans(centers = num_clusters)
tidy(clusts)
```
```
# A tibble: 5 × 7
pc_1 pc_2 pc_3 pc_4 size withinss cluster
<dbl> <dbl> <dbl> <dbl> <int> <dbl> <fct>
1 -0.0529 0.142 0.0840 0.0260 26 0.0118 1
2 0.0851 0.0367 0.0257 -0.182 20 0.160 2
3 -0.0435 0.109 0.0630 -0.0218 10 0.0160 3
4 -0.0306 -0.116 0.183 -0.00962 20 0.0459 4
5 0.106 0.0206 0.0323 0.0456 58 0.112 5
```
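As an aside, the singular values themselves indicate how much of the structure in the vote matrix each component captures. A rough check, using the `vote_svd` object computed above, expresses the squared singular values as proportions of their total:

```
# proportion of total variation associated with the first few singular vectors
round(vote_svd$d[1:5]^2 / sum(vote_svd$d^2), 3)
```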
```
voters <- clusts %>%
augment(vote_svd_tidy)
ggplot(data = voters, aes(x = pc_1, y = pc_2)) +
geom_point(aes(x = 0, y = 0), color = "red", shape = 1, size = 7) +
geom_point(size = 5, alpha = 0.6, aes(color = .cluster)) +
xlab("Best Vector from SVD") +
ylab("Second Best Vector from SVD") +
ggtitle("Political Positions of Members of Parliament") +
scale_color_brewer(palette = "Set2")
```
Figure 12\.8: Clustering members of Scottish Parliament based on SVD along the members.
Figure [12\.8](ch-learningII.html#fig:ballot-PCA) shows, at a glance, that there are three main clusters.
The red circle marks the *average* member. The three clusters move away from average in different directions.
There are several members whose position is in\-between the average and the cluster to which they are closest.
These clusters may reveal the alignment of Scottish members of parliament according to party affiliation and voting history.
For a graphic, one is limited to using two variables for position.
Clustering, however, can be based on many more variables. Using more SVD sums may allow the three clusters to be split up further.
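How many SVD components are worth keeping? One quick diagnostic, sketched below under the assumption that the `vote_svd` object computed above is available, is to plot the singular values and look for where they drop off.

```
# Plot the singular values stored in vote_svd$d; a steep early drop
# suggests that only the leading components carry much structure.
tibble(
  component = seq_along(vote_svd$d),
  singular_value = vote_svd$d
) %>%
  ggplot(aes(x = component, y = singular_value)) +
  geom_col()
```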
The color in Figure [12\.8](ch-learningII.html#fig:ballot-PCA) above shows the result of asking for five clusters, based on the four leading SVD vectors retained in the code above.
The confusion matrix below compares the actual party of each member to the cluster memberships.
```
voters <- voters %>%
mutate(name = Votes_wide$name) %>%
left_join(Parties, by = c("name" = "name"))
mosaic::tally(party ~ .cluster, data = voters)
```
```
.cluster
party 1 2 3 4 5
Member for Falkirk West 0 1 0 0 0
Scottish Conservative and Unionist Party 0 0 0 20 0
Scottish Green Party 0 1 0 0 0
Scottish Labour 0 1 0 0 57
Scottish Liberal Democrats 0 16 0 0 1
Scottish National Party 26 0 10 0 0
Scottish Socialist Party 0 1 0 0 0
```
How well did the clustering algorithm do?
The party affiliation of each member of parliament is known, even though it wasn’t used in finding the clusters.
Cluster 1 consists of most of the members of the [*Scottish National Party*](https://en.wikipedia.org/w/index.php?search=Scottish%20National%20Party) (SNP).
Cluster 2 includes a number of individuals plus all but one of the [*Scottish Liberal Democrats*](https://en.wikipedia.org/w/index.php?search=Scottish%20Liberal%20Democrats).
Cluster 3 picks up the remaining 10 members of the SNP.
Cluster 4 includes all of the members of the [*Scottish Conservative and Unionist Party*](https://en.wikipedia.org/w/index.php?search=Scottish%20Conservative%20and%20Unionist%20Party), while Cluster 5 accounts for all but one member of the [*Scottish Labour*](https://en.wikipedia.org/w/index.php?search=Scottish%20Labour) party.
For most parties, the large majority of members were placed into a unique cluster for that party with a small smattering of other like\-voting colleagues.
In other words, the technique has identified correctly nearly all of the members of the four different parties with significant representation (i.e., Conservative and Unionist, Labour, Liberal Democrats, and National).
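One rough way to quantify that agreement is a per\-party purity measure: the share of a party's members assigned to that party's most common cluster. The sketch below computes it from the `voters` data frame built above; it is not part of the original analysis. A purity of 1 means that every member of a party landed in the same cluster.

```
# For each party, the fraction of its members in the party's most
# frequent cluster, along with the party's total membership.
voters %>%
  count(party, .cluster) %>%
  group_by(party) %>%
  summarize(members = sum(n), purity = max(n) / sum(n))
```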
```
ballots <- vote_svd %>%
tidy(matrix = "v") %>%
filter(PC < num_clusters) %>%
mutate(PC = paste0("pc_", PC)) %>%
pivot_wider(names_from = PC, values_from = value) %>%
select(-column)
clust_ballots <- kmeans(ballots, centers = num_clusters)
ballots <- clust_ballots %>%
augment(ballots) %>%
mutate(bill = names(select(Votes_wide, -name)))
```
```
ggplot(data = ballots, aes(x = pc_1, y = pc_2)) +
geom_point(aes(x = 0, y = 0), color = "red", shape = 1, size = 7) +
geom_point(size = 5, alpha = 0.6, aes(color = .cluster)) +
xlab("Best Vector from SVD") +
ylab("Second Best Vector from SVD") +
ggtitle("Influential Ballots") +
scale_color_brewer(palette = "Set2")
```
Figure 12\.9: Clustering of Scottish Parliament ballots based on SVD along the ballots.
There is more information to be extracted from the ballot data.
Just as there are clusters of political positions, there are clusters of ballots that might correspond to such factors as social effect, economic effect, etc.
Figure [12\.9](ch-learningII.html#fig:issue-clusters) shows the position of ballots, using the first two principal components.
There are obvious clusters in this figure. Still, interpretation can be tricky. Remember that, on each issue, there are both “aye” and “nay” votes. This accounts for the symmetry of the dots around the center (indicated in red). The opposing dots along each angle from the center might be interpreted in terms of *socially liberal* versus *socially conservative* and *economically liberal* versus *economically conservative*. Deciding which is which likely involves reading the bill itself, as well as a nuanced understanding of Scottish politics.
Finally, the principal components can be used to rearrange members of parliament and separately rearrange ballots while maintaining each person’s vote.
This amounts simply to re\-ordering the members in a way other than alphabetical and similarly with the ballots.
Such a transformation can bring dramatic clarity to the appearance of the data—as shown in Figure [12\.10](ch-learningII.html#fig:SVD-ballots)—where the large, nearly equally sized, and opposing voting blocs of the two major political parties (the National and Labour parties) become obvious.
Alliances among the smaller political parties muddy the waters on the lower half of Figure [12\.10](ch-learningII.html#fig:SVD-ballots).
```
Votes_svd <- Votes %>%
mutate(Vote = factor(vote, labels = c("Nay", "Abstain", "Aye"))) %>%
inner_join(ballots, by = "bill") %>%
inner_join(voters, by = "name")
ggplot(data = Votes_svd,
aes(x = reorder(bill, pc_1.x), y = reorder(name, pc_1.y), fill = Vote)) +
geom_tile() +
xlab("Ballot") +
ylab("Member of Parliament") +
scale_fill_manual(values = c("darkgray", "white", "goldenrod")) +
scale_x_discrete(breaks = NULL, labels = NULL) +
scale_y_discrete(breaks = NULL, labels = NULL)
```
Figure 12\.10: Illustration of the Scottish Parliament votes when ordered by the primary vector of the SVD.
The person represented by the top row in Figure [12\.10](ch-learningII.html#fig:SVD-ballots) is [Nicola Sturgeon](https://en.wikipedia.org/w/index.php?search=Nicola%20Sturgeon), the leader of the Scottish National Party.
Along the primary vector identified by our SVD, she is the most extreme voter.
According to Wikipedia, the National Party belongs to a “[mainstream European social democratic tradition](https://en.wikipedia.org/wiki/Scottish_National_Party#Party_ideology).”
```
Votes_svd %>%
arrange(pc_1.y) %>%
head(1)
```
```
bill name vote Vote pc_1.x pc_2.x pc_3.x pc_4.x
1 S1M-1 Sturgeon, Nicola 1 Aye -0.00391 -0.00167 0.0498 -0.0734
.cluster.x pc_1.y pc_2.y pc_3.y pc_4.y .cluster.y party
1 4 -0.059 0.153 0.0832 0.0396 1 Scottish National Party
```
Conversely, the person at the bottom of Figure [12\.10](ch-learningII.html#fig:SVD-ballots) is [Paul Martin](https://en.wikipedia.org/w/index.php?search=Paul%20Martin), a member of the Scottish Labour Party. It is easy to see in Figure [12\.10](ch-learningII.html#fig:SVD-ballots) that Martin opposed Sturgeon on most ballot votes.
```
Votes_svd %>%
arrange(pc_1.y) %>%
tail(1)
```
```
bill name vote Vote pc_1.x pc_2.x pc_3.x pc_4.x
103582 S1M-4064 Martin, Paul 1 Aye 0.0322 -0.00484 0.0653 -0.0317
.cluster.x pc_1.y pc_2.y pc_3.y pc_4.y .cluster.y party
103582 4 0.126 0.0267 0.0425 0.056 5 Scottish Labour
```
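As a quick numerical check on that visual impression, the sketch below (again not part of the original analysis) computes how often the two members recorded the same vote, using only the raw `Votes` table. A small value would be consistent with the opposition visible in the figure.

```
# Proportion of ballots on which Sturgeon and Martin cast the same vote
Votes %>%
  filter(name %in% c("Sturgeon, Nicola", "Martin, Paul")) %>%
  pivot_wider(names_from = name, values_from = vote) %>%
  summarize(agreement = mean(`Sturgeon, Nicola` == `Martin, Paul`))
```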
The beauty of Figure [12\.10](ch-learningII.html#fig:SVD-ballots) is that it brings profound order to the chaos apparent in Figure [12\.5](ch-learningII.html#fig:ballot-grid).
This was accomplished by simply ordering the rows (members of Parliament) and the columns (ballots) in a sensible way.
In this case, the ordering was determined by the primary vector identified by the SVD of the voting matrix.
This is yet another example of how machine learning techniques can identify meaningful patterns in data, but human beings must still bring domain knowledge to bear on the problem in order to turn those patterns into contextual understanding.
12\.3 Further resources
-----------------------
The machine learning and phylogenetics CRAN task views provide guidance on functionality within **R**.
Readers interested in learning more about unsupervised learning are encouraged to consult G. James et al. (2013\) or Hastie, Tibshirani, and Friedman (2009\). Kuiper and Sklar (2012\) includes an accessible treatment of [*principal component analysis*](https://en.wikipedia.org/w/index.php?search=principal%20component%20analysis).
12\.4 Exercises
---------------
**Problem 1 (Medium)**: Re\-fit the \\(k\\)\-means algorithm on the `BigCities` data with a different value of \\(k\\) (i.e., not six). Experiment with different values of \\(k\\) and report on the sensitivity of the algorithm to changes in this parameter.
**Problem 2 (Medium)**: Carry out and interpret a clustering of vehicles from another manufacturer using the
approach outlined in the first section of the chapter.
**Problem 3 (Medium)**: Perform the clustering on *pitchers* who have been elected to the Hall of Fame using the `Pitching` dataset in the `Lahman` package. Use wins (`W`), strikeouts (`SO`), and saves (`SV`) as criteria.
**Problem 4 (Medium)**: Consider the \\(k\\)\-means clustering algorithm applied to the `BigCities` data.
Would you expect to obtain different results if the location coordinates were *projected* (see the “Working with spatial data” chapter)?
**Problem 5 (Hard)**: Use the College Scorecard Data from the `CollegeScorecard` package to cluster educational institutions using the techniques described in this chapter. Be sure to include variables related to student debt, number of students, graduation rate, and selectivity.
```
# remotes::install_github("Amherst-Statistics/CollegeScorecard")
```
**Problem 6 (Hard)**: Baseball players are voted into the Hall of Fame by the members of the Baseball Writers' Association of America (BBWAA). Quantitative criteria are used by the voters, but they are also allowed wide discretion. The following code identifies the position players who have been elected to the Hall of Fame and tabulates a few basic statistics, including their number of career hits (`H`), home runs (`HR`), and stolen bases (`SB`).
1. Use the `kmeans` function to perform a cluster analysis on these players. Describe the properties that seem common to each cluster.
```
library(mdsr)
library(Lahman)
hof <- Batting %>%
group_by(playerID) %>%
inner_join(HallOfFame, by = c("playerID" = "playerID")) %>%
filter(inducted == "Y" & votedBy == "BBWAA") %>%
summarize(tH = sum(H), tHR = sum(HR), tRBI = sum(RBI), tSB = sum(SB)) %>%
filter(tH > 1000)
```
2. Building on the previous exercise, compute new statistics and run the clustering algorithm again. Can you produce clusters that you think are more pure? Justify your choices.
**Problem 7 (Hard)**: Project the `world_cities` coordinates using the Gall\-Peters projection and run the \\(k\\)\-means algorithm again. Are the resulting clusters importantly different from those identified in the chapter?
```
library(tidyverse)
library(mdsr)
big_cities <- world_cities %>%
arrange(desc(population)) %>%
head(4000) %>%
select(longitude, latitude)
```
12\.5 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-learningII.html\#learningII\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-learningII.html#learningII-online-exercises)
```
Votes_svd %>%
arrange(pc_1.y) %>%
tail(1)
```
```
bill name vote Vote pc_1.x pc_2.x pc_3.x pc_4.x
103582 S1M-4064 Martin, Paul 1 Aye 0.0322 -0.00484 0.0653 -0.0317
.cluster.x pc_1.y pc_2.y pc_3.y pc_4.y .cluster.y party
103582 4 0.126 0.0267 0.0425 0.056 5 Scottish Labour
```
The beauty of Figure [12\.10](ch-learningII.html#fig:SVD-ballots) is that it brings profound order to the chaos apparent in Figure [12\.5](ch-learningII.html#fig:ballot-grid).
This was accomplished by simply ordering the rows (members of Parliament) and the columns (ballots) in a sensible way.
In this case, the ordering was determined by the primary vector identified by the SVD of the voting matrix.
This is yet another example of how machine learning techniques can identify meaningful patterns in data, but human beings are required to bring domain knowledge to bear on the problem in order to extract meaningful contextual understanding.
12\.3 Further resources
-----------------------
The machine learning and phylogenetics CRAN task views provide guidance on functionality within **R**.
Readers interested in learning more about unsupervised learning are encouraged to consult G. James et al. (2013\) or Hastie, Tibshirani, and Friedman (2009\). Kuiper and Sklar (2012\) includes an accessible treatment of [*principal component analysis*](https://en.wikipedia.org/w/index.php?search=principal%20component%20analysis).
12\.4 Exercises
---------------
**Problem 1 (Medium)**: Re\-fit the \\(k\\)–means algorithm on the `BigCities` data with a different value of \\(k\\) (i.e., not six). Experiment with different values of \\(k\\) and report on the sensitivity of the algorithm to changes in this parameter.
**Problem 2 (Medium)**: Carry out and interpret a clustering of vehicles from another manufacturer using the
approach outlined in the first section of the chapter.
**Problem 3 (Medium)**: Perform the clustering on *pitchers* who have been elected to the Hall of Fame using the `Pitching` dataset in the `Lahman` package. Use wins (`W`), strikeouts (`SO`), and saves (`SV`) as criteria.
**Problem 4 (Medium)**: Consider the \\(k\\)\-means clustering algorithm applied to the `BigCities` data.
Would you expect to obtain different results if the location coordinates were *projected* (see the “Working with spatial data” chapter)?
**Problem 5 (Hard)**: Use the College Scorecard Data from the `CollegeScorecard` package to cluster educational institutions using the techniques described in this chapter. Be sure to include variables related to student debt, number of students, graduation rate, and selectivity.
```
# remotes::install_github("Amherst-Statistics/CollegeScorecard")
```
**Problem 6 (Hard)**: Baseball players are voted into the Hall of Fame by the members of the Baseball Writers’ Association of America. Quantitative criteria are used by the voters, but they are also allowed wide discretion. The following code identifies the position players who have been elected to the Hall of Fame and tabulates a few basic statistics, including their number of career hits (`H`), home runs (`HR`), and stolen bases (`SB`).
1. Use the `kmeans` function to perform a cluster analysis on these players. Describe the properties that seem common to each cluster.
```
library(mdsr)
library(Lahman)
hof <- Batting %>%
group_by(playerID) %>%
inner_join(HallOfFame, by = c("playerID" = "playerID")) %>%
filter(inducted == "Y" & votedBy == "BBWAA") %>%
summarize(tH = sum(H), tHR = sum(HR), tRBI = sum(RBI), tSB = sum(SB)) %>%
filter(tH > 1000)
```
2. Building on the previous exercise, compute new statistics and run the clustering algorithm again. Can you produce clusters that you think are more pure? Justify your choices.
**Problem 7 (Hard)**: Project the `world_cities` coordinates using the Gall\-Peters projection and run the \\(k\\)\-means algorithm again. Are the resulting clusters importantly different from those identified in the chapter?
```
library(tidyverse)
library(mdsr)
big_cities <- world_cities %>%
arrange(desc(population)) %>%
head(4000) %>%
select(longitude, latitude)
```
12\.5 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-learningII.html\#learningII\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-learningII.html#learningII-online-exercises)
Chapter 13 Simulation
=====================
13\.1 Reasoning in reverse
--------------------------
In Chapter [1](ch-prologue.html#ch:prologue) of this book we stated a simple truth: The purpose of data science is to turn data into usable information. Another way to think of this is that we use data to improve our understanding of the systems we live and work with: Data \\(\\rightarrow\\) Understanding.
This chapter is about computing techniques relating to the reverse way of thinking: Speculation \\(\\rightarrow\\) Data. In other words, this chapter is about “making up data.”
Many people associate “making up data” with deception. Certainly, data can be made up for exactly that purpose. Our purpose is different. We are interested in legitimate purposes for making up data, purposes that support the proper use of data science in transforming data into understanding.
How can made\-up data be legitimately useful? In order to make up data, you need to build a mechanism that contains—implicitly—an idea about how the system you are interested in works. The data you make up tell you what data generated by that system would look like. There are two main (legitimate) purposes for doing this:
* Conditional inference. If our mechanism is reflective of how the real system works, the data it generates are similar to real data. You might use these to inform tweaks to the mechanism in order to produce even more representative results. This process can help you refine your understanding in ways that are relevant to the real world.
* Winnowing out hypotheses. To “winnow” means to remove from a set the less desirable choices so that what remains is useful. Traditionally, grain was winnowed to separate the edible parts from the inedible chaff. For data science, the set is composed of hypotheses, which are ideas about how the world works. Data are generated from each hypothesis and compared to the data we collect from the real world. When the hypothesis\-generated data fails to resemble the
real\-world data, we can remove that hypothesis from the set. What remains are hypotheses that are plausible candidates for describing the real\-world mechanisms.
“Making up” data is undignified, so we will leave that term to refer to fraud and trickery. In its place we’ll use [*simulation*](https://en.wikipedia.org/w/index.php?search=simulation), which derives from “similar.” Simulations involve constructing mechanisms that are similar to how systems in the real world work—or at least to our belief and understanding of how such systems work.
13\.2 Extended example: Grouping cancers
----------------------------------------
There are many different kinds of cancer.
They are often given the name of the tissue in which they originate: lung cancer, ovarian cancer, prostate cancer, and so on.
Different kinds of cancer are treated with different chemotherapeutic drugs.
But perhaps the tissue origin of each cancer is not the best indicator of how it should be treated.
Could we find a better way?
Let’s revisit the data introduced in Section [3\.2\.4](ch-vizII.html#sec:datanetwork).
Like all cells, cancer cells have a genome containing tens of thousands of genes.
Sometimes just a few genes dictate a cell’s behavior. Other times there are networks of genes that regulate one another’s expression in ways that shape cell features, such as the over\-rapid reproduction characteristic of cancer cells.
It is now possible to examine the expression of individual genes within a cell.
So\-called [*microarrays*](https://en.wikipedia.org/w/index.php?search=microarrays) are routinely used for this purpose. Each microarray has tens to hundreds of thousands of [*probes*](https://en.wikipedia.org/w/index.php?search=probes) for gene activity.
The result of a microarray assay is a snapshot of gene activity.
By comparing snapshots of cells in different states, it’s possible to identify the genes that are expressed differently in the states.
This can provide insight into how specific genes govern various aspects of cell activity.
A data scientist, as part of a team of biomedical researchers, might take on the job of compiling data from many microarray assays to identify whether different types of cancer are related based on their gene expression.
For instance, the `NCI60` data (provided by the `etl_NCI60()` function in the **mdsr** package) contains readings from assays of \\(n\=60\\) different cell lines of cancer of different tissue types. For each cell line, the data contain readings on over \\(p\>40,000\\) different probes.
Your job might be to find relationships between different cell lines based on the patterns of probe expression.
For this purpose, you might find the techniques of statistical learning and unsupervised learning from Chapters [10](ch-modeling.html#ch:modeling)–[12](ch-learningII.html#ch:learningII) to be helpful.
However, there is a problem.
Even cancer cells have to carry out the routine actions that all cells use to maintain themselves.
Presumably, the expression of most of the genes in the `NCI60` data is irrelevant to the peculiarities of cancer and the similarities and differences between different cancer types.
Data interpreting methods—including those in Chapters [10](ch-modeling.html#ch:modeling) and [11](ch-learningI.html#ch:learningI)—can be swamped by a wave of irrelevant data.
They are more likely to be effective if the irrelevant data can be removed.
Dimension reduction methods such as those described in Chapter [12](ch-learningII.html#ch:learningII) can be attractive for this purpose.
When you start down the road toward your goal of finding links among different cancer types, you don’t know if you will reach your destination.
If you don’t, before concluding that there are no relationships, it’s helpful to rule out some other possibilities. Perhaps the data reduction and data interpretation methods you used are not powerful enough.
Another set of methods might be better. Or perhaps there isn’t enough data to be able to detect the patterns you are looking for.
Simulations can help here. To illustrate, consider a rather simple data reduction technique for the `NCI60` microarray data.
If the expression of a probe is the same or very similar across all the different cancers, there’s nothing that that probe can tell us about the links among cancers.
One way to quantify the variation in a probe from cell line to cell line is the standard deviation of microarray readings for that probe.
It is a straightforward exercise in data wrangling to calculate this for each probe. The `NCI60` data come in a wide form: a matrix that’s 60 columns wide (one for each cell line) and 41,078 rows long (one row for each probe). This pipeline will find the standard deviation across cell lines for each probe.
```
library(tidyverse)
library(mdsr)
NCI60 <- etl_NCI60()
spreads <- NCI60 %>%
pivot_longer(
-Probe, values_to = "expression",
names_to = "cellLine"
) %>%
group_by(Probe) %>%
summarize(N = n(), spread = sd(expression)) %>%
arrange(desc(spread)) %>%
mutate(order = row_number())
```
`NCI60` has been rearranged into narrow format in `spreads`, with columns `Probe` and `spread` for each of 32,344 probes.
(A large number of the probes appear several times in the microarray, in one case as many as 14 times.)
We arrange this dataset in descending order by the size of the standard deviation, so we can collect the probes that exhibit the most variation across cell lines by taking the topmost ones in `spreads`.
For ease in plotting, we add the variable `order` to mark the order of each probe in the list.
How many of the probes with top standard deviations should we include in further data reduction and interpretation? 1? 10? 1000? 10,000? How should we go about answering this question?
We’ll use a simulation to help determine the number of probes that we select.
```
sim_spreads <- NCI60 %>%
pivot_longer(
-Probe, values_to = "expression",
names_to = "cellLine"
) %>%
mutate(Probe = mosaic::shuffle(Probe)) %>%
group_by(Probe) %>%
summarize(N = n(), spread = sd(expression)) %>%
arrange(desc(spread)) %>%
mutate(order = row_number())
```
What makes this a simulation is the `mutate()` command where we call `shuffle()`.
In that line, we replace each of the probe labels with a randomly selected label.
The result is that the `expression` has been statistically disconnected from any other variable, particularly `cellLine`.
The simulation creates the kind of data that would result from a system in which the probe expression data is meaningless.
In other words, the simulation mechanism matches the null hypothesis that the probe labels are irrelevant.
By comparing the real `NCI60` data to the simulated data, we can see which probes give evidence that the null hypothesis is false.
Let’s compare the top\-500 spread values in `spreads` and `sim_spreads`.
```
spreads %>%
filter(order <= 500) %>%
ggplot(aes(x = order, y = spread)) +
geom_line(color = "blue", size = 2) +
geom_line(
data = filter(sim_spreads, order <= 500),
color = "red",
size = 2
) +
geom_text(
label = "simulated", x = 275, y = 4.4,
size = 3, color = "red"
) +
geom_text(
label = "observed", x = 75, y = 5.5,
size = 3, color = "blue"
)
```
Figure 13\.1: Comparing the variation in expression for individual probes across cell lines data (blue) and a simulation of a null hypothesis (red).
We can tell a lot from the results of the simulation shown in Figure [13\.1](ch-simulation.html#fig:nci60sim).
If we decided to use the top\-500 probes, we would risk including many that were no more variable than random noise (i.e., that which could have been generated under the null hypothesis).
But if we set the threshold much lower, including, say, only those probes with a spread greater than 5\.0, we would be unlikely to include any that were generated by a mechanism consistent with the null hypothesis.
The simulation is telling us that it would be good to look at roughly the top\-50 probes, since that is about how many in `NCI60` were out of the range of the simulated results for the null hypothesis.
Methods of this sort are often identified as [*false discovery rate*](https://en.wikipedia.org/w/index.php?search=false%20discovery%20rate) methods.
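One way to operationalize the threshold suggested by the simulation is to count how many observed probes have a larger spread than the most variable probe in the shuffled data. This is only a sketch reusing `spreads` and `sim_spreads` from above; the exact count will vary from one shuffle to the next.

```
# How many observed probes exceed the largest spread seen under
# the null (shuffled) simulation?
spreads %>%
  filter(spread > max(sim_spreads$spread)) %>%
  nrow()
```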
13\.3 Randomizing functions
---------------------------
There are as many possible simulations as there are possible hypotheses—that is, an unlimited number. Different hypotheses call for different techniques for building simulations. But there are some techniques that appear in a wide range of simulations. It’s worth knowing about these.
The previous example about false discovery rates in gene expression uses an everyday method of randomization: [*shuffling*](https://en.wikipedia.org/w/index.php?search=shuffling). Shuffling is, of course, a way of destroying any genuine order in a sequence, leaving only those appearances of order that are due to chance. Closely\-related methods, [*sampling*](https://en.wikipedia.org/w/index.php?search=sampling) and [*resampling*](https://en.wikipedia.org/w/index.php?search=resampling), were introduced in Chapter [9](ch-foundations.html#ch:foundations) when we used simulation to assess the statistical significance of patterns observed in data.
Counter\-intuitively, the use of random numbers is an important component of many simulations. In simulation, we want to induce variation. For instance, the simulated probes for the cancer example do not all have the same spread. But in creating that variation, we do not want to introduce any structure other than what we specify explicitly in the simulation. Using random numbers ensures that any structure that we find in the simulation is either due to the mechanism we’ve built for the simulation or is purely accidental.
The workhorse of simulation is the generation of random numbers in the range from zero to one, with each possibility being equally likely. In **R**, the most widely\-used such [*uniform random number generator*](https://en.wikipedia.org/w/index.php?search=uniform%20random%20number%20generator) is `runif()`. For instance, here we ask for five uniform random numbers:
```
runif(5)
```
```
[1] 0.280 0.435 0.717 0.407 0.131
```
Other randomization devices can be built out of uniform random number generators. To illustrate, here is a device for selecting one value at random from a vector:
```
select_one <- function(vec) {
n <- length(vec)
ind <- which.max(runif(n))
vec[ind]
}
select_one(letters) # letters are a, b, c, ..., z
```
```
[1] "r"
```
```
select_one(letters)
```
```
[1] "i"
```
The `select_one()` function is functionally equivalent to `slice_sample()` with the `size` argument set to 1\.
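For a plain vector, base **R**’s `sample()` produces the same kind of draw (a minimal comparison; `slice_sample()` itself operates on data frames).

```
# One element chosen uniformly at random from a vector
sample(letters, size = 1)
```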
Random numbers are so important that you should try to use generators that have been written by experts and vetted by the community.
There is a lot of sophisticated theory behind programs that generate uniform random numbers. After all, you generally don’t want sequences of random numbers to repeat themselves. (An exception is described in Section [13\.6](ch-simulation.html#sec:key-principles-random).) The theory has to do with techniques for making repeated sub\-sequences as rare as possible.
Perhaps the widest use of simulation in data analysis involves the randomness introduced by sampling, resampling, and shuffling. These operations are provided by the functions `sample()`, `resample()`, and `shuffle()` from the **mosaic** package.
These functions sample uniformly at random from a data frame (or vector) with or without replacement, or permute the rows of a data frame. `resample()` is equivalent to `sample()` with the `replace` argument set to `TRUE`, while `shuffle()` is equivalent to `sample()` with `size` equal to the number of rows in the data frame and `replace` equal to `FALSE`. Non\-uniform sampling can be achieved using the `prob` option.
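A minimal illustration of the three operations on a small vector (results will differ from run to run):

```
x <- 1:6
sample(x, size = 3)            # three values, without replacement
mosaic::resample(x, size = 3)  # three values, with replacement
mosaic::shuffle(x)             # a random permutation of all six values
```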
Other important functions for building simulations are those that generate random numbers with certain important properties. We’ve already seen `runif()` for creating uniform random numbers. Very widely used are `rnorm()`, `rexp()`, and `rpois()` for generating numbers that are distributed normally (that is, in the bell\-shaped [*Gaussian distribution*](https://en.wikipedia.org/w/index.php?search=Gaussian%20distribution)), exponentially, and with a Poisson (count) pattern, respectively. These different distributions correspond to idealized descriptions of mechanisms in the real world. For instance, events that are equally likely to happen at any time (e.g., earthquakes) will tend to have a time spacing between events that is exponential. Events that have a rate that remains the same over time (e.g., the number of cars passing a point on a low\-traffic road in one minute) are often modeled using a [*Poisson distribution*](https://en.wikipedia.org/w/index.php?search=Poisson%20distribution). There are many other forms of distributions that are considered good models of particular random processes.
Functions analogous to `runif()` and `rnorm()` are available for other common probability distributions (see the [Probability Distributions CRAN Task View](https://cran.r-project.org/web/views/Distributions.html)).
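As a quick illustration, here is one draw of five values from each family; the parameter values are chosen arbitrarily and the results will differ from run to run.

```
rnorm(5, mean = 0, sd = 1)  # Gaussian with mean 0 and sd 1
rexp(5, rate = 1)           # exponential with rate 1
rpois(5, lambda = 3)        # Poisson counts with mean 3
```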
13\.4 Simulating variability
----------------------------
### 13\.4\.1 The partially\-planned rendezvous
Imagine a situation where Sally and Joan plan to meet to study in their college campus center (F. Mosteller 1987\).
They are both impatient people who will wait only 10 minutes for the other before leaving.
But their planning was incomplete.
Sally said, “Meet me between 7 and 8 tonight at the center.”
When should Joan plan to arrive at the campus center? And what is the probability that they actually meet?
A simulation can help answer these questions. Joan might reasonably assume that it doesn’t really matter when she arrives, and that Sally is equally likely to arrive any time between 7:00 and 8:00 pm.
So to Joan, Sally’s arrival time is random and uniformly distributed between 7:00 and 8:00 pm.
The same is true for Sally.
Such a simulation is easy to write: generate uniform random numbers between 0 and 60 minutes after 7:00 pm. For each pair of such numbers, check whether or not the time difference between them is 10 minutes or less. If so, they successfully met. Otherwise, they missed each other.
Here’s an implementation in **R**, with 100,000 trials of the simulation being run to make sure that the possibilities are well covered.
```
n <- 100000
sim_meet <- tibble(
sally = runif(n, min = 0, max = 60),
joan = runif(n, min = 0, max = 60),
result = ifelse(
abs(sally - joan) <= 10, "They meet", "They do not"
)
)
mosaic::tally(~ result, format = "percent", data = sim_meet)
```
```
result
They do not They meet
69.4 30.6
```
```
mosaic::binom.test(~result, n, success = "They meet", data = sim_meet)
```
```
data: sim_meet$result [with success = They meet]
number of successes = 30601, number of trials = 1e+05, p-value
<2e-16
alternative hypothesis: true probability of success is not equal to 0.5
95 percent confidence interval:
0.303 0.309
sample estimates:
probability of success
0.306
```
There’s about a 30% chance that they meet (the true probability is \\(11/36 \\approx 0\.3055556\\)).
The confidence interval, the width of which is determined in part by the number of simulations, is relatively narrow.
Any reasonable decision that Joan might consider (“Oh, it seems unlikely we’ll meet. I’ll just skip it.”) would be the same regardless of which end of the confidence interval is considered.
So the simulation is good enough for Joan’s purposes.
(If the interval was not narrow enough for this, you would want to add more trials. The \\(1/\\sqrt{n}\\) rule for the width of a confidence interval described in Chapter [9](ch-foundations.html#ch:foundations) can guide your choice.)
```
ggplot(data = sim_meet, aes(x = joan, y = sally, color = result)) +
geom_point(alpha = 0.3) +
geom_abline(intercept = 10, slope = 1) +
geom_abline(intercept = -10, slope = 1) +
scale_color_brewer(palette = "Set2")
```
Figure 13\.2: Distribution of Sally and Joan arrival times (shaded area indicates where they meet).
Often, it’s valuable to visualize the possibilities generated in the simulation as in Figure [13\.2](ch-simulation.html#fig:sally1). The arrival times uniformly cover the rectangle of possibilities, but only those that fall into the stripe in the center of the plot are successful. Looking at the plot,
Joan notices a pattern. For any arrival time she plans, the probability of success is the fraction of a vertical band of the plot that is covered in blue. For instance, if Joan chose to arrive at 7:20, the probability of success is the proportion of blue in the vertical band with boundaries of 20 minutes and 30 minutes on the horizontal axis. Joan observes that near 0 and 60 minutes, the probability goes down, since the diagonal band tapers. This observation guides an important decision: Joan will plan to arrive somewhere from 7:10 to 7:50\. Following this strategy, what is the probability of success? (Hint: Repeat the simulation but set Joan’s `min()` to 10 and her `max()` to 50\.)
If Joan had additional information about Sally (“She wouldn’t arrange to meet at 7:21—most likely at 7:00, 7:15, 7:30, or 7:45\.”) the simulation can be easily modified, e.g., `sally = resample(c(0, 15, 30, 45), n)` to incorporate that hypothesis.
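Here is a sketch of the modified simulation suggested by the hint, with Joan planning to arrive between 7:10 and 7:50. It reuses `n` from above; we leave the resulting proportion for the reader to inspect.

```
sim_meet_2 <- tibble(
  sally = runif(n, min = 0, max = 60),
  joan = runif(n, min = 10, max = 50),
  result = ifelse(
    abs(sally - joan) <= 10, "They meet", "They do not"
  )
)
mosaic::tally(~ result, format = "percent", data = sim_meet_2)
```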
### 13\.4\.2 The jobs report
One hour before the opening of the stock market on the first Friday of each month, the [*U.S. Bureau of Labor Statistics*](https://en.wikipedia.org/w/index.php?search=U.S.%20Bureau%20of%20Labor%20Statistics) releases the employment report.
This widely anticipated estimate of the monthly change in non\-farm payroll is an economic indicator that often leads to stock market shifts.
If you read the financial blogs, you’ll hear lots of speculation before the report is released, and lots to account for the change in the stock market in the minutes *after* the report comes out. And you’ll hear a lot of anticipation of the consequences of that month’s job report on the prospects for the economy as a whole. It happens that many financiers read a lot into the ups and downs of the jobs report. (And other people, who don’t take the report so seriously, see opportunities in responding to the actions of the believers.)
You are a skeptic. You know that in the months after the jobs report, an updated number is reported that is able to take into account late\-arriving data that couldn’t be included in the original report. One analysis, the article [“How not to be misled by the jobs report”](http://www.nytimes.com/2014/05/02/upshot/how-not-to-be-misled-by-the-jobs-report.html) from the May 1, 2014 *New York Times* modeled the monthly report as a random number from a Gaussian distribution with a mean of 150,000 jobs and a standard deviation of 65,000 jobs.
You are preparing a briefing for your bosses to convince them not to take the jobs report itself seriously as an economic indicator. For many bosses, the phrases “Gaussian distribution,” “standard deviation,” and “confidence interval” will trigger a primitive “I’m not listening!” response, so your message won’t get through in that form.
It turns out that many such people will have a better understanding of a simulation than of theoretical concepts. You decide on a strategy: Use a simulation to generate a year’s worth of job reports. Ask the bosses what patterns they see and what they would look for in the next month’s report. Then inform them that there are no actual patterns in the graphs you showed them.
```
jobs_true <- 150
jobs_se <- 65 # in thousands of jobs
gen_samp <- function(true_mean, true_sd,
num_months = 12, delta = 0, id = 1) {
samp_year <- rep(true_mean, num_months) +
rnorm(num_months, mean = delta * (1:num_months), sd = true_sd)
return(
tibble(
jobs_number = samp_year,
month = as.factor(1:num_months),
id = id
)
)
}
```
We begin by defining some constants that will be needed, along with a function to calculate a year’s worth of monthly samples from this known truth.
Since the default value of `delta` is equal to zero, the “true” value remains constant over time. When the function argument `true_sd` is set to
`0`, no random noise is added to the system.
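For instance, a single call with the noise turned off returns the constant “true” value for every month (a usage sketch; output not shown):

```
gen_samp(true_mean = jobs_true, true_sd = 0, id = "Truth") %>%
  head(3)
```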
Next, we prepare a data frame that contains the function argument values over which we want to simulate. In this case, we want our first simulation to have no random noise—thus the `true_sd` argument will be set to `0` and the `id` argument will be set to `Truth`. Following that, we will generate three random simulations with `true_sd` set to the assumed value of `jobs_se`. The data frame `params` contains complete information about the simulations we want to run.
```
n_sims <- 3
params <- tibble(
sd = c(0, rep(jobs_se, n_sims)),
id = c("Truth", paste("Sample", 1:n_sims))
)
params
```
```
# A tibble: 4 × 2
sd id
<dbl> <chr>
1 0 Truth
2 65 Sample 1
3 65 Sample 2
4 65 Sample 3
```
Finally, we will actually perform the simulation using the `pmap_dfr()` function from the **purrr** package (see Chapter [7](ch-iteration.html#ch:iteration)). This will iterate over the `params` data frame and apply the appropriate values to each simulation.
```
df <- params %>%
pmap_dfr(~gen_samp(true_mean = jobs_true, true_sd = ..1, id = ..2))
```
Note how the two arguments are given in a compact and flexible form (`..1` and `..2`).
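An equivalent, more explicit formulation names the columns of `params` in an anonymous function rather than relying on `..1` and `..2`. This is a sketch of an alternative, not the code used to produce the figures below.

```
df_alt <- params %>%
  pmap_dfr(function(sd, id) {
    gen_samp(true_mean = jobs_true, true_sd = sd, id = id)
  })
```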
```
ggplot(data = df, aes(x = month, y = jobs_number)) +
geom_hline(yintercept = jobs_true, linetype = 2) +
geom_col() +
facet_wrap(~ id) +
ylab("Number of new jobs (in thousands)")
```
Figure 13\.3: True number of new jobs from simulation as well as three realizations from a simulation.
Figure [13\.3](ch-simulation.html#fig:nytimes) displays the “true” number as well as three realizations from the simulation.
While all three samples are taken from a “true” universe where the jobs number is constant, each could easily be misinterpreted to conclude that the number of new jobs was decreasing at some point during the series. The moral is clear: it is important to understand the underlying variability of a system before drawing inferential conclusions.
### 13\.4\.3 Restaurant health and sanitation grades
We take our next simulation from the data set of restaurant health violations in [*New York City*](https://en.wikipedia.org/w/index.php?search=New%20York%20City).
To help ensure the safety of patrons, health inspectors make unannounced inspections at least once per year to each restaurant.
Establishments are graded based on [a range of criteria](http://www.nyc.gov/html/doh/downloads/pdf/rii/how-we-score-grade.pdf) including food handling, personal hygiene, and vermin control.
Those with a score between 0 and 13 points receive a coveted A grade, those with 14 to 27 points receive the less desirable B, and those of 28 or above receive a C.
We’ll display values in a subset of this range to focus on the threshold between an A and B grade, after grouping by `dba` (Doing Business As) and `score`.
We focus our analysis on the year 2015\.
```
minval <- 7
maxval <- 19
violation_scores <- Violations %>%
filter(lubridate::year(inspection_date) == 2015) %>%
filter(score >= minval & score <= maxval) %>%
select(dba, score)
```
```
ggplot(data = violation_scores, aes(x = score)) +
geom_histogram(binwidth = 0.5) +
geom_vline(xintercept = 13, linetype = 2) +
scale_x_continuous(breaks = minval:maxval) +
annotate(
"text", x = 10, y = 15000,
label = "'A' grade: score of 13 or less"
)
```
Figure 13\.4: Distribution of NYC restaurant health violation scores.
Figure [13\.4](ch-simulation.html#fig:rest) displays the distribution of restaurant violation scores.
Is something unusual happening at the threshold of 13 points (the highest value to still receive an A)?
Or could sampling variability be the cause of the dramatic decrease in the frequency of restaurants graded between 13 and 14 points?
Let’s carry out a simple simulation in which a grade of 13 or 14 is equally likely. The `nflip()` function allows us to flip a fair coin that determines whether a grade is a 14 (heads) or 13 (tails).
```
scores <- mosaic::tally(~score, data = violation_scores)
scores
```
```
score
7 8 9 10 11 12 13 14 15 16 17 18
5985 3026 8401 9007 8443 13907 9021 2155 2679 2973 4720 4119
19
4939
```
```
mean(scores[c("13", "14")])
```
```
[1] 5588
```
```
random_flip <- 1:1000 %>%
map_dbl(~mosaic::nflip(scores["13"] + scores["14"])) %>%
enframe(name = "sim", value = "heads")
head(random_flip, 3)
```
```
# A tibble: 3 × 2
sim heads
<int> <dbl>
1 1 5648
2 2 5614
3 3 5642
```
```
ggplot(data = random_flip, aes(x = heads)) +
geom_histogram(binwidth = 10) +
geom_vline(xintercept = scores["14"], col = "red") +
annotate(
"text", x = 2200, y = 75,
label = "observed", hjust = "left"
) +
xlab("Number of restaurants with scores of 14 (if equal probability)")
```
Figure 13\.5: Distribution of health violation scores under a randomization procedure (permutation test).
Figure [13\.5](ch-simulation.html#fig:rest2) demonstrates that the observed number of restaurants with a 14 are nowhere near what we would expect if there was an equal chance of receiving a score of 13 or 14\.
While the number of restaurants receiving a 13 might exceed the number receiving a 14 by 100 or so due to chance alone, there is essentially no chance of observing 5,000 more 13s than 14s if the two scores are truly equally likely.
(It is not surprising given the large number of restaurants inspected in New York City that we wouldn’t observe much sampling variability in terms of the proportion that are 14\.)
It appears as if the inspectors tend to give restaurants near the threshold the benefit of the doubt, and not drop their grade from A to B if the restaurant is on the margin between a 13 and 14 grade.
This is another situation where simulation can provide a more intuitive solution starting from first principles than an investigation using more formal statistical methods.
(A more nuanced test of the “edge effect” might be considered given the drop in the numbers of restaurants with violation scores between 14 and 19\.)
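For readers who do want the formal counterpart, an exact binomial test of whether a score of 14 is as likely as a score of 13 can be run on the counts tallied above in `scores`. This is a sketch offered for comparison with the simulation, not part of the original analysis.

```
# Exact binomial test: is a score of 14 as likely as a score of 13?
binom.test(
  x = as.numeric(scores["14"]),
  n = as.numeric(scores["13"] + scores["14"]),
  p = 0.5
)
```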
13\.5 Random networks
---------------------
As noted in Chapter [2](ch-vizI.html#ch:vizI), a network (or graph) is a collection of nodes, along with edges that connect certain pairs of those nodes.
Networks are often used to model real\-world systems that contain these pairwise relationships.
Although these networks are often simple to describe, many of the interesting problems in the mathematical discipline of graph theory are very hard to solve analytically, and intractable computationally (Garey and Johnson 1979\).
For this reason, simulation has become a useful technique for exploring questions in [*network science*](https://en.wikipedia.org/w/index.php?search=network%20science).
We illustrate how simulation can be used to verify properties of random graphs in Chapter [20](ch-netsci.html#ch:netsci).
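As a small taste of what such simulations look like, the sketch below generates a single Erdős–Rényi random graph and checks its mean degree against the expected value. It assumes the **igraph** package, which is not loaded elsewhere in this chapter.

```
library(igraph)
# One random graph on 100 nodes, where each possible edge is present
# independently with probability 0.05
g <- sample_gnp(n = 100, p = 0.05)
# The mean degree should be close to (100 - 1) * 0.05 = 4.95
mean(degree(g))
```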
13\.6 Key principles of simulation
----------------------------------
Many of the key principles needed to develop the capacity to simulate come straight from computer science, including aspects of design, modularity, and reproducibility.
In this section, we will briefly propose guidelines for simulations.
### 13\.6\.1 Design
It is important to consider design issues relative to simulation. As the analyst, you control all aspects and
decide what assumptions and scenarios to explore.
You have the ability (and responsibility) to determine which scenarios are relevant and what assumptions are appropriate.
The choice of scenarios depends on the underlying model: they should reflect plausible situations that are relevant to the problem at hand.
It is often useful to start with a simple setting, then gradually add complexity as needed.
### 13\.6\.2 Modularity
It is very helpful to write a function to implement the simulation, which can be called repeatedly with different options and parameters (see Appendix [C](ch-function.html#ch:function)). Spending time planning what features the simulation might have,
and how these can be split off into different functions (that might be reused in other simulations)
will pay off handsomely.
### 13\.6\.3 Reproducibility and random number seeds
It is important that simulations are both reproducible and representative.
Sampling variability is inherent in simulations: Our results will be sensitive to the number of computations that we are willing to carry out.
We need to find a balance to avoid unneeded calculations while ensuring that our results aren’t subject to random fluctuation. What is a reasonable number of simulations to consider?
Let’s revisit Sally and Joan, who will meet only if they both arrive within 10 minutes of each other.
How variable are our estimates if we carry out only `num_sims` \\(\=100\\) simulations?
We’ll assess this by carrying out 5,000 replications, saving the results from each simulation of 100 possible meetings.
Then we’ll repeat the process, but with `num_sims` \\(\=400\\) and `num_sims` \\(\=1600\\).
Note that we can do this efficiently using `map_dfr()` twice (once to iterate over the changing number of simulations, and once to repeat the procedure 5,000 times).
```
campus_sim <- function(sims = 1000, wait = 10) {
sally <- runif(sims, min = 0, max = 60)
joan <- runif(sims, min = 0, max = 60)
return(
tibble(
num_sims = sims,
meet = sum(abs(sally - joan) <= wait),
meet_pct = meet / num_sims,
)
)
}
reps <- 5000
sim_results <- 1:reps %>%
map_dfr(~map_dfr(c(100, 400, 1600), campus_sim))
sim_results %>%
group_by(num_sims) %>%
skim(meet_pct)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var num_sims n na mean sd p0 p25 p50 p75 p100
1 meet_pct 100 5000 0 0.305 0.0460 0.12 0.28 0.3 0.33 0.49
2 meet_pct 400 5000 0 0.306 0.0231 0.23 0.29 0.305 0.322 0.39
3 meet_pct 1600 5000 0 0.306 0.0116 0.263 0.298 0.306 0.314 0.352
```
Note that each of the simulations yields an unbiased estimate of the true probability that they meet, but there is variability within each individual simulation (of size 100, 400, or 1600\).
The standard deviation is halved each time we increase the number of simulations by a factor of 4\.
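This pattern matches the usual binomial standard error \\(\\sqrt{p(1\-p)/n}\\). A quick arithmetic check with the true probability \\(p \= 11/36\\) reproduces, approximately, the `sd` column above.

```
p_true <- 11 / 36
sqrt(p_true * (1 - p_true) / c(100, 400, 1600))
```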
We can display the results graphically (see Figure [13\.6](ch-simulation.html#fig:converge)).
```
sim_results %>%
ggplot(aes(x = meet_pct, color = factor(num_sims))) +
geom_density(size = 2) +
geom_vline(aes(xintercept = 11/36), linetype = 3) +
scale_x_continuous("Proportion of times that Sally and Joan meet") +
scale_color_brewer("Number\nof sims", palette = "Set2")
```
Figure 13\.6: Convergence of the estimate of the proportion of times that Sally and Joan meet.
What would be a reasonable value for `num_sims` in this setting?
The answer depends on how accurate we want to be.
(And we can also simulate to see how variable our results are!)
Carrying out 20,000 simulations yields relatively little variability and would likely be sufficient for a first pass.
We could state that these results have [*converged*](https://en.wikipedia.org/w/index.php?search=converged) sufficiently close to the true value since the sampling variability due to the simulation is negligible.
```
1:reps %>%
map_dfr(~campus_sim(20000)) %>%
group_by(num_sims) %>%
skim(meet_pct)
```
Given the inherent nature of variability due to sampling, it can be very useful to set (and save) a [*seed*](https://en.wikipedia.org/w/index.php?search=seed) for the
pseudo\-random number generator (using the `set.seed()` function).
This ensures that the results are the same each time the simulation is run since the simulation will use the same list of random numbers.
The seed itself is arbitrary, but each seed defines a different sequence of random numbers.
```
set.seed(1974)
campus_sim()
```
```
# A tibble: 1 × 3
num_sims meet meet_pct
<dbl> <int> <dbl>
1 1000 308 0.308
```
```
campus_sim()
```
```
# A tibble: 1 × 3
num_sims meet meet_pct
<dbl> <int> <dbl>
1 1000 331 0.331
```
```
set.seed(1974)
campus_sim()
```
```
# A tibble: 1 × 3
num_sims meet meet_pct
<dbl> <int> <dbl>
1 1000 308 0.308
```
13\.7 Further resources
-----------------------
This chapter has been a basic introduction to simulation.
Over the last 30 years, the ability to use simulation to match observed data has become an essential component of [*Bayesian statistics*](https://en.wikipedia.org/w/index.php?search=Bayesian%20statistics).
A central technique is called [*Markov Chain Monte Carlo*](https://en.wikipedia.org/w/index.php?search=Markov%20Chain%20Monte%20Carlo) (MCMC).
For an accessible introduction to Bayesian methods, see Jim Albert and Hu (2019\).
Rizzo (2019\) provides a comprehensive introduction to statistical computing in **R**, while Horton, Brown, and Qian (2004\) and Horton (2013\) describe the use of **R** for simulation studies.
The importance of simulation as part of an analyst’s toolbox is enunciated in American Statistical Association Undergraduate Guidelines Workgroup (2014\), Horton (2015\), and National Academies of Science, Engineering, and Medicine (2018\).
The **simstudy** package can be used to simplify data generation or exploration using simulation.
13\.8 Exercises
---------------
**Problem 1 (Medium)**: The time a manager takes to interview a job applicant has an exponential distribution with a mean of half an hour, and these times are independent of each other. The applicants are scheduled at quarter\-hour intervals beginning at 8:00 am, and all of the applicants arrive exactly on time (this is an excellent thing to do, by the way). When the applicant with an 8:15 am appointment arrives at the manager’s office, what is the probability that she will have to wait before seeing the manager? What is the expected time at which her interview will finish?
**Problem 2 (Medium)**: Consider an example where a recording device that measures remote activity is placed in a remote location. The time, \\(T\\), to failure of the remote device has an exponential distribution with mean of 3 years. Since the location is so remote, the device will not be monitored during its first 2 years of service. As a result, the time to
discovery of its failure is \\(X\\) \= max\\((T, 2\)\\). The problem here is to determine the average of the time to discovery of the truncated variable (in probability parlance, the expected value of the observed variable \\(X\\), E\[X]).
The analytic solution is fairly straightforward but requires calculus. We need to evaluate:
\\\[E\[X] \= \\int\_0^{2} 2 f(u) du \+ \\int\_2^{\\infty} u f(u) du,\\] where \\(f(u) \= 1/3 \\exp{(\-1/3 u)}\\) for \\(u \> 0\\).
We can use the calculus functions in the `mosaicCalc` package to find the answer.
Is calculus strictly necessary here?
Conduct a simulation to estimate (or check) the value for the average time to discovery.
**Problem 3 (Medium)**: Two people toss a fair coin 4 times each. Find the probability that they throw equal numbers of heads. Also estimate the probability that they throw equal numbers of heads using a simulation in R (with an associated 95% confidence interval for your estimate).
**Problem 4 (Medium)**: In this chapter, we considered a simulation where the true jobs number remained constant over time. Modify the call to the function provided in that example so that the true situation is that there are 15,000 new jobs created every month. Set your random number seed to the value \\(1976\\). Summarize what you might conclude from these results as if you were a journalist without a background in data science.
**Problem 5 (Medium)**: The `Violations` dataset in the `mdsr` package contains information about health violations across different restaurants in New York City. Is there evidence that restaurant health inspectors in New York City give the benefit of the doubt to those at the threshold between a B grade (14 to 27\) or C grade (28 or above)?
**Problem 6 (Medium)**: Sally and Joan plan to meet to study in their college campus center. They are both impatient people who will only wait 10 minutes for the other before leaving. Rather than pick a specific time to meet, they agree to head over to the campus center sometime between 7:00 and 8:00 pm. Let both arrival times be normally distributed with mean 30 minutes past and a standard deviation of 10 minutes. Assume that they are independent of each other. What is the probability that they actually meet? Estimate the answer using simulation techniques introduced in this chapter, with at least 10,000 simulations.
**Problem 7 (Medium)**: What is the impact if the residuals from a linear regression model are skewed (and not from a normal distribution)?
Repeatedly generate data from a “true” model given by:
```
n <- 250
rmse <- 1
x1 <- rep(c(0, 1), each = n / 2) # x1 resembles 0 0 0 ... 1 1 1
x2 <- runif(n, min = 0, max = 5)
beta0 <- -1
beta1 <- 0.5
beta2 <- 1.5
y <- beta0 + beta1 * x1 + beta2 * x2 + rexp(n, rate = 1 / 2)
```
For each simulation, fit the linear regression model and display the distribution of 1,000 estimates of the \\(\\beta\_1\\) parameter (note that you need to generate the vector of outcomes each time).
**Problem 8 (Medium)**: What is the impact of the violation of the equal variance
assumption for linear regression models? Repeatedly generate data from a “true” model given by the following code.
```
n <- 250
rmse <- 1
x1 <- rep(c(0, 1), each = n / 2) # x1 resembles 0 0 0 ... 1 1 1
x2 <- runif(n, min = 0, max = 5)
beta0 <- -1
beta1 <- 0.5
beta2 <- 1.5
y <- beta0 + beta1 * x1 + beta2 * x2 + rnorm(n, mean = 0, sd = rmse + x2)
```
For each simulation, fit the linear regression model and display the distribution of 1,000 estimates of the \\(\\beta\_1\\) parameter (note that you need to generate the vector of outcomes each time). Does the distribution of the parameter follow a normal distribution?
**Problem 9 (Medium)**: Generate \\(n\=5,000\\) observations from a logistic regression model with parameters
intercept \\(\\beta\_0\=\-1\\), slope \\(\\beta\_1\=0\.5\\), and distribution of the predictor being normal with mean 1 and standard deviation 1\. Calculate and interpret the resulting parameter estimates and confidence intervals.
13\.9 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/simulation.html\#simulation\-online\-exercises](https://mdsr-book.github.io/mdsr2e/simulation.html#simulation-online-exercises)
**Problem 1 (Medium)**: Consider a queuing example where customers arrive at a bank at a given minute past the hour and are served by the next available teller.
Use the following data to explore wait times for a bank with one teller vs. one with two tellers, where the duration of the transaction is given below.
| arrival | duration |
| --- | --- |
| 1 | 3 |
| 3 | 2 |
| 7 | 5 |
| 10 | 6 |
| 11 | 8 |
| 15 | 1 |
What is the average total time for customers in the bank with one teller? What is the average for a bank with two tellers?
**Problem 2 (Hard)**: The Monty Hall problem illustrates a simple setting where intuition is often misleading. The situation is based on the TV game show *Let’s Make a Deal*.
First, Monty (the host) puts a prize behind one of three doors. Then the player chooses a door. Next, (without moving the prize) Monty opens an unselected door, revealing that the prize is not behind it. The player may then switch to the other nonselected door. Should the player switch?
Many people see that there are now two doors to choose between and feel that since Monty can always open a nonprize door, there is still equal probability for each door. If that were the case, the player might as well keep the original door. This intuition is so attractive that when Marilyn vos Savant asserted that the player should switch (in her *Parade* magazine column), there were reportedly 10,000 letters asserting she was wrong.
A correct intuitive route is to observe that Monty’s door is fixed. The probability that the player has the right door is 1/3 before Monty opens the nonprize door, and remains 1/3 after that door is open. This means that the probability the prize is behind one of the other doors is 2/3, both before and after Monty opens the nonprize door. After Monty opens the nonprize door, the player gets a 2/3 chance of winning by switching to the remaining door. If the player wants to win, they should switch doors.
One way to prove to yourself that switching improves your chances of winning is through simulation. In fact, even deciding how to code the problem may be enough to convince yourself to switch.
In the simulation, you need to assign the prize to a door, then make an initial guess. If the guess was right, Monty can open either door. We’ll switch to the other door. Rather than have Monty choose a door, we’ll choose one, under the assumption that Monty opened the other one. If our initial guess was wrong, Monty will open the only remaining nonprize door, and when we switch we’ll be choosing the prize door.
13\.1 Reasoning in reverse
--------------------------
In Chapter [1](ch-prologue.html#ch:prologue) of this book we stated a simple truth: The purpose of data science is to turn data into usable information. Another way to think of this is that we use data to improve our understanding of the systems we live and work with: Data \\(\\rightarrow\\) Understanding.
This chapter is about computing techniques relating to the reverse way of thinking: Speculation \\(\\rightarrow\\) Data. In other words, this chapter is about “making up data.”
Many people associate “making up data” with deception. Certainly, data can be made up for exactly that purpose. Our purpose is different. We are interested in legitimate purposes for making up data, purposes that support the proper use of data science in transforming data into understanding.
How can made\-up data be legitimately useful? In order to make up data, you need to build a mechanism that contains—implicitly—an idea about how the system you are interested in works. The data you make up tell you what data generated by that system would look like. There are two main (legitimate) purposes for doing this:
* Conditional inference. If our mechanism is reflective of how the real system works, the data it generates are similar to real data. You might use these to inform tweaks to the mechanism in order to produce even more representative results. This process can help you refine your understanding in ways that are relevant to the real world.
* Winnowing out hypotheses. To “winnow” means to remove from a set the less desirable choices so that what remains is useful. Traditionally, grain was winnowed to separate the edible parts from the inedible chaff. For data science, the set is composed of hypotheses, which are ideas about how the world works. Data are generated from each hypothesis and compared to the data we collect from the real world. When the hypothesis\-generated data fails to resemble the
real\-world data, we can remove that hypothesis from the set. What remains are hypotheses that are plausible candidates for describing the real\-world mechanisms.
“Making up” data is undignified, so we will leave that term to refer to fraud and trickery. In its place we’ll use use [*simulation*](https://en.wikipedia.org/w/index.php?search=simulation), which derives from “similar.” Simulations involve constructing mechanisms that are similar to how systems in the real world work—or at least to our belief and understanding of how such systems work.
13\.2 Extended example: Grouping cancers
----------------------------------------
There are many different kinds of cancer.
They are often given the name of the tissue in which they originate: lung cancer, ovarian cancer, prostate cancer, and so on.
Different kinds of cancer are treated with different chemotherapeutic drugs.
But perhaps the tissue origin of each cancer is not the best indicator of how it should be treated.
Could we find a better way?
Let’s revisit the data introduced in Section [3\.2\.4](ch-vizII.html#sec:datanetwork).
Like all cells, cancer cells have a genome containing tens of thousands of genes.
Sometimes just a few genes dictate a cell’s behavior. Other times there are networks of genes that regulate one another’s expression in ways that shape cell features, such as the over\-rapid reproduction characteristic of cancer cells.
It is now possible to examine the expression of individual genes within a cell.
So\-called [*microarrays*](https://en.wikipedia.org/w/index.php?search=microarrays) are routinely used for this purpose. Each microarray has tens to hundreds of thousands of [*probes*](https://en.wikipedia.org/w/index.php?search=probes) for gene activity.
The result of a microarray assay is a snapshot of gene activity.
By comparing snapshots of cells in different states, it’s possible to identify the genes that are expressed differently in the states.
This can provide insight into how specific genes govern various aspects of cell activity.
A data scientist, as part of a team of biomedical researchers, might take on the job of compiling data from many microarray assays to identify whether different types of cancer are related based on their gene expression.
For instance, the `NCI60` data (provided by the `etl_NCI60()` function in the **mdsr** package) contains readings from assays of \\(n\=60\\) different cell lines of cancer of different tissue types. For each cell line, the data contain readings on over \\(p\>40,000\\) different probes.
Your job might be to find relationships between different cell lines based on the patterns of probe expression.
For this purpose, you might find the techniques of statistical learning and unsupervised learning from Chapters [10](ch-modeling.html#ch:modeling)–[12](ch-learningII.html#ch:learningII) to be helpful.
However, there is a problem.
Even cancer cells have to carry out the routine actions that all cells use to maintain themselves.
Presumably, the expression of most of the genes in the `NCI60` data are irrelevant to the pecularities of cancer and the similarities and differences between different cancer types.
Data interpreting methods—including those in Chapters [10](ch-modeling.html#ch:modeling) and [11](ch-learningI.html#ch:learningI)—can be swamped by a wave of irrelevant data.
They are more likely to be effective if the irrelevant data can be removed.
Dimension reduction methods such as those described in Chapter [12](ch-learningII.html#ch:learningII) can be attractive for this purpose.
When you start down the road toward your goal of finding links among different cancer types, you don’t know if you will reach your destination.
If you don’t, before concluding that there are no relationships, it’s helpful to rule out some other possibilities. Perhaps the data reduction and data interpretation methods you used are not powerful enough.
Another set of methods might be better. Or perhaps there isn’t enough data to be able to detect the patterns you are looking for.
Simulations can help here. To illustrate, consider a rather simple data reduction technique for the `NCI60` microarray data.
If the expression of a probe is the same or very similar across all the different cancers, there’s nothing that that probe can tell us about the links among cancers.
One way to quantify the variation in a probe from cell line to cell line is the standard deviation of microarray readings for that probe.
It is a straightforward exercise in data wrangling to calculate this for each probe. The `NCI60` data come in a wide form: a matrix that’s 60 columns wide (one for each cell line) and 41,078 rows long (one row for each probe). This pipeline will find the standard deviation across cell lines for each probe.
```
library(tidyverse)
library(mdsr)
NCI60 <- etl_NCI60()
spreads <- NCI60 %>%
pivot_longer(
-Probe, values_to = "expression",
names_to = "cellLine"
) %>%
group_by(Probe) %>%
summarize(N = n(), spread = sd(expression)) %>%
arrange(desc(spread)) %>%
mutate(order = row_number())
```
`NCI60` has been rearranged into narrow format in `spreads`, with columns `Probe` and `spread` for each of 32,344 probes.
(A large number of the probes appear several times in the microarray, in one case as many as 14 times.)
We arrange this dataset in descending order by the size of the standard deviation, so we can collect the probes that exhibit the most variation across cell lines by taking the topmost ones in `spreads`.
For ease in plotting, we add the variable `order` to mark the order of each probe in the list.
How many of the probes with top standard deviations should we include in further data reduction and interpretation? 1? 10? 1000? 10,000? How should we go about answering this question?
We’ll use a simulation to help determine the number of probes that we select.
```
sim_spreads <- NCI60 %>%
pivot_longer(
-Probe, values_to = "expression",
names_to = "cellLine"
) %>%
mutate(Probe = mosaic::shuffle(Probe)) %>%
group_by(Probe) %>%
summarize(N = n(), spread = sd(expression)) %>%
arrange(desc(spread)) %>%
mutate(order = row_number())
```
What makes this a simulation is the `mutate()` command where we call `shuffle()`.
In that line, we replace each of the probe labels with a randomly selected label.
The result is that the `expression` has been statistically disconnected from any other variable, particularly `cellLine`.
The simulation creates the kind of data that would result from a system in which the probe expression data is meaningless.
In other words, the simulation mechanism matches the null hypothesis that the probe labels are irrelevant.
By comparing the real `NCI60` data to the simulated data, we can see which probes give evidence that the null hypothesis is false.
Let’s compare the top\-500 spread values in `spreads` and `sim_spreads`.
```
spreads %>%
filter(order <= 500) %>%
ggplot(aes(x = order, y = spread)) +
geom_line(color = "blue", size = 2) +
geom_line(
data = filter(sim_spreads, order <= 500),
color = "red",
size = 2
) +
geom_text(
label = "simulated", x = 275, y = 4.4,
size = 3, color = "red"
) +
geom_text(
label = "observed", x = 75, y = 5.5,
size = 3, color = "blue"
)
```
Figure 13\.1: Comparing the variation in expression for individual probes across cell lines in the `NCI60` data (blue) and a simulation of a null hypothesis (red).
We can tell a lot from the results of the simulation shown in Figure [13\.1](ch-simulation.html#fig:nci60sim).
If we decided to use the top\-500 probes, we would risk including many that were no more variable than random noise (i.e., that which could have been generated under the null hypothesis).
But if we set the threshold much lower, including, say, only those probes with a spread greater than 5\.0, we would be unlikely to include any that were generated by a mechanism consistent with the null hypothesis.
The simulation is telling us that it would be good to look at roughly the top\-50 probes, since that is about how many in `NCI60` were out of the range of the simulated results for the null hypothesis.
Methods of this sort are often identified as [*false discovery rate*](https://en.wikipedia.org/w/index.php?search=false%20discovery%20rate) methods.
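As a concrete follow\-up, one could use the largest spread seen under the null simulation as a cutoff and count how many observed probes exceed it. The sketch below is ours, not part of the original analysis; it reuses the `spreads` and `sim_spreads` objects built above and relies on a single shuffle, so treat it as a rough screen rather than a formal test.

```
cutoff <- max(sim_spreads$spread)
spreads %>%
  filter(spread > cutoff) %>%
  nrow()
```

A more careful version would repeat the shuffle many times and use, say, the 99th percentile of the simulated spreads as the cutoff.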
13\.3 Randomizing functions
---------------------------
There are as many possible simulations as there are possible hypotheses—that is, an unlimited number. Different hypotheses call for different techniques for building simulations. But there are some techniques that appear in a wide range of simulations. It’s worth knowing about these.
The previous example about false discovery rates in gene expression uses an everyday method of randomization: [*shuffling*](https://en.wikipedia.org/w/index.php?search=shuffling). Shuffling is, of course, a way of destroying any genuine order in a sequence, leaving only those appearances of order that are due to chance. Closely\-related methods, [*sampling*](https://en.wikipedia.org/w/index.php?search=sampling) and [*resampling*](https://en.wikipedia.org/w/index.php?search=resampling), were introduced in Chapter [9](ch-foundations.html#ch:foundations) when we used simulation to assess the statistical significance of patterns observed in data.
Counter\-intuitively, the use of random numbers is an important component of many simulations. In simulation, we want to induce variation. For instance, the simulated probes for the cancer example do not all have the same spread. But in creating that variation, we do not want to introduce any structure other than what we specify explicitly in the simulation. Using random numbers ensures that any structure that we find in the simulation is either due to the mechanism we’ve built for the simulation or is purely accidental.
The workhorse of simulation is the generation of random numbers in the range from zero to one, with each possibility being equally likely. In **R**, the most widely\-used such [*uniform random number generator*](https://en.wikipedia.org/w/index.php?search=uniform%20random%20number%20generator) is `runif()`. For instance, here we ask for five uniform random numbers:
```
runif(5)
```
```
[1] 0.280 0.435 0.717 0.407 0.131
```
Other randomization devices can be built out of uniform random number generators. To illustrate, here is a device for selecting one value at random from a vector:
```
select_one <- function(vec) {
n <- length(vec)
ind <- which.max(runif(n))
vec[ind]
}
select_one(letters) # letters are a, b, c, ..., z
```
```
[1] "r"
```
```
select_one(letters)
```
```
[1] "i"
```
The `select_one()` function is functionally equivalent to `slice_sample()` with its `n` argument set to 1\.
Random numbers are so important that you should try to use generators that have been written by experts and vetted by the community.
There is a lot of sophisticated theory behind programs that generate uniform random numbers. After all, you generally don’t want sequences of random numbers to repeat themselves. (An exception is described in Section [13\.6](ch-simulation.html#sec:key-principles-random).) The theory has to do with techniques for making repeated sub\-sequences as rare as possible.
Perhaps the widest use of simulation in data analysis involves the randomness introduced by sampling, resampling, and shuffling. These operations are provided by the functions `sample()`, `resample()`, and `shuffle()` from the **mosaic** package.
These functions sample uniformly at random from a data frame (or vector) with or without replacement, or permute the rows of a data frame. `resample()` is equivalent to `sample()` with the `replace` argument set to `TRUE`, while `shuffle()` is equivalent to `sample()` with `size` equal to the number of rows in the data frame and `replace` equal to `FALSE`. Non\-uniform sampling can be achieved using the `prob` option.
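To make the distinctions concrete, here is a small sketch using a toy vector (our own illustration; the output will differ from run to run):

```
library(mosaic)
x <- 1:6
sample(x, size = 3)   # three values drawn without replacement
resample(x)           # six values drawn with replacement (duplicates possible)
shuffle(x)            # a random permutation of all six values
```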
Other important functions for building simulations are those that generate random numbers with certain important properties. We’ve already seen `runif()` for creating uniform random numbers. Very widely used are `rnorm()`, `rexp()`, and `rpois()` for generating numbers that are distributed normally (that is, following the bell\-shaped [*Gaussian distribution*](https://en.wikipedia.org/w/index.php?search=Gaussian%20distribution)), exponentially, and with a Poisson (count) pattern, respectively. These different distributions correspond to idealized descriptions of mechanisms in the real world. For instance, events that are equally likely to happen at any time (e.g., earthquakes) will tend to have exponentially distributed spacing between events. Events that occur at a constant rate over time (e.g., the number of cars passing a point on a low\-traffic road in one minute) are often modeled using a [*Poisson distribution*](https://en.wikipedia.org/w/index.php?search=Poisson%20distribution). There are many other forms of distributions that are considered good models of particular random processes.
Functions analogous to `runif()` and `rnorm()` are available for other common probability distributions (see the [Probability Distributions CRAN Task View](https://cran.r-project.org/web/views/Distributions.html)).
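For instance, a quick sketch of these generators (the parameter values here are arbitrary choices for illustration, and the output will vary from run to run):

```
rnorm(3, mean = 100, sd = 15)   # bell-shaped (Gaussian) values
rexp(3, rate = 1/3)             # waiting times with a mean of 3
rpois(3, lambda = 2)            # counts with a mean of 2
```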
13\.4 Simulating variability
----------------------------
### 13\.4\.1 The partially\-planned rendezvous
Imagine a situation where Sally and Joan plan to meet to study in their college campus center (F. Mosteller 1987\).
They are both impatient people who will wait only 10 minutes for the other before leaving.
But their planning was incomplete.
Sally said, “Meet me between 7 and 8 tonight at the center.”
When should Joan plan to arrive at the campus center? And what is the probability that they actually meet?
A simulation can help answer these questions. Joan might reasonably assume that it doesn’t really matter when she arrives, and that Sally is equally likely to arrive any time between 7:00 and 8:00 pm.
So to Joan, Sally’s arrival time is random and uniformly distributed between 7:00 and 8:00 pm.
The same is true for Sally.
Such a simulation is easy to write: generate uniform random numbers between 0 and 60 minutes after 7:00 pm. For each pair of such numbers, check whether or not the time difference between them is 10 minutes or less. If so, they successfully met. Otherwise, they missed each other.
Here’s an implementation in **R**, with 100,000 trials of the simulation being run to make sure that the possibilities are well covered.
```
n <- 100000
sim_meet <- tibble(
sally = runif(n, min = 0, max = 60),
joan = runif(n, min = 0, max = 60),
result = ifelse(
abs(sally - joan) <= 10, "They meet", "They do not"
)
)
mosaic::tally(~ result, format = "percent", data = sim_meet)
```
```
result
They do not They meet
69.4 30.6
```
```
mosaic::binom.test(~result, n, success = "They meet", data = sim_meet)
```
```
data: sim_meet$result [with success = They meet]
number of successes = 30601, number of trials = 1e+05, p-value
<2e-16
alternative hypothesis: true probability of success is not equal to 0.5
95 percent confidence interval:
0.303 0.309
sample estimates:
probability of success
0.306
```
There’s about a 30% chance that they meet (the true probability is \\(11/36 \\approx 0\.3055556\\)).
The confidence interval, the width of which is determined in part by the number of simulations, is relatively narrow.
Any reasonable decision that Joan might consider (“Oh, it seems unlikely we’ll meet. I’ll just skip it.”) would be the same regardless of which end of the confidence interval is considered.
So the simulation is good enough for Joan’s purposes.
(If the interval was not narrow enough for this, you would want to add more trials. The \\(1/\\sqrt{n}\\) rule for the width of a confidence interval described in Chapter [9](ch-foundations.html#ch:foundations) can guide your choice.)
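As a rough check on the reported interval (a back\-of\-the\-envelope calculation, not part of the original analysis), the binomial standard error with 100,000 trials reproduces its width:

```
p_hat <- 0.306
se <- sqrt(p_hat * (1 - p_hat) / 100000)
p_hat + c(-2, 2) * se   # roughly (0.303, 0.309), matching the reported interval
```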
```
ggplot(data = sim_meet, aes(x = joan, y = sally, color = result)) +
geom_point(alpha = 0.3) +
geom_abline(intercept = 10, slope = 1) +
geom_abline(intercept = -10, slope = 1) +
scale_color_brewer(palette = "Set2")
```
Figure 13\.2: Distribution of Sally and Joan arrival times (shaded area indicates where they meet).
Often, it’s valuable to visualize the possibilities generated in the simulation as in Figure [13\.2](ch-simulation.html#fig:sally1). The arrival times uniformly cover the rectangle of possibilities, but only those that fall into the stripe in the center of the plot are successful. Looking at the plot,
Joan notices a pattern. For any arrival time she plans, the probability of success is the fraction of a vertical band of the plot that is covered in blue. For instance, if Joan chose to arrive at 7:20, the probability of success is the proportion of blue in the vertical band with boundaries of 20 minutes and 30 minutes on the horizontal axis. Joan observes that near 0 and 60 minutes, the probability goes down, since the diagonal band tapers. This observation guides an important decision: Joan will plan to arrive somewhere from 7:10 to 7:50\. Following this strategy, what is the probability of success? (Hint: Repeat the simulation but set Joan’s `min()` to 10 and her `max()` to 50\.)
If Joan had additional information about Sally (“She wouldn’t arrange to meet at 7:21—most likely at 7:00, 7:15, 7:30, or 7:45\.”) the simulation can be easily modified, e.g., `sally = resample(c(0, 15, 30, 45), n)` to incorporate that hypothesis.
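Following the hint above, a minimal sketch of the modified simulation (reusing `n` from before; the object name `sim_meet2` is ours):

```
sim_meet2 <- tibble(
  sally = runif(n, min = 0, max = 60),
  joan = runif(n, min = 10, max = 50),
  result = ifelse(
    abs(sally - joan) <= 10, "They meet", "They do not"
  )
)
mosaic::tally(~ result, format = "percent", data = sim_meet2)
```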
### 13\.4\.2 The jobs report
One hour before the opening of the stock market on the first Friday of each month, the [*U.S. Bureau of Labor Statistics*](https://en.wikipedia.org/w/index.php?search=U.S.%20Bureau%20of%20Labor%20Statistics) releases the employment report.
This widely anticipated estimate of the monthly change in non\-farm payroll is an economic indicator that often leads to stock market shifts.
If you read the financial blogs, you’ll hear lots of speculation before the report is released, and lots to account for the change in the stock market in the minutes *after* the report comes out. And you’ll hear a lot of anticipation of the consequences of that month’s job report on the prospects for the economy as a whole. It happens that many financiers read a lot into the ups and downs of the jobs report. (And other people, who don’t take the report so seriously, see opportunities in responding to the actions of the believers.)
You are a skeptic. You know that in the months after the jobs report, an updated number is reported that is able to take into account late\-arriving data that couldn’t be included in the original report. One analysis, the article [“How not to be misled by the jobs report”](http://www.nytimes.com/2014/05/02/upshot/how-not-to-be-misled-by-the-jobs-report.html) from the May 1, 2014 *New York Times*, modeled the monthly report as a random number from a Gaussian distribution with a mean of 150,000 jobs and a standard deviation of 65,000 jobs.
You are preparing a briefing for your bosses to convince them not to take the jobs report itself seriously as an economic indicator. For many bosses, the phrases “Gaussian distribution,” “standard deviation,” and “confidence interval” will trigger a primitive “I’m not listening!” response, so your message won’t get through in that form.
It turns out that many such people will have a better understanding of a simulation than of theoretical concepts. You decide on a strategy: Use a simulation to generate a year’s worth of job reports. Ask the bosses what patterns they see and what they would look for in the next month’s report. Then inform them that there are no actual patterns in the graphs you showed them.
```
jobs_true <- 150
jobs_se <- 65 # in thousands of jobs
gen_samp <- function(true_mean, true_sd,
num_months = 12, delta = 0, id = 1) {
samp_year <- rep(true_mean, num_months) +
rnorm(num_months, mean = delta * (1:num_months), sd = true_sd)
return(
tibble(
jobs_number = samp_year,
month = as.factor(1:num_months),
id = id
)
)
}
```
We begin by defining some constants that will be needed, along with a function to calculate a year’s worth of monthly samples from this known truth.
Since the default value of `delta` is equal to zero, the “true” value remains constant over time. When the function argument `true_sd` is set to
`0`, no random noise is added to the system.
Next, we prepare a data frame that contains the function argument values over which we want to simulate. In this case, we want our first simulation to have no random noise—thus the `true_sd` argument will be set to `0` and the `id` argument will be set to `Truth`. Following that, we will generate three random simulations with `true_sd` set to the assumed value of `jobs_se`. The data frame `params` contains complete information about the simulations we want to run.
```
n_sims <- 3
params <- tibble(
sd = c(0, rep(jobs_se, n_sims)),
id = c("Truth", paste("Sample", 1:n_sims))
)
params
```
```
# A tibble: 4 × 2
sd id
<dbl> <chr>
1 0 Truth
2 65 Sample 1
3 65 Sample 2
4 65 Sample 3
```
Finally, we will actually perform the simulation using the `pmap_dfr()` function from the **purrr** package (see Chapter [7](ch-iteration.html#ch:iteration)). This will iterate over the `params` data frame and apply the appropriate values to each simulation.
```
df <- params %>%
pmap_dfr(~gen_samp(true_mean = jobs_true, true_sd = ..1, id = ..2))
```
Note how the two arguments are given in a compact and flexible form (`..1` and `..2`).
```
ggplot(data = df, aes(x = month, y = jobs_number)) +
geom_hline(yintercept = jobs_true, linetype = 2) +
geom_col() +
facet_wrap(~ id) +
ylab("Number of new jobs (in thousands)")
```
Figure 13\.3: True number of new jobs from simulation as well as three realizations from a simulation.
Figure [13\.3](ch-simulation.html#fig:nytimes) displays the “true” number as well as three realizations from the simulation.
While all three of the samples are taken from a “true” universe where the jobs number is constant, each could easily be misinterpreted to conclude that the number of new jobs was decreasing at some point during the series. The moral is clear: It is important to be able to understand the underlying variability of a system before making inferential conclusions.
### 13\.4\.3 Restaurant health and sanitation grades
We take our next simulation from the data set of restaurant health violations in [*New York City*](https://en.wikipedia.org/w/index.php?search=New%20York%20City).
To help ensure the safety of patrons, health inspectors make unannounced inspections at least once per year to each restaurant.
Establishments are graded based on [a range of criteria](http://www.nyc.gov/html/doh/downloads/pdf/rii/how-we-score-grade.pdf) including food handling, personal hygiene, and vermin control.
Those with a score between 0 and 13 points receive a coveted A grade, those with 14 to 27 points receive the less desirable B, and those of 28 or above receive a C.
We’ll display values in a subset of this range to focus on the threshold between an A and a B grade, keeping only the `dba` (Doing Business As) and `score` columns.
We focus our analysis on the year 2015\.
```
minval <- 7
maxval <- 19
violation_scores <- Violations %>%
filter(lubridate::year(inspection_date) == 2015) %>%
filter(score >= minval & score <= maxval) %>%
select(dba, score)
```
```
ggplot(data = violation_scores, aes(x = score)) +
geom_histogram(binwidth = 0.5) +
geom_vline(xintercept = 13, linetype = 2) +
scale_x_continuous(breaks = minval:maxval) +
annotate(
"text", x = 10, y = 15000,
label = "'A' grade: score of 13 or less"
)
```
Figure 13\.4: Distribution of NYC restaurant health violation scores.
Figure [13\.4](ch-simulation.html#fig:rest) displays the distribution of restaurant violation scores.
Is something unusual happening at the threshold of 13 points (the highest value to still receive an A)?
Or could sampling variability be the cause of the dramatic decrease in the frequency of restaurants graded between 13 and 14 points?
Let’s carry out a simple simulation in which a grade of 13 or 14 is equally likely. The `nflip()` function allows us to flip a fair coin that determines whether a grade is a 14 (heads) or 13 (tails).
```
scores <- mosaic::tally(~score, data = violation_scores)
scores
```
```
score
7 8 9 10 11 12 13 14 15 16 17 18
5985 3026 8401 9007 8443 13907 9021 2155 2679 2973 4720 4119
19
4939
```
```
mean(scores[c("13", "14")])
```
```
[1] 5588
```
```
random_flip <- 1:1000 %>%
map_dbl(~mosaic::nflip(scores["13"] + scores["14"])) %>%
enframe(name = "sim", value = "heads")
head(random_flip, 3)
```
```
# A tibble: 3 × 2
sim heads
<int> <dbl>
1 1 5648
2 2 5614
3 3 5642
```
```
ggplot(data = random_flip, aes(x = heads)) +
geom_histogram(binwidth = 10) +
geom_vline(xintercept = scores["14"], col = "red") +
annotate(
"text", x = 2200, y = 75,
label = "observed", hjust = "left"
) +
xlab("Number of restaurants with scores of 14 (if equal probability)")
```
Figure 13\.5: Distribution of health violation scores under a randomization procedure (permutation test).
Figure [13\.5](ch-simulation.html#fig:rest2) demonstrates that the observed number of restaurants with a 14 are nowhere near what we would expect if there was an equal chance of receiving a score of 13 or 14\.
While the number of restaurants receiving a 13 might exceed the number receiving a 14 by 100 or so due to chance alone, there is essentially no chance of observing 5,000 more 13s than 14s if the two scores are truly equally likely.
(It is not surprising given the large number of restaurants inspected in New York City that we wouldn’t observe much sampling variability in terms of the proportion that are 14\.)
It appears as if the inspectors tend to give restaurants near the threshold the benefit of the doubt, and not drop their grade from A to B if the restaurant is on the margin between a 13 and 14 grade.
This is another situation where simulation can provide a more intuitive solution starting from first principles than an investigation using more formal statistical methods.
(A more nuanced test of the “edge effect” might be considered given the drop in the numbers of restaurants with violation scores between 14 and 19\.)
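For comparison, here is a sketch of one such formal check, an exact binomial test on the observed counts (our cross\-check, not part of the original analysis):

```
obs_14 <- as.numeric(scores["14"])
total <- as.numeric(scores["13"] + scores["14"])
stats::binom.test(obs_14, total, p = 0.5)
```

The resulting p\-value is vanishingly small, in line with the conclusion from the simulation above.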
13\.5 Random networks
---------------------
As noted in Chapter [2](ch-vizI.html#ch:vizI), a network (or graph) is a collection of nodes, along with edges that connect certain pairs of those nodes.
Networks are often used to model real\-world systems that contain these pairwise relationships.
Although these networks are often simple to describe, many of the interesting problems in the mathematical discipline of graph theory are very hard to solve analytically, and intractable computationally (Garey and Johnson 1979\).
For this reason, simulation has become a useful technique for exploring questions in [*network science*](https://en.wikipedia.org/w/index.php?search=network%20science).
We illustrate how simulation can be used to verify properties of random graphs in Chapter [20](ch-netsci.html#ch:netsci).
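As a tiny preview (our own sketch, not drawn from Chapter [20](ch-netsci.html#ch:netsci)), the **igraph** package can generate Erdős\-Rényi random graphs, and repeated simulation can estimate, for example, how often a graph with a given edge probability is connected (this assumes the **tidyverse** is still loaded for `map_lgl()`):

```
library(igraph)
# estimate the probability that a random graph on 100 nodes with
# edge probability 0.05 is connected
mean(map_lgl(1:500, ~ is_connected(sample_gnp(n = 100, p = 0.05))))
```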
13\.6 Key principles of simulation
----------------------------------
Many of the key principles needed to develop the capacity to simulate come straight from computer science, including aspects of design, modularity, and reproducibility.
In this section, we will briefly propose guidelines for simulations.
### 13\.6\.1 Design
It is important to consider design issues relative to simulation. As the analyst, you control all aspects and
decide what assumptions and scenarios to explore.
You have the ability (and responsibility) to determine which scenarios are relevant and what assumptions are appropriate.
The choice of scenarios depends on the underlying model: they should reflect plausible situations that are relevant to the problem at hand.
It is often useful to start with a simple setting, then gradually add complexity as needed.
### 13\.6\.2 Modularity
It is very helpful to write a function to implement the simulation, which can be called repeatedly with different options and parameters (see Appendix [C](ch-function.html#ch:function)). Spending time planning what features the simulation might have,
and how these can be split off into different functions (that might be reused in other simulations)
will pay off handsomely.
### 13\.6\.3 Reproducibility and random number seeds
It is important that simulations are both reproducible and representative.
Sampling variability is inherent in simulations: Our results will be sensitive to the number of computations that we are willing to carry out.
We need to find a balance to avoid unneeded calculations while ensuring that our results aren’t subject to random fluctuation. What is a reasonable number of simulations to consider?
Let’s revisit Sally and Joan, who will meet only if they both arrive within 10 minutes of each other.
How variable are our estimates if we carry out only `num_sims` \\(\=100\\) simulations?
We’ll assess this by carrying out 5,000 replications, saving the results from each simulation of 100 possible meetings.
Then we’ll repeat the process, but with `num_sims` \\(\=400\\) and `num_sims` \\(\=1600\\).
Note that we can do this efficiently using `map_dfr()` twice (once to iterate over the changing number of simulations, and once to repeat the procedure 5,000 times).
```
campus_sim <- function(sims = 1000, wait = 10) {
sally <- runif(sims, min = 0, max = 60)
joan <- runif(sims, min = 0, max = 60)
return(
tibble(
num_sims = sims,
meet = sum(abs(sally - joan) <= wait),
meet_pct = meet / num_sims,
)
)
}
reps <- 5000
sim_results <- 1:reps %>%
map_dfr(~map_dfr(c(100, 400, 1600), campus_sim))
sim_results %>%
group_by(num_sims) %>%
skim(meet_pct)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var num_sims n na mean sd p0 p25 p50 p75 p100
1 meet_pct 100 5000 0 0.305 0.0460 0.12 0.28 0.3 0.33 0.49
2 meet_pct 400 5000 0 0.306 0.0231 0.23 0.29 0.305 0.322 0.39
3 meet_pct 1600 5000 0 0.306 0.0116 0.263 0.298 0.306 0.314 0.352
```
Note that each of the simulations yields an unbiased estimate of the true probability that they meet, but there is variability within each individual simulation (of size 100, 400, or 1600\).
The standard deviation is halved each time we increase the number of simulations by a factor of 4\.
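The halving follows from the binomial standard error formula \\(\\sqrt{p(1\-p)/n}\\): quadrupling \\(n\\) halves the standard error. A quick sanity check against the simulated standard deviations above:

```
p <- 11/36
sqrt(p * (1 - p) / c(100, 400, 1600))  # approximately 0.046, 0.023, 0.012
```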
We can display the results graphically (see Figure [13\.6](ch-simulation.html#fig:converge)).
```
sim_results %>%
ggplot(aes(x = meet_pct, color = factor(num_sims))) +
geom_density(size = 2) +
geom_vline(aes(xintercept = 11/36), linetype = 3) +
scale_x_continuous("Proportion of times that Sally and Joan meet") +
scale_color_brewer("Number\nof sims", palette = "Set2")
```
Figure 13\.6: Convergence of the estimate of the proportion of times that Sally and Joan meet.
What would be a reasonable value for `num_sims` in this setting?
The answer depends on how accurate we want to be.
(And we can also simulate to see how variable our results are!)
Carrying out 20,000 simulations yields relatively little variability and would likely be sufficient for a first pass.
We could state that these results have [*converged*](https://en.wikipedia.org/w/index.php?search=converged) sufficiently close to the true value since the sampling variability due to the simulation is negligible.
```
1:reps %>%
map_dfr(~campus_sim(20000)) %>%
group_by(num_sims) %>%
skim(meet_pct)
```
Given the inherent nature of variability due to sampling, it can be very useful to set (and save) a [*seed*](https://en.wikipedia.org/w/index.php?search=seed) for the
pseudo\-random number generator (using the `set.seed()` function).
This ensures that the results are the same each time the simulation is run since the simulation will use the same list of random numbers.
The seed itself is arbitrary, but each seed defines a different sequence of random numbers.
```
set.seed(1974)
campus_sim()
```
```
# A tibble: 1 × 3
num_sims meet meet_pct
<dbl> <int> <dbl>
1 1000 308 0.308
```
```
campus_sim()
```
```
# A tibble: 1 × 3
num_sims meet meet_pct
<dbl> <int> <dbl>
1 1000 331 0.331
```
```
set.seed(1974)
campus_sim()
```
```
# A tibble: 1 × 3
num_sims meet meet_pct
<dbl> <int> <dbl>
1 1000 308 0.308
```
13\.7 Further resources
-----------------------
This chapter has been a basic introduction to simulation.
Over the last 30 years, the ability to use simulation to match observed data has become an essential component of [*Bayesian statistics*](https://en.wikipedia.org/w/index.php?search=Bayesian%20statistics).
A central technique is called [*Markov Chain Monte Carlo*](https://en.wikipedia.org/w/index.php?search=Markov%20Chain%20Monte%20Carlo) (MCMC).
For an accessible introduction to Bayesian methods, see Jim Albert and Hu (2019\).
Rizzo (2019\) provides a comprehensive introduction to statistical computing in **R**, while Horton, Brown, and Qian (2004\) and Horton (2013\) describe the use of **R** for simulation studies.
The importance of simulation as part of an analyst’s toolbox is enunciated in American Statistical Association Undergraduate Guidelines Workgroup (2014\), Horton (2015\), and National Academies of Science, Engineering, and Medicine (2018\).
The **simstudy** package can be used to simplify data generation or exploration using simulation.
13\.8 Exercises
---------------
**Problem 1 (Medium)**: The time a manager takes to interview a job applicant has an exponential distribution with a mean of half an hour, and these times are independent of each other. The applicants are scheduled at quarter\-hour intervals beginning at 8:00 am, and all of the applicants arrive exactly on time (this is an excellent thing to do, by the way). When the applicant with an 8:15 am appointment arrives at the manager’s office, what is the probability that she will have to wait before seeing the manager? What is the expected time that her interview will finish?
**Problem 2 (Medium)**: Consider an example where a recording device that measures remote activity is placed in a remote location. The time, \\(T\\), to failure of the remote device has an exponential distribution with mean of 3 years. Since the location is so remote, the device will not be monitored during its first 2 years of service. As a result, the time to
discovery of its failure is \\(X\\) \= max\\((T, 2\)\\). The problem here is to determine the average of the time to discovery of the truncated variable (in probability parlance, the expected value of the observed variable \\(X\\), E\[X]).
The analytic solution is fairly straightforward but requires calculus. We need to evaluate:
\\\[E\[X] \= \\int\_0^{2} 2 f(u) du \+ \\int\_2^{\\infty} u f(u) du,\\] where \\(f(u) \= 1/3 \\exp{(\-1/3 u)}\\) for \\(u \> 0\\).
We can use the calculus functions in the `mosaicCalc` package to find the answer.
Is calculus strictly necessary here?
Conduct a simulation to estimate (or check) the value for the average time to discovery.
**Problem 3 (Medium)**: Two people toss a fair coin 4 times each. Find the probability that they throw equal numbers of heads. Also estimate the probability that they throw equal numbers of heads using a simulation in R (with an associated 95% confidence interval for your estimate).
**Problem 4 (Medium)**: In this chapter, we considered a simulation where the true jobs number remained constant over time. Modify the call to the function provided in that example so that the true situation is that there are 15,000 new jobs created every month. Set your random number seed to the value \\(1976\\). Summarize what you might conclude from these results as if you were a journalist without a background in data science.
**Problem 5 (Medium)**: The `Violations` dataset in the `mdsr` package contains information about health violations across different restaurants in New York City. Is there evidence that restaurant health inspectors in New York City give the benefit of the doubt to those at the threshold between a B grade (14 to 27\) or C grade (28 or above)?
**Problem 6 (Medium)**: Sally and Joan plan to meet to study in their college campus center. They are both impatient people who will only wait 10 minutes for the other before leaving. Rather than pick a specific time to meet, they agree to head over to the campus center sometime between 7:00 and 8:00 pm. Let both arrival times be normally distributed with a mean of 30 minutes past the hour and a standard deviation of 10 minutes. Assume that they are independent of each other. What is the probability that they actually meet? Estimate the answer using simulation techniques introduced in this chapter, with at least 10,000 simulations.
**Problem 7 (Medium)**: What is the impact if the residuals from a linear regression model are skewed (and not from a normal distribution)?
Repeatedly generate data from a “true” model given by:
```
n <- 250
rmse <- 1
x1 <- rep(c(0, 1), each = n / 2) # x1 resembles 0 0 0 ... 1 1 1
x2 <- runif(n, min = 0, max = 5)
beta0 <- -1
beta1 <- 0.5
beta2 <- 1.5
y <- beta0 + beta1 * x1 + beta2 * x2 + rexp(n, rate = 1 / 2)
```
For each simulation, fit the linear regression model and display the distribution of 1,000 estimates of the \\(\\beta\_1\\) parameter (note that you need to generate the vector of outcomes each time).
**Problem 8 (Medium)**: What is the impact of the violation of the equal variance
assumption for linear regression models? Repeatedly generate data from a “true” model given by the following code.
```
n <- 250
rmse <- 1
x1 <- rep(c(0, 1), each = n / 2) # x1 resembles 0 0 0 ... 1 1 1
x2 <- runif(n, min = 0, max = 5)
beta0 <- -1
beta1 <- 0.5
beta2 <- 1.5
y <- beta0 + beta1 * x1 + beta2 * x2 + rnorm(n, mean = 0, sd = rmse + x2)
```
For each simulation, fit the linear regression model and display the distribution of 1,000 estimates of the \\(\\beta\_1\\) parameter (note that you need to generate the vector of outcomes each time). Does the distribution of the parameter follow a normal distribution?
**Problem 9 (Medium)**: Generate \\(n\=5,000\\) observations from a logistic regression model with parameters
intercept \\(\\beta\_0\=\-1\\), slope \\(\\beta\_1\=0\.5\\), and distribution of the predictor being normal with mean 1 and standard deviation 1\. Calculate and interpret the resulting parameter estimates and confidence intervals.
13\.9 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/simulation.html\#simulation\-online\-exercises](https://mdsr-book.github.io/mdsr2e/simulation.html#simulation-online-exercises)
**Problem 1 (Medium)**: Consider a queuing example where customers arrive at a bank at a given minute past the hour and are served by the next available teller.
Use the following data to explore wait times for a bank with one teller vs. one with two tellers, where the duration of the transaction is given below.
| arrival | duration |
| --- | --- |
| 1 | 3 |
| 3 | 2 |
| 7 | 5 |
| 10 | 6 |
| 11 | 8 |
| 15 | 1 |
What is the average total time for customers in the bank with one teller? What is the average for a bank with two tellers?
**Problem 2 (Hard)**: The Monty Hall problem illustrates a simple setting where intuition is often misleading. The situation is based on the TV game show *Let’s Make a Deal*.
First, Monty (the host) puts a prize behind one of three doors. Then the player chooses a door. Next, (without moving the prize) Monty opens an unselected door, revealing that the prize is not behind it. The player may then switch to the other nonselected door. Should the player switch?
Many people see that there are now two doors to choose between and feel that since Monty can always open a nonprize door, there is still equal probability for each door. If that were the case, the player might as well keep the original door. This intuition is so attractive that when Marilyn vos Savant asserted that the player should switch (in her *Parade* magazine column), there were reportedly 10,000 letters asserting she was wrong.
A correct intuitive route is to observe that Monty’s door is fixed. The probability that the player has the right door is 1/3 before Monty opens the nonprize door, and remains 1/3 after that door is open. This means that the probability the prize is behind one of the other doors is 2/3, both before and after Monty opens the nonprize door. After Monty opens the nonprize door, the player gets a 2/3 chance of winning by switching to the remaining door. If the player wants to win, they should switch doors.
One way to prove to yourself that switching improves your chances of winning is through simulation. In fact, even deciding how to code the problem may be enough to convince yourself to switch.
In the simulation, you need to assign the prize to a door, then make an initial guess. If the guess was right, Monty can open either door. We’ll switch to the other door. Rather than have Monty choose a door, we’ll choose one, under the assumption that Monty opened the other one. If our initial guess was wrong, Monty will open the only remaining nonprize door, and when we switch we’ll be choosing the prize door.
Chapter 14 Dynamic and customized data graphics
===============================================
As we discussed in Chapter [1](ch-prologue.html#ch:prologue), the practice of data science involves many different elements.
In Part I, we laid a foundation for data science by developing a basic understanding of data wrangling, data visualization, and ethics.
In Part II, we focused on building statistical models and using those models to learn from data.
However, to this point we have focused mainly on traditional two\-dimensional data (e.g., rows and columns) and data graphics.
In this part, we tackle the heterogeneity found in many modern data: spatial, text, network, and relational data.
We explore interactive data graphics that leap out of the printed page.
Finally, we address the volume of data—concluding with a discussion of “big data” and the tools that you are likely to see when working with it.
In Chapter [2](ch-vizI.html#ch:vizI), we laid out a systematic framework for composing data graphics.
A similar grammar of graphics employed by the **ggplot2** package provided a mechanism for creating static data graphics in Chapter [3](ch-vizII.html#ch:vizII).
In this chapter, we explore a few alternatives for making more sophisticated—and in particular, dynamic—data graphics.
14\.1 Rich Web content using `D3.js` and **htmlwidgets**
--------------------------------------------------------
As Web browsers became more complex, the desire to have interactive data visualizations in the browser grew.
Thus far, all of the data visualization techniques that we have discussed are based on static images. However, newer tools have made it considerably easier to create interactive data graphics.
JavaScript is a programming language that allows Web developers to create client\-side [*web applications*](https://en.wikipedia.org/w/index.php?search=web%20applications).
This means that computations are happening *in the client’s browser*, as opposed to taking place on the host’s Web servers.
[*JavaScript*](https://en.wikipedia.org/w/index.php?search=JavaScript) applications can be more responsive to client interaction than dynamically\-served Web pages that rely on a server\-side scripting language, like [*PHP*](https://en.wikipedia.org/w/index.php?search=PHP) or [*Ruby*](https://en.wikipedia.org/w/index.php?search=Ruby).
The current state of the art for client\-side dynamic data graphics on the Web is a JavaScript library called `D3.js`, or just `D3`, which stands for “data\-driven documents.” One of the lead developers of `D3` is [Mike Bostock](https://en.wikipedia.org/w/index.php?search=Mike%20Bostock), formerly of *The New York Times* and [*Stanford University*](https://en.wikipedia.org/w/index.php?search=Stanford%20University).
More recently, [Ramnath Vaidyanathan](https://en.wikipedia.org/w/index.php?search=Ramnath%20Vaidyanathan) and the developers at **RStudio** have created the **htmlwidgets** package, which provides a bridge between **R** and `D3`. Specifically, the [**htmlwidgets** framework](http://www.htmlwidgets.org/showcase_leaflet.html) allows **R** developers to create packages that render data graphics in HTML using `D3`.
Thus, **R** programmers can now make use of `D3` without having to learn JavaScript. Furthermore, since R Markdown documents also render as HTML, **R** users can easily create interactive data graphics embedded in annotated Web documents.
This is a highly active area of development.
In what follows, we illustrate a few of the more obviously useful **htmlwidgets** packages.
### 14\.1\.1 Leaflet
Perhaps the **htmlwidgets**\-based package that is getting the greatest attention is **leaflet**, which
enables dynamic geospatial maps to be drawn using the `Leaflet` JavaScript library and the [*OpenStreetMaps*](https://en.wikipedia.org/w/index.php?search=OpenStreetMaps) [API](http://wiki.openstreetmap.org/wiki/API).
The use of this package requires knowledge of spatial data, and thus we postpone our illustration of its use until Chapter [17](ch-spatial.html#ch:spatial).
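Nevertheless, a minimal sketch gives a flavor of the interface; the coordinates and popup text below are illustrative placeholders rather than examples from the book.

```
library(leaflet)
leaflet() %>%
  addTiles() %>%   # OpenStreetMap base tiles
  addMarkers(lng = -72.64, lat = 42.32, popup = "An illustrative marker")
```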
### 14\.1\.2 Plot.ly
[Plot.ly](https://plot.ly/r/) specializes in online dynamic data visualizations and, in particular, the ability to translate code to generate data graphics between **R**, Python, and other data software tools.
This project is based on the `plotly.js` JavaScript library, which is available under an open\-source license.
The functionality of Plot.ly can be accessed in **R** through the **plotly** package.
What makes **plotly** especially attractive is that it can convert any **ggplot2** object into a **plotly** object using the `ggplotly()` function.
This enables immediate interactivity for existing data graphics. Features like [*brushing*](https://en.wikipedia.org/w/index.php?search=brushing) (where selected points are marked) and [*mouse\-over*](https://en.wikipedia.org/w/index.php?search=mouse-over) annotations (where points display additional information when the mouse hovers over them) are automatic.
For example, in Figure [14\.1](ch-vizIII.html#fig:beatles) we display a static plot of the frequency of the names of births in the United States of the four members of the [*Beatles*](https://en.wikipedia.org/w/index.php?search=Beatles) over
time (using data from the **babynames** package).
```
library(tidyverse)
library(mdsr)
library(babynames)
Beatles <- babynames %>%
filter(name %in% c("John", "Paul", "George", "Ringo") & sex == "M") %>%
mutate(name = factor(name, levels = c("John", "George", "Paul", "Ringo")))
beatles_plot <- ggplot(data = Beatles, aes(x = year, y = n)) +
geom_line(aes(color = name), size = 2)
beatles_plot
```
Figure 14\.1: **ggplot2** depiction of the frequency of Beatles names over time.
After running the `ggplotly()` function on that object, a plot is displayed in **RStudio** or in a Web browser.
The exact values can be displayed by mousing\-over the lines.
In addition, brushing, panning, and zooming are supported.
In Figure [14\.2](ch-vizIII.html#fig:plotly-beatles), we show that image.
```
library(plotly)
ggplotly(beatles_plot)
```
Figure 14\.2: An interactive plot of the frequency of Beatles names over time.
### 14\.1\.3 DataTables
The DataTables (**DT**) package provides a quick way to make data tables interactive.
Simply put, it enables tables to be searchable, sortable, and pageable automatically.
Figure [14\.3](ch-vizIII.html#fig:DT-beatles) displays the first 10 rows of the `Beatles` table as rendered by **DT**. Note the search box and clickable sorting arrows.
```
library(DT)
datatable(Beatles, options = list(pageLength = 10))
```
Figure 14\.3: Output of the **DT** package applied to the Beatles names.
### 14\.1\.4 Dygraphs
The **dygraphs** package generates interactive time series plots with the ability to brush over time intervals and zoom in and out.
For example, the popularity of Beatles names could be made dynamic with just a little bit of extra code.
Here, the dynamic range selector allows for the easy selection of specific time periods on which to focus.
In Figure [14\.4](ch-vizIII.html#fig:dygraphs-beatles), one can zoom in on the uptick in the popularity of the names `John` and `Paul` during the first half of the 1960s.
```
library(dygraphs)
Beatles %>%
filter(sex == "M") %>%
select(year, name, prop) %>%
pivot_wider(names_from = name, values_from = prop) %>%
dygraph(main = "Popularity of Beatles names over time") %>%
dyRangeSelector(dateWindow = c("1940", "1980"))
```
Figure 14\.4: The **dygraphs** display of the popularity of Beatles names over time.
In this screenshot, the years range from 1940 to 1980, and one can expand or contract that timespan.
### 14\.1\.5 Streamgraphs
A [*streamgraph*](https://en.wikipedia.org/w/index.php?search=streamgraph) is a particular type of time series plot that uses area as a visual cue for quantity.
Streamgraphs allow you to compare the values of several time series at once.
The [**streamgraph**](https://github.com/hrbrmstr/streamgraph) `htmlwidget` provides access to the `streamgraphs.js` D3 library.
Figure [14\.5](ch-vizIII.html#fig:streambeatles) displays our `Beatles` names time series as a streamgraph.
```
# remotes::install_github("hrbrmstr/streamgraph")
library(streamgraph)
Beatles %>%
streamgraph(key = "name", value = "n", date = "year") %>%
sg_fill_brewer("Accent")
```
Figure 14\.5: A screenshot of the **streamgraph** display of Beatles names over time.
14\.2 Animation
---------------
The **gganimate** package provides a simple way to create animations (i.e., [*GIF*](https://en.wikipedia.org/w/index.php?search=GIF)s) from **ggplot2** data graphics.
In Figure [14\.6](ch-vizIII.html#fig:beatles-gganminate), we illustrate a simple transition, wherein the lines indicating the popularity of each band member’s name over time grows and shrinks.
```
library(gganimate)
library(transformr)
beatles_animation <- beatles_plot +
transition_states(
name,
transition_length = 2,
state_length = 1
) +
enter_grow() +
exit_shrink()
animate(beatles_animation, height = 400, width = 800)
```
Figure 14\.6: Evolving Beatles plot created by **gganimate**.
14\.3 Flexdashboard
-------------------
The **flexdashboard** package provides a straightforward way to create and publish data visualizations as a [*dashboard*](https://en.wikipedia.org/w/index.php?search=dashboard).
Dashboards are a common way that data scientists make data available to managers and others to make decisions.
They will often include a mix of graphical and textual displays that can be targeted to their needs.
Here we provide an example of an R Markdown file that creates a static dashboard of information from the **palmerpenguins** package.
**flexdashboard** divides up the page into rows and columns.
In this case, we create two columns of nearly equal width.
The second column (which appears on the right in Figure [14\.8](ch-vizIII.html#fig:flex2)) is further subdivided into two rows, each marked by a third\-level section header.
```
---
title: "Flexdashboard example (Palmer Penguins)"
output:
flexdashboard::flex_dashboard:
orientation: columns
vertical_layout: fill
---
```{r setup, include=FALSE}
library(flexdashboard)
library(palmerpenguins)
library(tidyverse)
```
Column {data-width=400}
-----------------------------------------------------------------------
### Chart A
```{r}
ggplot(
penguins,
aes(x = bill_length_mm, y = bill_depth_mm, color = species)
) +
geom_point()
```
Column {data-width=300}
-----------------------------------------------------------------------
### Chart B
```{r}
DT::datatable(penguins)
```
### Chart C
```{r}
roundval <- 2
cleanmean <- function(x, roundval = 2, na.rm = TRUE) {
return(round(mean(x, na.rm = na.rm), digits = roundval))
}
summarystat <- penguins %>%
group_by(species) %>%
summarize(
`Average bill length (mm)` = cleanmean(bill_length_mm),
`Average bill depth (mm)` = cleanmean(bill_depth_mm)
)
knitr::kable(summarystat)
```
```
Figure 14\.7: Sample **flexdashboard** input file.
Figure 14\.8: Sample **flexdashboard** output.
The upper\-right panel of this dashboard employs **DT** to provide a data table that the user can interact with.
However, the dashboard itself is not interactive, in the sense that the user can only change the display through this HTML widget.
Changing the display in that upper\-right panel has no effect on the other panels.
To create a fully interactive [*web application*](https://en.wikipedia.org/w/index.php?search=web%20application), we need a more powerful tool, which we introduce in the next section.
14\.4 Interactive web apps with Shiny
-------------------------------------
Shiny is a framework for **R** that can be used to create interactive [*web applications*](https://en.wikipedia.org/w/index.php?search=web%20applications) and dynamic dashboards.
It is particularly attractive because it provides a high\-level structure to easily prototype and deploy apps.
While a full discussion of Shiny is outside the scope of this book, we will demonstrate how one might create a dynamic web app that allows the user to explore the data set of babies with the same names as the Beatles.
One way to write a Shiny app involves creating a `ui.R` file that controls the user interface, and a `server.R` file to
display the results.
(Alternatively, the two files can be combined into a single `app.R` file that includes both components.)
These files communicate with each other using [*reactive objects*](https://en.wikipedia.org/w/index.php?search=reactive%20objects) `input` and `output`.
Reactive expressions are special constructions that use input from [*widgets*](https://en.wikipedia.org/w/index.php?search=widgets) to return a value.
These allow the application to automatically update when the user clicks on a button, changes a slider, or provides other input.
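To make the single-file alternative concrete, here is a minimal `app.R` skeleton (a generic sketch rather than the Beatles app developed below); the input name `n` and the plotting code are placeholders chosen for illustration.

```
# app.R -- a minimal single-file skeleton combining ui and server
library(shiny)
ui <- fluidPage(
  sliderInput("n", "Number of points:", min = 10, max = 500, value = 100),
  plotOutput("scatter")
)
server <- function(input, output) {
  output$scatter <- renderPlot({
    # re-runs automatically whenever input$n changes
    plot(rnorm(input$n), rnorm(input$n))
  })
}
shinyApp(ui = ui, server = server)
```

Saving this file in its own directory and calling `runApp()` from that directory launches the app.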
### 14\.4\.1 Example: interactive display of the Beatles
For this example, we’d like to let the user pick the start and end years along with a set of checkboxes to include their favorite Beatles.
The `ui.R` file shown in Figure [14\.9](ch-vizIII.html#fig:shiny-ui) sets up a title, creates inputs for the start and end years (with default values), creates a set of check boxes for each of the Beatles’ names, then plots the result.
```
# ui.R
beatles_names <- c("John", "Paul", "George", "Ringo")
shinyUI(
bootstrapPage(
h3("Frequency of Beatles names over time"),
numericInput(
"startyear", "Enter starting year",
value = 1960, min = 1880, max = 2014, step = 1
),
numericInput(
"endyear", "Enter ending year",
value = 1970, min = 1881, max = 2014, step = 1
),
checkboxGroupInput(
'names', 'Names to display:',
sort(unique(beatles_names)),
selected = c("George", "Paul")
),
plotOutput("plot")
)
)
```
Figure 14\.9: User interface code for a simple Shiny app.
The `server.R` file shown in Figure [14\.10](ch-vizIII.html#fig:shiny-server) loads needed packages, performs some data wrangling, extracts the reactive objects using the `input` object, then generates the desired plot.
The `renderPlot()` function returns a reactive object called `plot` that is referenced in `ui.R`.
Within this function, the values for the years and Beatles are used within a call to `filter()` to identify what to plot.
```
# server.R
library(tidyverse)
library(babynames)
library(shiny)
Beatles <- babynames %>%
filter(name %in% c("John", "Paul", "George", "Ringo") & sex == "M")
shinyServer(
function(input, output) {
output$plot <- renderPlot({
ds <- Beatles %>%
filter(
year >= input$startyear, year <= input$endyear,
name %in% input$names
)
ggplot(data = ds, aes(x = year, y = prop, color = name)) +
geom_line(size = 2)
})
}
)
```
Figure 14\.10: Server processing code for a simple Shiny app.
Shiny Apps can be run locally within **RStudio**, or deployed on a Shiny App server (such as <http://shinyapps.io>).
Please see the book website at [https://mdsr\-book.github.io](https://mdsr-book.github.io) for access to the code files.
Figure [14\.11](ch-vizIII.html#fig:beatlesshiny) displays the results when only Paul and George are checked when run locally.
```
library(shiny)
runApp('.')
```
Figure 14\.11: A screenshot of the Shiny app displaying babies with Beatles names.
### 14\.4\.2 More on reactive programming
Shiny is an extremely powerful and complicated system to master.
Repeated and gradual exposure to reactive programming and widgets will pay off in terms of flexible and attractive displays.
For this example, we demonstrate some additional features that show off some of the possibilities: more general reactive objects, dynamic user interfaces, and progress indicators.
Here we display information about health violations from [*New York City*](https://en.wikipedia.org/w/index.php?search=New%20York%20City) restaurants.
The user has the option to specify a [*borough*](https://en.wikipedia.org/w/index.php?search=borough) (district) within New York City and a cuisine.
Since not every cuisine is available within every borough, we need to dynamically filter the list.
We do this by calling `uiOutput()`.
This references a reactive object created within the `server()` function.
The output is displayed in a `dataTableOutput()` widget from the **DT** package.
```
library(tidyverse)
library(shiny)
library(shinybusy)
library(mdsr)
mergedViolations <- Violations %>%
left_join(Cuisines)
ui <- fluidPage(
titlePanel("Restaurant Explorer"),
fluidRow(
# some things take time: this lets users know
add_busy_spinner(spin = "fading-circle"),
column(
4,
selectInput(inputId = "boro",
label = "Borough:",
choices = c(
"ALL",
unique(as.character(mergedViolations$boro))
)
)
),
# display dynamic list of cuisines
column(4, uiOutput("cuisinecontrols"))
),
# Create a new row for the table.
fluidRow(
DT::dataTableOutput("table")
)
)
```
Figure 14\.12: User interface processing code for a more sophisticated Shiny app.
The code shown in Figure [14\.12](ch-vizIII.html#fig:shiny-part1) also includes a call to the `add_busy_spinner()` function from the **shinybusy** package.
It takes time to render the various reactive objects, and the spinner shows up to alert the user that there will be a slight delay.
```
server <- function(input, output) {
datasetboro <- reactive({ # Filter data based on selections
data <- mergedViolations %>%
select(
dba, cuisine_code, cuisine_description, street,
boro, zipcode, score, violation_code, grade_date
) %>%
distinct()
req(input$boro) # wait until there's a selection
if (input$boro != "ALL") {
data <- data %>%
filter(boro == input$boro)
}
data
})
datasetcuisine <- reactive({ # dynamic list of cuisines
req(input$cuisine) # wait until list is available
data <- datasetboro() %>%
unique()
if (input$cuisine != "ALL") {
data <- data %>%
filter(cuisine_description == input$cuisine)
}
data
})
output$table <- DT::renderDataTable(DT::datatable(datasetcuisine()))
output$cuisinecontrols <- renderUI({
availablelevels <-
unique(sort(as.character(datasetboro()$cuisine_description)))
selectInput(
inputId = "cuisine",
label = "Cuisine:",
choices = c("ALL", availablelevels)
)
})
}
shinyApp(ui = ui, server = server)
```
Figure 14\.13: Server processing code for a more sophisticated Shiny app.
The code shown in Figure [14\.13](ch-vizIII.html#fig:shiny-part2) makes up the rest of the Shiny app.
We create a reactive object that is dynamically filtered based on which borough and cuisine are selected.
Calls made to the `req()` function wait until the reactive inputs are available (at startup these will take time to populate with the default values).
The two functions are linked with a call to the `shinyApp()` function.
Figure [14\.14](ch-vizIII.html#fig:bakeshoppe) displays the Shiny app when it is running.
Figure 14\.14: A screenshot of the Shiny app displaying New York City restaurants.
14\.5 Customization of **ggplot2** graphics
-------------------------------------------
There are endless possibilities for customizing plots in **R** and **ggplot2**.
One important concept is the notion of *themes*.
In the next section, we will illustrate how to customize a **ggplot2** theme by defining one we include in the **mdsr** package.
**ggplot2** provides many different ways to change the appearance of a plot.
A comprehensive system of customizations is called a [*theme*](https://en.wikipedia.org/w/index.php?search=theme).
In **ggplot2**, a theme is a `list` of 93 different attributes that define how axis labels, titles, grid lines, etc. are drawn.
The default theme is `theme_grey()`.
```
length(theme_grey())
```
```
[1] 93
```
For example, notable features of `theme_grey()` are the distinctive grey background and white grid lines.
The `panel.background` and `panel.grid` properties control these aspects of the theme.
```
theme_grey() %>%
pluck("panel.background")
```
```
List of 5
$ fill : chr "grey92"
$ colour : logi NA
$ size : NULL
$ linetype : NULL
$ inherit.blank: logi TRUE
- attr(*, "class")= chr [1:2] "element_rect" "element"
```
```
theme_grey() %>%
pluck("panel.grid")
```
```
List of 6
$ colour : chr "white"
$ size : NULL
$ linetype : NULL
$ lineend : NULL
$ arrow : logi FALSE
$ inherit.blank: logi TRUE
- attr(*, "class")= chr [1:2] "element_line" "element"
```
A number of useful themes are built into **ggplot2**, including `theme_bw()` for a more traditional white background, `theme_minimal()`, and `theme_classic()`.
These can be invoked using the eponymous functions. We compare `theme_grey()` with `theme_bw()` in Figure [14\.15](ch-vizIII.html#fig:theme-bw).
```
beatles_plot
beatles_plot + theme_bw()
```
Figure 14\.15: Comparison of two **ggplot2** themes.
At left, the default grey theme. At right, the black\-and\-white theme.
We can modify a theme on\-the\-fly using the `theme()` function.
In Figure [14\.16](ch-vizIII.html#fig:theme-mod) we illustrate how to change the background color and major grid lines color.
```
beatles_plot +
theme(
panel.background = element_rect(fill = "cornsilk"),
panel.grid.major = element_line(color = "dodgerblue")
)
```
Figure 14\.16: Beatles plot with custom **ggplot2** theme.
How did we know the names of those colors? You can display **R**’s built\-in colors using the `colors()` function.
There are [more intuitive color maps](http://bc.bojanorama.pl/wp-content/uploads/2013/04/rcolorsheet-0.png) on the Web.
```
head(colors())
```
```
[1] "white" "aliceblue" "antiquewhite" "antiquewhite1"
[5] "antiquewhite2" "antiquewhite3"
```
To create a new theme, write a function that will return a complete **ggplot2** theme.
One could write this function by completely specifying all 93 items.
However, in this case we illustrate how the `%+replace%` operator can be used to modify an existing theme.
We start with `theme_grey()` and change the background color, major and minor grid lines colors, and the default font.
```
theme_mdsr <- function(base_size = 12, base_family = "Helvetica") {
theme_grey(base_size = base_size, base_family = base_family) %+replace%
theme(
axis.text = element_text(size = rel(0.8)),
axis.ticks = element_line(color = "black"),
legend.key = element_rect(color = "grey80"),
panel.background = element_rect(fill = "whitesmoke", color = NA),
panel.border = element_rect(fill = NA, color = "grey50"),
panel.grid.major = element_line(color = "grey80", size = 0.2),
panel.grid.minor = element_line(color = "grey92", size = 0.5),
strip.background = element_rect(fill = "grey80", color = "grey50",
size = 0.2)
)
}
```
With our new theme defined, we can apply it in the same way as any of the built\-in themes—namely, by calling the `theme_mdsr()` function.
Figure [14\.17](ch-vizIII.html#fig:theme-mod2) shows how this stylizes the faceted Beatles time series plot.
```
beatles_plot + facet_wrap(~name) + theme_mdsr()
```
Figure 14\.17: Beatles plot with customized **mdsr** theme.
Many people have taken to creating their own themes for **ggplot2**.
In particular, the **ggthemes** package features useful (`theme_solarized()`), humorous (`theme_tufte()`), whimsical (`theme_fivethirtyeight()`), and even derisive (`theme_excel()`) themes. Another humorous theme is `theme_xkcd()`, which attempts to mimic the popular Web comic’s distinctive hand\-drawn styling.
This functionality is provided by the [**xkcd**](http://xkcd.r-forge.r-project.org) package.
```
library(xkcd)
```
To set **xkcd** up, we need to download the pseudo\-handwritten font, import it, and then `loadfonts()`.
Note that the destination for the fonts is system dependent: On Mac OS X this should be `~/Library/Fonts` while for Ubuntu it is `~/.fonts`.
```
download.file(
"http://simonsoftware.se/other/xkcd.ttf",
# ~/Library/Fonts/ for Mac OS X
dest = "~/.fonts/xkcd.ttf", mode = "wb"
)
```
```
font_import(pattern = "[X/x]kcd", prompt = FALSE)
loadfonts()
```
In Figure [14\.18](ch-vizIII.html#fig:beatles-xkcd), we show the xkcd\-styled plot of the popularity of the Beatles names.
```
beatles_plot + theme_xkcd()
```
Figure 14\.18: Prevalence of Beatles names drawn in the style of an **xkcd** Web comic.
14\.6 Extended example: Hot dog eating
--------------------------------------
Writing in 2011, former *New York Times* data graphic intern [Nathan Yau](https://en.wikipedia.org/w/index.php?search=Nathan%20Yau) noted that “[*Adobe Illustrator*](https://en.wikipedia.org/w/index.php?search=Adobe%20Illustrator) is the industry standard.
Every graphic that goes to print at *The New York Times* either was created or edited in Illustrator” (Yau 2011\).
To underscore his point, Yau presents the [data graphic](http://flowingdata.com/2008/07/03/nathans-annual-hot-dog-eating-contest-kobayashi-vs-chestnut/hot-dogs/) shown in Figure [14\.19](ch-vizIII.html#fig:hot-dogs), created in **R** but modified in Illustrator.
Figure 14\.19: Nathan Yau’s Hot Dog Eating data graphic that was created in **R** but modified using Adobe Illustrator (reprinted with permission from [flowingdata.com](http://www.flowingdata.com)).
Ten years later, *The New York Times* data graphic department now produces much of their content using `D3.js`, an interactive JavaScript library that we discussed in Section [14\.1](ch-vizIII.html#sec:d3).
What follows is our best attempt to recreate a static version of Figure [14\.19](ch-vizIII.html#fig:hot-dogs) entirely within **R** using **ggplot2** graphics.
After saving the plot as a PDF, we can open it in Illustrator or [*Inkscape*](https://en.wikipedia.org/w/index.php?search=Inkscape) for further customization if necessary.
Undertaking such “Copy the Master” exercises (D. Nolan and Perrett 2016\) is a good way to deepen your skills.
```
library(tidyverse)
library(mdsr)
hd <- read_csv(
"http://datasets.flowingdata.com/hot-dog-contest-winners.csv"
) %>%
janitor::clean_names()
glimpse(hd)
```
```
Rows: 31
Columns: 5
$ year <dbl> 1980, 1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988, 1…
$ winner <chr> "Paul Siederman & Joe Baldini", "Thomas DeBerry", "Stev…
$ dogs_eaten <dbl> 9.1, 11.0, 11.0, 19.5, 9.5, 11.8, 15.5, 12.0, 14.0, 13.…
$ country <chr> "United States", "United States", "United States", "Mex…
$ new_record <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0…
```
The `hd` data table doesn’t provide any data from before 1980, so we need to estimate them from Figure [14\.19](ch-vizIII.html#fig:hot-dogs) and manually add these rows to our data frame.
```
new_data <- tibble(
year = c(1979, 1978, 1974, 1972, 1916),
winner = c(NA, "Walter Paul", NA, NA, "James Mullen"),
dogs_eaten = c(19.5, 17, 10, 14, 13),
country = rep(NA, 5), new_record = c(1,1,0,0,0)
)
hd <- hd %>%
bind_rows(new_data)
glimpse(hd)
```
```
Rows: 36
Columns: 5
$ year <dbl> 1980, 1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988, 1…
$ winner <chr> "Paul Siederman & Joe Baldini", "Thomas DeBerry", "Stev…
$ dogs_eaten <dbl> 9.1, 11.0, 11.0, 19.5, 9.5, 11.8, 15.5, 12.0, 14.0, 13.…
$ country <chr> "United States", "United States", "United States", "Mex…
$ new_record <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0…
```
Note that we only want to draw some of the `year`s on the horizontal axis and only every 10th value on the vertical axis.
```
xlabs <- c(1916, 1972, 1980, 1990, 2007)
ylabs <- seq(from = 0, to = 70, by = 10)
```
Finally, the plot only shows the data up until 2008, even though the file contains more recent information than that.
Let’s define a subset that we’ll use for plotting.
```
hd_plot <- hd %>%
filter(year < 2008)
```
Our most basic plot is shown in Figure [14\.20](ch-vizIII.html#fig:hd-basic).
```
p <- ggplot(data = hd_plot, aes(x = year, y = dogs_eaten)) +
geom_col()
p
```
Figure 14\.20: A simple bar graph of hot dog eating.
This doesn’t provide the context of Figure [14\.19](ch-vizIII.html#fig:hot-dogs), nor the pizzazz.
Although most of the important data are already there, we still have a great deal of work to do to make this data graphic as engaging as Figure [14\.19](ch-vizIII.html#fig:hot-dogs).
Our recreation is shown in Figure [14\.21](ch-vizIII.html#fig:our-hot-dogs).
We aren’t actually going to draw the \\(y\\)\-axis—instead we are going to place the labels for the \\(y\\) values on the plot.
We’ll put the locations for those values in a data frame.
```
ticks_y <- tibble(x = 1912, y = ylabs)
```
There are many text annotations, and we will collect those into a single data frame. Here, we use the `tribble()` function to create a data frame row\-by\-row. The format of the input is similar to a CSV (see Section [6\.4\.1\.1](ch-dataII.html#sec:csv)).
```
text <- tribble(
~x, ~y, ~label, ~adj,
# Frank Dellarosa
1953, 37, paste(
"Frank Dellarosa eats 21 and a half HDBs over 12",
"\nminutes, breaking the previous record of 19 and a half."), 0,
# Joey Chestnut
1985, 69, paste(
"For the first time since 1999, an American",
"\nreclaims the title when Joey Chestnut",
"\nconsumes 66 HDBs, a new world record."), 0,
# Kobayashi
1972, 55, paste(
"Through 2001-2005, Takeru Kobayashi wins by no less",
"\nthan 12 HDBs. In 2006, he only wins by 1.75. After win-",
"\nning 6 years in a row and setting the world record 4 times,",
"\nKobayashi places second in 2007."), 0,
# Walter Paul
1942, 26, paste(
"Walter Paul sets a new",
"\nworld record with 17 HDBs."), 0,
# James Mullen
1917, 10.5, paste(
"James Mullen wins the inaugural",
"\ncontest, scarfing 13 HDBs. Length",
"\nof contest unavailable."), 0,
1935, 72, "NEW WORLD RECORD", 0,
1914, 72, "Hot dogs and buns (HDBs)", 0,
1940, 2, "*Data between 1916 and 1972 were unavailable", 0,
1922, 2, "Source: FlowingData", 0,
)
```
The grey segments that connect the text labels to the bars in the plot must be manually specified in another data frame.
Here, we use `tribble()` to construct a data frame in which each row corresponds to a single segment.
Next, we use the `unnest()` function to expand the data frame so that each row corresponds to a single point.
This will allow us to pass it to the `geom_segment()` function.
```
segments <- tribble(
~x, ~y,
c(1978, 1991, 1991, NA), c(37, 37, 21, NA),
c(2004, 2007, 2007, NA), c(69, 69, 66, NA),
c(1998, 2006, 2006, NA), c(58, 58, 53.75, NA),
c(2005, 2005, NA), c(58, 49, NA),
c(2004, 2004, NA), c(58, 53.5, NA),
c(2003, 2003, NA), c(58, 44.5, NA),
c(2002, 2002, NA), c(58, 50.5, NA),
c(2001, 2001, NA), c(58, 50, NA),
c(1955, 1978, 1978), c(26, 26, 17)
) %>%
unnest(cols = c(x, y))
```
Finally, we draw the plot, layering on each of the elements that we defined above.
```
p +
geom_col(aes(fill = factor(new_record))) +
geom_hline(yintercept = 0, color = "darkgray") +
scale_fill_manual(name = NULL,
values = c("0" = "#006f3c", "1" = "#81c450")
) +
scale_x_continuous(
name = NULL, breaks = xlabs, minor_breaks = NULL,
limits = c(1912, 2008), expand = c(0, 1)
) +
scale_y_continuous(
name = NULL, breaks = ylabs, labels = NULL,
minor_breaks = NULL, expand = c(0.01, 1)
) +
geom_text(
data = ticks_y, aes(x = x, y = y + 2, label = y),
size = 3
) +
labs(
title = "Winners from Nathan's Hot Dog Eating Contest",
subtitle = paste(
"Since 1916, the annual eating competition has grown substantially",
"attracting competitors from around\nthe world.",
"This year's competition will be televised on July 4, 2008",
"at 12pm EDT live on ESPN.\n\n\n"
)
) +
geom_text(
data = text, aes(x = x, y = y, label = label),
hjust = "left", size = 3
) +
geom_path(
data = segments, aes(x = x, y = y), col = "darkgray"
) +
# Key
geom_rect(
xmin = 1933, ymin = 70.75, xmax = 1934.3, ymax = 73.25,
fill = "#81c450", color = "white"
) +
guides(fill = FALSE) +
theme(
panel.background = element_rect(fill = "white"),
panel.grid.major.y =
element_line(color = "gray", linetype = "dotted"),
plot.title = element_text(face = "bold", size = 16),
plot.subtitle = element_text(size = 10),
axis.ticks.length = unit(0, "cm")
)
```
Figure 14\.21: Recreation of the hot dog graphic.
14\.7 Further resources
-----------------------
The [`htmlwidgets`](http://www.htmlwidgets.org) website includes a gallery of showcase applications of JavaScript in **R**.
Details and examples of use of the **flexdashboard** package can be found at <https://rmarkdown.rstudio.com/flexdashboard>.
The Shiny gallery (<http://shiny.rstudio.com/gallery>) includes a number of interactive visualizations (and associated
code), many of which feature JavaScript libraries.
Nearly 200 examples of widgets and idioms in Shiny are available at [https://github.com/rstudio/shiny\-examples](https://github.com/rstudio/shiny-examples).
The **RStudio** Shiny cheat sheet is a useful reference.
Hadley Wickham (2020a) provides a comprehensive guide to many aspects of Shiny development.
The **extrafont** package makes use of the full suite of fonts that are installed on your computer, rather than the relatively small sets of fonts that **R** knows about. (These are often device and operating system dependent, but three fonts—`sans`, `serif`, and `mono`—are always available.) For a
more extensive tutorial on how to use the **extrafont** package, see [http://tinyurl.com/fonts\-rcharts](http://tinyurl.com/fonts-rcharts).
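As a minimal sketch of the basic workflow (assuming the slow, one-time `font_import()` scan has already been run on your system):

```
library(extrafont)
# font_import()   # one-time scan of the fonts installed on your system (slow)
loadfonts()       # register the imported fonts with R's graphics devices
head(fonts())     # font family names now available, e.g., for base_family
```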
14\.8 Exercises
---------------
**Problem 1 (Easy)**: Modify the Shiny app that displays the frequency of Beatles names over time so that it has a `checkboxInput()` widget that uses the `theme_tufte()` theme from the `ggthemes` package.
**Problem 2 (Medium)**: Create a Shiny app that demonstrates the use of at least five widgets.
**Problem 3 (Medium)**: The `macleish` package contains weather data collected every 10 minutes in 2015 from two weather stations in Whately, Massachusetts.
Using the `ggplot2` package, create a data graphic that displays the average temperature over each 10\-minute interval (`temperature`) as a function of time (`when`) from the `whately_2015` dataframe. Create annotations to include context about the four seasons: the date of the vernal and autumnal equinoxes, and the summer and winter solstices.
**Problem 4 (Medium)**: Modify the restaurant violations Shiny app so that it displays a table of the number of restaurants within a given type of cuisine along with a count of restaurants (as specified by the `dba` variable). (Hint: Be sure not to double count. The dataset should include 842 unique pizza restaurants in all boroughs and 281 Caribbean restaurants in Brooklyn.)
**Problem 5 (Medium)**: Create your own `ggplot2` theme. Describe the choices you made and justify why you made them using the principles introduced earlier.
**Problem 6 (Medium)**: The following code generates a scatterplot with marginal histograms.
```
p <- ggplot(HELPrct, aes(x = age, y = cesd)) +
geom_point() +
theme_classic() +
stat_smooth(method = "loess", formula = y ~ x, size = 2)
ggExtra::ggMarginal(p, type = "histogram", binwidth = 3)
```
Find an example where such a display might be useful. Be sure to interpret your graphical display.
**Problem 7 (Medium)**: Using data from the `palmerpenguins` package, create a Shiny app that displays measurements from the `penguins` dataframe. Allow the user to select a species or a gender, and to choose between various attributes on a scatterplot. (Hint: examples of similar apps can be found at the [Shiny gallery](https://shiny.rstudio.com/gallery)).
**Problem 8 (Medium)**: Create a Shiny app to display an interactive time series plot of the `macleish` weather data. Include a selection box to alternate between data from the `whately_2015` and `orchard_2015` weather stations.
Add a selector of dates to include in the display. Do you notice any irregularities?
**Problem 9 (Hard)**: Repeat the earlier question using the weather data from the MacLeish field station, but include context on major storms listed on the Wikipedia pages: [2014–2015 North American Winter](https://en.wikipedia.org/wiki/2014%E2%80%9315_North_American_winter) and
[2015–2016 North American Winter](https://en.wikipedia.org/wiki/2015%E2%80%9316_North_American_winter).
**Problem 10 (Hard)**: Using data from the `Lahman` package, create a Shiny app that displays career leaderboards similar to the one at [http://www.baseball\-reference.com/leaders/HR\_season.shtml](http://www.baseball-reference.com/leaders/HR_season.shtml). Allow the user to select a statistic of their choice, and to choose between `Career`, `Active`, `Progressive`, and `Yearly League` leaderboards. (Hint: examples of similar apps can be found at the [Shiny gallery](https://shiny.rstudio.com/gallery).)
14\.9 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/dataviz\-III.html\#dataviz\-III\-online\-exercises](https://mdsr-book.github.io/mdsr2e/dataviz-III.html#dataviz-III-online-exercises)
**Problem 1 (Medium)**:
Write a Shiny app that allows the user to pick variables from the `HELPrct` data in the `mosaicData` package and to generate a scatterplot.
Include a checkbox to add a smoother and a choice of transformations for the y axis variable.
14\.1 Rich Web content using `D3.js` and **htmlwidgets**
--------------------------------------------------------
As Web browsers became more complex, the desire to have interactive data visualizations in the browser grew.
Thus far, all of the data visualization techniques that we have discussed are based on static images. However, newer tools have made it considerably easier to create interactive data graphics.
JavaScript is a programming language that allows Web developers to create client\-side [*web applications*](https://en.wikipedia.org/w/index.php?search=web%20applications).
This means that computations are happening *in the client’s browser*, as opposed to taking place on the host’s Web servers.
[*JavaScript*](https://en.wikipedia.org/w/index.php?search=JavaScript) applications can be more responsive to client interaction than dynamically\-served Web pages that rely on a server\-side scripting language, like [*PHP*](https://en.wikipedia.org/w/index.php?search=PHP) or [*Ruby*](https://en.wikipedia.org/w/index.php?search=Ruby).
The current state of the art for client\-side dynamic data graphics on the Web is a JavaScript library called `D3.js`, or just `D3`, which stands for “data\-driven documents.” One of the lead developers of `D3` is [Mike Bostock](https://en.wikipedia.org/w/index.php?search=Mike%20Bostock), formerly of *The New York Times* and [*Stanford University*](https://en.wikipedia.org/w/index.php?search=Stanford%20University).
More recently, [Ramnath Vaidyanathan](https://en.wikipedia.org/w/index.php?search=Ramnath%20Vaidyanathan) and the developers at **RStudio** have created the **htmlwidgets** package, which provides a bridge between **R** and `D3`. Specifically, the [**htmlwidgets** framework](http://www.htmlwidgets.org/showcase_leaflet.html) allows **R** developers to create packages that render data graphics in HTML using `D3`.
Thus, **R** programmers can now make use of `D3` without having to learn JavaScript. Furthermore, since R Markdown documents also render as HTML, **R** users can easily create interactive data graphics embedded in annotated Web documents.
This is a highly active area of development.
In what follows, we illustrate a few of the more obviously useful **htmlwidgets** packages.
### 14\.1\.1 Leaflet
Perhaps the **htmlwidgets** that is getting the greatest attention is **leaflet**, which
enables dynamic geospatial maps to be drawn using the `Leaflet` JavaScript library and the [*OpenStreetMaps*](https://en.wikipedia.org/w/index.php?search=OpenStreetMaps) [API](http://wiki.openstreetmap.org/wiki/API).
The use of this package requires knowledge of spatial data, and thus we postpone our illustration of its use until Chapter [17](ch-spatial.html#ch:spatial).
### 14\.1\.2 Plot.ly
[Plot.ly](https://plot.ly/r/) specializes in online dynamic data visualizations and, in particular, the ability to translate code to generate data graphics between **R**, Python, and other data software tools.
This project is based on the `plotly.js` JavaScript library, which is available under an open\-source license.
The functionality of Plot.ly can be accessed in **R** through the **plotly** package.
What makes **plotly** especially attractive is that it can convert any **ggplot2** object into a **plotly** object using the `ggplotly()` function.
This enables immediate interactivity for existing data graphics. Features like [*brushing*](https://en.wikipedia.org/w/index.php?search=brushing) (where selected points are marked) and [*mouse\-over*](https://en.wikipedia.org/w/index.php?search=mouse-over) annotations (where points display additional information when the mouse hovers over them) are automatic.
For example, in Figure [14\.1](ch-vizIII.html#fig:beatles) we display a static plot of the frequency of the names of births in the United States of the four members of the [*Beatles*](https://en.wikipedia.org/w/index.php?search=Beatles) over
time (using data from the **babynames** package).
```
library(tidyverse)
library(mdsr)
library(babynames)
Beatles <- babynames %>%
filter(name %in% c("John", "Paul", "George", "Ringo") & sex == "M") %>%
mutate(name = factor(name, levels = c("John", "George", "Paul", "Ringo")))
beatles_plot <- ggplot(data = Beatles, aes(x = year, y = n)) +
geom_line(aes(color = name), size = 2)
beatles_plot
```
Figure 14\.1: **ggplot2** depiction of the frequency of Beatles names over time.
After running the `ggplotly()` function on that object, a plot is displayed in **RStudio** or in a Web browser.
The exact values can be displayed by mousing\-over the lines.
In addition, brushing, panning, and zooming are supported.
In Figure [14\.2](ch-vizIII.html#fig:plotly-beatles), we show that image.
```
library(plotly)
ggplotly(beatles_plot)
```
Figure 14\.2: An interactive plot of the frequency of Beatles names over time.
### 14\.1\.3 DataTables
The DataTables (**DT**) package provides a quick way to make data tables interactive.
Simply put, it enables tables to be searchable, sortable, and pageable automatically.
Figure [14\.3](ch-vizIII.html#fig:DT-beatles) displays the first 10 rows of the `Beatles` table as rendered by **DT**. Note the search box and clickable sorting arrows.
```
datatable(Beatles, options = list(pageLength = 10))
```
Figure 14\.3: Output of the **DT** package applied to the Beatles names.
### 14\.1\.4 Dygraphs
The **dygraphs** package generates interactive time series plots with the ability to brush over time intervals and zoom in and out.
For example, the popularity of Beatles names could be made dynamic with just a little bit of extra code.
Here, the dynamic range selector allows for the easy selection of specific time periods on which to focus.
In Figure [14\.4](ch-vizIII.html#fig:dygraphs-beatles), one can zoom in on the uptick in the popularity of the names `John` and `Paul` during the first half of the 1960s.
```
library(dygraphs)
Beatles %>%
filter(sex == "M") %>%
select(year, name, prop) %>%
pivot_wider(names_from = name, values_from = prop) %>%
dygraph(main = "Popularity of Beatles names over time") %>%
dyRangeSelector(dateWindow = c("1940", "1980"))
```
Figure 14\.4: (ref:dygraphs\-beatles\-cap)
(ref:dygraphs\-beatles\-cap) The **dygraphs** display of the popularity of Beatles names over time.
In this screenshot, the years range from 1940 to 1980, and one can expand or contract that timespan.
### 14\.1\.5 Streamgraphs
A [*streamgraph*](https://en.wikipedia.org/w/index.php?search=streamgraph) is a particular type of time series plot that uses area as a visual cue for quantity.
Streamgraphs allow you to compare the values of several time series at once.
The [**streamgraph**](https://github.com/hrbrmstr/streamgraph) `htmlwidget` provides access to the `streamgraphs.js` D3 library.
Figure [14\.5](ch-vizIII.html#fig:streambeatles) displays our `Beatles` names time series as a streamgraph.
```
# remotes::install_github("hrbrmstr/streamgraph")
library(streamgraph)
Beatles %>%
streamgraph(key = "name", value = "n", date = "year") %>%
sg_fill_brewer("Accent")
```
Figure 14\.5: A screenshot of the **streamgraph** display of Beatles names over time.
### 14\.1\.1 Leaflet
Perhaps the **htmlwidgets** that is getting the greatest attention is **leaflet**, which
enables dynamic geospatial maps to be drawn using the `Leaflet` JavaScript library and the [*OpenStreetMaps*](https://en.wikipedia.org/w/index.php?search=OpenStreetMaps) [API](http://wiki.openstreetmap.org/wiki/API).
The use of this package requires knowledge of spatial data, and thus we postpone our illustration of its use until Chapter [17](ch-spatial.html#ch:spatial).
### 14\.1\.2 Plot.ly
[Plot.ly](https://plot.ly/r/) specializes in online dynamic data visualizations and, in particular, the ability to translate code to generate data graphics between **R**, Python, and other data software tools.
This project is based on the `plotly.js` JavaScript library, which is available under an open\-source license.
The functionality of Plot.ly can be accessed in **R** through the **plotly** package.
What makes **plotly** especially attractive is that it can convert any **ggplot2** object into a **plotly** object using the `ggplotly()` function.
This enables immediate interactivity for existing data graphics. Features like [*brushing*](https://en.wikipedia.org/w/index.php?search=brushing) (where selected points are marked) and [*mouse\-over*](https://en.wikipedia.org/w/index.php?search=mouse-over) annotations (where points display additional information when the mouse hovers over them) are automatic.
For example, in Figure [14\.1](ch-vizIII.html#fig:beatles) we display a static plot of the frequency of the names of births in the United States of the four members of the [*Beatles*](https://en.wikipedia.org/w/index.php?search=Beatles) over
time (using data from the **babynames** package).
```
library(tidyverse)
library(mdsr)
library(babynames)
Beatles <- babynames %>%
filter(name %in% c("John", "Paul", "George", "Ringo") & sex == "M") %>%
mutate(name = factor(name, levels = c("John", "George", "Paul", "Ringo")))
beatles_plot <- ggplot(data = Beatles, aes(x = year, y = n)) +
geom_line(aes(color = name), size = 2)
beatles_plot
```
Figure 14\.1: **ggplot2** depiction of the frequency of Beatles names over time.
After running the `ggplotly()` function on that object, a plot is displayed in **RStudio** or in a Web browser.
The exact values can be displayed by mousing\-over the lines.
In addition, brushing, panning, and zooming are supported.
In Figure [14\.2](ch-vizIII.html#fig:plotly-beatles), we show that image.
```
library(plotly)
ggplotly(beatles_plot)
```
Figure 14\.2: An interactive plot of the frequency of Beatles names over time.
### 14\.1\.3 DataTables
The DataTables (**DT**) package provides a quick way to make data tables interactive.
Simply put, it enables tables to be searchable, sortable, and pageable automatically.
Figure [14\.3](ch-vizIII.html#fig:DT-beatles) displays the first 10 rows of the `Beatles` table as rendered by **DT**. Note the search box and clickable sorting arrows.
```
datatable(Beatles, options = list(pageLength = 10))
```
Figure 14\.3: Output of the **DT** package applied to the Beatles names.
### 14\.1\.4 Dygraphs
The **dygraphs** package generates interactive time series plots with the ability to brush over time intervals and zoom in and out.
For example, the popularity of Beatles names could be made dynamic with just a little bit of extra code.
Here, the dynamic range selector allows for the easy selection of specific time periods on which to focus.
In Figure [14\.4](ch-vizIII.html#fig:dygraphs-beatles), one can zoom in on the uptick in the popularity of the names `John` and `Paul` during the first half of the 1960s.
```
library(dygraphs)
Beatles %>%
filter(sex == "M") %>%
select(year, name, prop) %>%
pivot_wider(names_from = name, values_from = prop) %>%
dygraph(main = "Popularity of Beatles names over time") %>%
dyRangeSelector(dateWindow = c("1940", "1980"))
```
Figure 14\.4: (ref:dygraphs\-beatles\-cap)
(ref:dygraphs\-beatles\-cap) The **dygraphs** display of the popularity of Beatles names over time.
In this screenshot, the years range from 1940 to 1980, and one can expand or contract that timespan.
### 14\.1\.5 Streamgraphs
A [*streamgraph*](https://en.wikipedia.org/w/index.php?search=streamgraph) is a particular type of time series plot that uses area as a visual cue for quantity.
Streamgraphs allow you to compare the values of several time series at once.
The [**streamgraph**](https://github.com/hrbrmstr/streamgraph) `htmlwidget` provides access to the `streamgraphs.js` D3 library.
Figure [14\.5](ch-vizIII.html#fig:streambeatles) displays our `Beatles` names time series as a streamgraph.
```
# remotes::install_github("hrbrmstr/streamgraph")
library(streamgraph)
Beatles %>%
streamgraph(key = "name", value = "n", date = "year") %>%
sg_fill_brewer("Accent")
```
Figure 14\.5: A screenshot of the **streamgraph** display of Beatles names over time.
14\.2 Animation
---------------
The **gganimate** package provides a simple way to create animations (i.e., [*GIF*](https://en.wikipedia.org/w/index.php?search=GIF)s) from **ggplot2** data graphics.
In Figure [14\.6](ch-vizIII.html#fig:beatles-gganminate), we illustrate a simple transition, wherein the lines indicating the popularity of each band member’s name over time grows and shrinks.
```
library(gganimate)
library(transformr)
beatles_animation <- beatles_plot +
transition_states(
name,
transition_length = 2,
state_length = 1
) +
enter_grow() +
exit_shrink()
animate(beatles_animation, height = 400, width = 800)
```
Figure 14\.6: Evolving Beatles plot created by **gganimate**.
14\.3 Flexdashboard
-------------------
The **flexdashboard** package provides a straightforward way to create and publish data visualizations as a [*dashboard*](https://en.wikipedia.org/w/index.php?search=dashboard).
Dashboards are a common way that data scientists make data available to managers and others to make decisions.
They will often include a mix of graphical and textual displays that can be targeted to their needs.
Here we provide an example of an R Markdown file that creates a static dashboard of information from the **palmerpenguins** package.
**flexdashboard** divides up the page into rows and columns.
In this case, we create two columns of nearly equal width.
The second column (which appears on the right in Figure [14\.8](ch-vizIII.html#fig:flex2)) is further subdivided into two rows, each marked by a third\-level section header.
```
---
title: "Flexdashboard example (Palmer Penguins)"
output:
flexdashboard::flex_dashboard:
orientation: columns
vertical_layout: fill
---
```{r setup, include=FALSE}
library(flexdashboard)
library(palmerpenguins)
library(tidyverse)
```
Column {data-width=400}
-----------------------------------------------------------------------
### Chart A
```{r}
ggplot(
penguins,
aes(x = bill_length_mm, y = bill_depth_mm, color = species)
) +
geom_point()
```
Column {data-width=300}
-----------------------------------------------------------------------
### Chart B
```{r}
DT::datatable(penguins)
```
### Chart C
```{r}
roundval <- 2
cleanmean <- function(x, roundval = 2, na.rm = TRUE) {
return(round(mean(x, na.rm = na.rm), digits = roundval))
}
summarystat <- penguins %>%
group_by(species) %>%
summarize(
`Average bill length (mm)` = cleanmean(bill_length_mm),
`Average bill depth (mm)` = cleanmean(bill_depth_mm)
)
knitr::kable(summarystat)
```
```
Figure 14\.7: Sample **flexdashboard** input file.
Figure 14\.8: Sample **flexdashboard** output.
The upper\-right panel of this dashboard employs **DT** to provide a data table that the user can interact with.
However, the dashboard itself is not interactive, in the sense that the user can only change the display through this HTML widget.
Changing the display in that upper\-right panel has no effect on the other panels.
To create a fully interactive [*web application*](https://en.wikipedia.org/w/index.php?search=web%20application), we need a more powerful tool, which we introduce in the next section.
14\.4 Interactive web apps with Shiny
-------------------------------------
Shiny is a framework for **R** that can be used to create interactive [*web applications*](https://en.wikipedia.org/w/index.php?search=web%20applications) and dynamic dashboards.
It is particularly attractive because it provides a high\-level structure to easily prototype and deploy apps.
While a full discussion of Shiny is outside the scope of this book, we will demonstrate how one might create a dynamic web app that allows the user to explore the data set of babies with the same names as the Beatles.
One way to write a Shiny app involves creating a `ui.R` file that controls the user interface, and a `server.R` file to
display the results.
(Alternatively, the two files can be combined into a single `app.R` file that includes both components.)
These files communicate with each other using [*reactive objects*](https://en.wikipedia.org/w/index.php?search=reactive%20objects) `input` and `output`.
Reactive expressions are special constructions that use input from [*widgets*](https://en.wikipedia.org/w/index.php?search=widgets) to return a value.
These allow the application to automatically update when the user clicks on a button, changes a slider, or provides other input.
### 14\.4\.1 Example: interactive display of the Beatles
For this example, we’d like to let the user pick the start and end years along with a set of checkboxes to include their favorite Beatles.
The `ui.R` file shown in Figure [14\.9](ch-vizIII.html#fig:shiny-ui) sets up a title, creates inputs for the start and end years (with default values), creates a set of check boxes for each of the Beatles’ names, then plots the result.
```
# ui.R
beatles_names <- c("John", "Paul", "George", "Ringo")
shinyUI(
bootstrapPage(
h3("Frequency of Beatles names over time"),
numericInput(
"startyear", "Enter starting year",
value = 1960, min = 1880, max = 2014, step = 1
),
numericInput(
"endyear", "Enter ending year",
value = 1970, min = 1881, max = 2014, step = 1
),
checkboxGroupInput(
'names', 'Names to display:',
sort(unique(beatles_names)),
selected = c("George", "Paul")
),
plotOutput("plot")
)
)
```
Figure 14\.9: User interface code for a simple Shiny app.
The `server.R` file shown in Figure [14\.10](ch-vizIII.html#fig:shiny-server) loads needed packages, performs some data wrangling, extracts the reactive objects using the `input` object, then generates the desired plot.
The `renderPlot()` function returns a reactive object called `plot` that is referenced in `ui.R`.
Within this function, the values for the years and Beatles are used within a call to `filter()` to identify what to plot.
```
# server.R
library(tidyverse)
library(babynames)
library(shiny)
Beatles <- babynames %>%
filter(name %in% c("John", "Paul", "George", "Ringo") & sex == "M")
shinyServer(
function(input, output) {
output$plot <- renderPlot({
ds <- Beatles %>%
filter(
year >= input$startyear, year <= input$endyear,
name %in% input$names
)
ggplot(data = ds, aes(x = year, y = prop, color = name)) +
geom_line(size = 2)
})
}
)
```
Figure 14\.10: Server processing code for a simple Shiny app.
Shiny Apps can be run locally within **RStudio**, or deployed on a Shiny App server (such as <http://shinyapps.io>).
Please see the book website at [https://mdsr\-book.github.io](https://mdsr-book.github.io) for access to the code files.
Figure [14\.11](ch-vizIII.html#fig:beatlesshiny) displays the results when only Paul and George are checked when run locally.
```
library(shiny)
runApp('.')
```
Figure 14\.11: A screenshot of the Shiny app displaying babies with Beatles names.
### 14\.4\.2 More on reactive programming
Shiny is an extremely powerful and complicated system to master.
Repeated and gradual exposure to reactive programming and widgets will pay off in terms of flexible and attractive displays.
For this example, we demonstrate some additional features that show off some of the possibilities: more general reactive objects, dynamic user interfaces, and progress indicators.
Here we display information about health violations from [*New York City*](https://en.wikipedia.org/w/index.php?search=New%20York%20City) restaurants.
The user has the option to specify a [*borough*](https://en.wikipedia.org/w/index.php?search=borough) (district) within New York City and a cuisine.
Since not every cuisine is available within every borough, we need to dynamically filter the list.
We do this by calling `uiOutput()`.
This references a reactive object created within the `server()` function.
The output is displayed in a `dataTableOuput()` widget from the **DT** package.
```
library(tidyverse)
library(shiny)
library(shinybusy)
library(mdsr)
mergedViolations <- Violations %>%
left_join(Cuisines)
ui <- fluidPage(
titlePanel("Restaurant Explorer"),
fluidRow(
# some things take time: this lets users know
add_busy_spinner(spin = "fading-circle"),
column(
4,
selectInput(inputId = "boro",
label = "Borough:",
choices = c(
"ALL",
unique(as.character(mergedViolations$boro))
)
)
),
# display dynamic list of cuisines
column(4, uiOutput("cuisinecontrols"))
),
# Create a new row for the table.
fluidRow(
DT::dataTableOutput("table")
)
)
```
Figure 14\.12: User interface processing code for a more sophisticated Shiny app.
The code shown in Figure [14\.12](ch-vizIII.html#fig:shiny-part1) also includes a call to the `add_busy_spinner()` function from the **shinybusy** package.
It takes time to render the various reactive objects, and the spinner shows up to alert the user that there will be a slight delay.
```
server <- function(input, output) {
datasetboro <- reactive({ # Filter data based on selections
data <- mergedViolations %>%
select(
dba, cuisine_code, cuisine_description, street,
boro, zipcode, score, violation_code, grade_date
) %>%
distinct()
req(input$boro) # wait until there's a selection
if (input$boro != "ALL") {
data <- data %>%
filter(boro == input$boro)
}
data
})
datasetcuisine <- reactive({ # dynamic list of cuisines
req(input$cuisine) # wait until list is available
data <- datasetboro() %>%
unique()
if (input$cuisine != "ALL") {
data <- data %>%
filter(cuisine_description == input$cuisine)
}
data
})
output$table <- DT::renderDataTable(DT::datatable(datasetcuisine()))
output$cuisinecontrols <- renderUI({
availablelevels <-
unique(sort(as.character(datasetboro()$cuisine_description)))
selectInput(
inputId = "cuisine",
label = "Cuisine:",
choices = c("ALL", availablelevels)
)
})
}
shinyApp(ui = ui, server = server)
```
Figure 14\.13: Server processing code for a more sophisticated Shiny app.
The code shown in Figure [14\.13](ch-vizIII.html#fig:shiny-part2) makes up the rest of the Shiny app.
We create a reactive object that is dynamically filtered based on which borough and cuisine are selected.
Calls made to the `req()` function wait until the reactive inputs are available (at startup these will take time to populate with the default values).
The two functions are linked with a call to the `shinyApp()` function.
Figure [14\.14](ch-vizIII.html#fig:bakeshoppe) displays the Shiny app when it is running.
Figure 14\.14: A screenshot of the Shiny app displaying New York City restaurants.
### 14\.4\.1 Example: interactive display of the Beatles
For this example, we’d like to let the user pick the start and end years along with a set of checkboxes to include their favorite Beatles.
The `ui.R` file shown in Figure [14\.9](ch-vizIII.html#fig:shiny-ui) sets up a title, creates inputs for the start and end years (with default values), creates a set of check boxes for each of the Beatles’ names, then plots the result.
```
# ui.R
beatles_names <- c("John", "Paul", "George", "Ringo")
shinyUI(
bootstrapPage(
h3("Frequency of Beatles names over time"),
numericInput(
"startyear", "Enter starting year",
value = 1960, min = 1880, max = 2014, step = 1
),
numericInput(
"endyear", "Enter ending year",
value = 1970, min = 1881, max = 2014, step = 1
),
checkboxGroupInput(
'names', 'Names to display:',
sort(unique(beatles_names)),
selected = c("George", "Paul")
),
plotOutput("plot")
)
)
```
Figure 14\.9: User interface code for a simple Shiny app.
The `server.R` file shown in Figure [14\.10](ch-vizIII.html#fig:shiny-server) loads needed packages, performs some data wrangling, extracts the reactive objects using the `input` object, then generates the desired plot.
The `renderPlot()` function returns a reactive object called `plot` that is referenced in `ui.R`.
Within this function, the values for the years and Beatles are used within a call to `filter()` to identify what to plot.
```
# server.R
library(tidyverse)
library(babynames)
library(shiny)
Beatles <- babynames %>%
filter(name %in% c("John", "Paul", "George", "Ringo") & sex == "M")
shinyServer(
function(input, output) {
output$plot <- renderPlot({
ds <- Beatles %>%
filter(
year >= input$startyear, year <= input$endyear,
name %in% input$names
)
ggplot(data = ds, aes(x = year, y = prop, color = name)) +
geom_line(size = 2)
})
}
)
```
Figure 14\.10: Server processing code for a simple Shiny app.
Shiny Apps can be run locally within **RStudio**, or deployed on a Shiny App server (such as <http://shinyapps.io>).
Please see the book website at [https://mdsr\-book.github.io](https://mdsr-book.github.io) for access to the code files.
Figure [14\.11](ch-vizIII.html#fig:beatlesshiny) displays the results when only Paul and George are checked and the app is run locally.
```
library(shiny)
runApp('.')
```
Figure 14\.11: A screenshot of the Shiny app displaying babies with Beatles names.
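As noted above, the app can also be deployed to a hosting service such as shinyapps.io. The sketch below (not code from this book) uses the **rsconnect** package for that purpose; the account name, token, and secret are placeholders that you would replace with the credentials generated on your shinyapps.io dashboard.
```
library(rsconnect)
# one-time setup: paste in the values from your shinyapps.io account page
setAccountInfo(
  name = "your-account",   # placeholder account name
  token = "YOUR_TOKEN",    # placeholder token
  secret = "YOUR_SECRET"   # placeholder secret
)
# publish the app in the current directory under a name of your choosing
deployApp(appDir = ".", appName = "beatles-names")
```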
### 14\.4\.2 More on reactive programming
Shiny is an extremely powerful system, but a complicated one to master.
Repeated and gradual exposure to reactive programming and widgets will pay off in terms of flexible and attractive displays.
For this example, we demonstrate some additional features that show off some of the possibilities: more general reactive objects, dynamic user interfaces, and progress indicators.
Here we display information about health violations from [*New York City*](https://en.wikipedia.org/w/index.php?search=New%20York%20City) restaurants.
The user has the option to specify a [*borough*](https://en.wikipedia.org/w/index.php?search=borough) (district) within New York City and a cuisine.
Since not every cuisine is available within every borough, we need to dynamically filter the list.
We do this by calling `uiOutput()`.
This references a reactive object created within the `server()` function.
The output is displayed in a `dataTableOutput()` widget from the **DT** package.
```
library(tidyverse)
library(shiny)
library(shinybusy)
library(mdsr)
mergedViolations <- Violations %>%
left_join(Cuisines)
ui <- fluidPage(
titlePanel("Restaurant Explorer"),
fluidRow(
# some things take time: this lets users know
add_busy_spinner(spin = "fading-circle"),
column(
4,
selectInput(inputId = "boro",
label = "Borough:",
choices = c(
"ALL",
unique(as.character(mergedViolations$boro))
)
)
),
# display dynamic list of cuisines
column(4, uiOutput("cuisinecontrols"))
),
# Create a new row for the table.
fluidRow(
DT::dataTableOutput("table")
)
)
```
Figure 14\.12: User interface processing code for a more sophisticated Shiny app.
The code shown in Figure [14\.12](ch-vizIII.html#fig:shiny-part1) also includes a call to the `add_busy_spinner()` function from the **shinybusy** package.
It takes time to render the various reactive objects, and the spinner shows up to alert the user that there will be a slight delay.
```
server <- function(input, output) {
datasetboro <- reactive({ # Filter data based on selections
data <- mergedViolations %>%
select(
dba, cuisine_code, cuisine_description, street,
boro, zipcode, score, violation_code, grade_date
) %>%
distinct()
req(input$boro) # wait until there's a selection
if (input$boro != "ALL") {
data <- data %>%
filter(boro == input$boro)
}
data
})
datasetcuisine <- reactive({ # dynamic list of cuisines
req(input$cuisine) # wait until list is available
data <- datasetboro() %>%
unique()
if (input$cuisine != "ALL") {
data <- data %>%
filter(cuisine_description == input$cuisine)
}
data
})
output$table <- DT::renderDataTable(DT::datatable(datasetcuisine()))
output$cuisinecontrols <- renderUI({
availablelevels <-
unique(sort(as.character(datasetboro()$cuisine_description)))
selectInput(
inputId = "cuisine",
label = "Cuisine:",
choices = c("ALL", availablelevels)
)
})
}
shinyApp(ui = ui, server = server)
```
Figure 14\.13: Server processing code for a more sophisticated Shiny app.
The code shown in Figure [14\.13](ch-vizIII.html#fig:shiny-part2) makes up the rest of the Shiny app.
We create a reactive object that is dynamically filtered based on which borough and cuisine are selected.
Calls made to the `req()` function wait until the reactive inputs are available (at startup these will take time to populate with the default values).
The two functions are linked with a call to the `shinyApp()` function.
Figure [14\.14](ch-vizIII.html#fig:bakeshoppe) displays the Shiny app when it is running.
Figure 14\.14: A screenshot of the Shiny app displaying New York City restaurants.
14\.5 Customization of **ggplot2** graphics
-------------------------------------------
There are endless possibilities for customizing plots in **R** and **ggplot2**.
One important concept is the notion of *themes*.
Later in this section, we will illustrate how to customize a **ggplot2** theme by defining one that we include in the **mdsr** package.
**ggplot2** provides many different ways to change the appearance of a plot.
A comprehensive system of customizations is called a [*theme*](https://en.wikipedia.org/w/index.php?search=theme).
In **ggplot2**, a theme is a `list` of 93 different attributes that define how axis labels, titles, grid lines, etc. are drawn.
The default theme is `theme_grey()`.
```
length(theme_grey())
```
```
[1] 93
```
For example, notable features of `theme_grey()` are the distinctive grey background and white grid lines.
The `panel.background` and `panel.grid` properties control these aspects of the theme.
```
theme_grey() %>%
pluck("panel.background")
```
```
List of 5
$ fill : chr "grey92"
$ colour : logi NA
$ size : NULL
$ linetype : NULL
$ inherit.blank: logi TRUE
- attr(*, "class")= chr [1:2] "element_rect" "element"
```
```
theme_grey() %>%
pluck("panel.grid")
```
```
List of 6
$ colour : chr "white"
$ size : NULL
$ linetype : NULL
$ lineend : NULL
$ arrow : logi FALSE
$ inherit.blank: logi TRUE
- attr(*, "class")= chr [1:2] "element_line" "element"
```
A number of useful themes are built into **ggplot2**, including `theme_bw()` for a more traditional white background, `theme_minimal()`, and `theme_classic()`.
These can be invoked using the eponymous functions. We compare `theme_grey()` with `theme_bw()` in Figure [14\.15](ch-vizIII.html#fig:theme-bw).
```
beatles_plot
beatles_plot + theme_bw()
```
Figure 14\.15: Comparison of two **ggplot2** themes. At left, the default grey theme. At right, the black\-and\-white theme.
We can modify a theme on\-the\-fly using the `theme()` function.
In Figure [14\.16](ch-vizIII.html#fig:theme-mod) we illustrate how to change the background color and major grid lines color.
```
beatles_plot +
theme(
panel.background = element_rect(fill = "cornsilk"),
panel.grid.major = element_line(color = "dodgerblue")
)
```
Figure 14\.16: Beatles plot with custom **ggplot2** theme.
How did we know the names of those colors? You can display **R**’s built\-in colors using the `colors()` function.
There are [more intuitive color maps](http://bc.bojanorama.pl/wp-content/uploads/2013/04/rcolorsheet-0.png) on the Web.
```
head(colors())
```
```
[1] "white" "aliceblue" "antiquewhite" "antiquewhite1"
[5] "antiquewhite2" "antiquewhite3"
```
To create a new theme, write a function that will return a complete **ggplot2** theme.
One could write this function by completely specifying all 93 items.
However, in this case we illustrate how the `%+replace%` operator can be used to modify an existing theme.
We start with `theme_grey()` and change the background color, major and minor grid lines colors, and the default font.
```
theme_mdsr <- function(base_size = 12, base_family = "Helvetica") {
theme_grey(base_size = base_size, base_family = base_family) %+replace%
theme(
axis.text = element_text(size = rel(0.8)),
axis.ticks = element_line(color = "black"),
legend.key = element_rect(color = "grey80"),
panel.background = element_rect(fill = "whitesmoke", color = NA),
panel.border = element_rect(fill = NA, color = "grey50"),
panel.grid.major = element_line(color = "grey80", size = 0.2),
panel.grid.minor = element_line(color = "grey92", size = 0.5),
strip.background = element_rect(fill = "grey80", color = "grey50",
size = 0.2)
)
}
```
With our new theme defined, we can apply it in the same way as any of the built\-in themes—namely, by calling the `theme_mdsr()` function.
Figure [14\.17](ch-vizIII.html#fig:theme-mod2) shows how this stylizes the faceted Beatles time series plot.
```
beatles_plot + facet_wrap(~name) + theme_mdsr()
```
Figure 14\.17: Beatles plot with customized **mdsr** theme.
Many people have taken to creating their own themes for **ggplot2**.
In particular, the **ggthemes** package features useful (`theme_solarized()`), humorous (`theme_tufte()`), whimsical (`theme_fivethirtyeight()`), and even derisive (`theme_excel()`) themes. Another humorous theme is `theme_xkcd()`, which attempts to mimic the popular Web comic’s distinctive hand\-drawn styling.
This functionality is provided by the [**xkcd**](http://xkcd.r-forge.r-project.org) package.
```
library(xkcd)
```
To set **xkcd** up, we need to download the pseudo\-handwritten font, import it, and then `loadfonts()`.
Note that the destination for the fonts is system dependent: On Mac OS X this should be `~/Library/Fonts` while for Ubuntu it is `~/.fonts`.
```
download.file(
"http://simonsoftware.se/other/xkcd.ttf",
# ~/Library/Fonts/ for Mac OS X
dest = "~/.fonts/xkcd.ttf", mode = "wb"
)
```
```
font_import(pattern = "[X/x]kcd", prompt = FALSE)
loadfonts()
```
In Figure [14\.18](ch-vizIII.html#fig:beatles-xkcd), we show the xkcd\-styled plot of the popularity of the Beatles names.
```
beatles_plot + theme_xkcd()
```
Figure 14\.18: Prevalence of Beatles names drawn in the style of an **xkcd** Web comic.
14\.6 Extended example: Hot dog eating
--------------------------------------
Writing in 2011, former *New York Times* data graphic intern [Nathan Yau](https://en.wikipedia.org/w/index.php?search=Nathan%20Yau) noted that “[*Adobe Illustrator*](https://en.wikipedia.org/w/index.php?search=Adobe%20Illustrator) is the industry standard.
Every graphic that goes to print at *The New York Times* either was created or edited in Illustrator” (Yau 2011\).
To underscore his point, Yau presents the [data graphic](http://flowingdata.com/2008/07/03/nathans-annual-hot-dog-eating-contest-kobayashi-vs-chestnut/hot-dogs/) shown in Figure [14\.19](ch-vizIII.html#fig:hot-dogs), created in **R** but modified in Illustrator.
Figure 14\.19: Nathan Yau’s Hot Dog Eating data graphic that was created in **R** but modified using Adobe Illustrator (reprinted with permission from [flowingdata.com](http://www.flowingdata.com)).
Ten years later, *The New York Times* data graphic department now produces much of their content using `D3.js`, an interactive JavaScript library that we discussed in Section [14\.1](ch-vizIII.html#sec:d3).
What follows is our best attempt to recreate a static version of Figure [14\.19](ch-vizIII.html#fig:hot-dogs) entirely within **R** using **ggplot2** graphics.
After saving the plot as a PDF, we can open it in Illustrator or [*Inkscape*](https://en.wikipedia.org/w/index.php?search=Inkscape) for further customization if necessary.
Undertaking such “Copy the Master” exercises (D. Nolan and Perrett 2016\) is a good way to deepen your skills.
```
library(tidyverse)
library(mdsr)
hd <- read_csv(
"http://datasets.flowingdata.com/hot-dog-contest-winners.csv"
) %>%
janitor::clean_names()
glimpse(hd)
```
```
Rows: 31
Columns: 5
$ year <dbl> 1980, 1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988, 1…
$ winner <chr> "Paul Siederman & Joe Baldini", "Thomas DeBerry", "Stev…
$ dogs_eaten <dbl> 9.1, 11.0, 11.0, 19.5, 9.5, 11.8, 15.5, 12.0, 14.0, 13.…
$ country <chr> "United States", "United States", "United States", "Mex…
$ new_record <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0…
```
The `hd` data table doesn’t provide any data from before 1980, so we need to estimate them from Figure [14\.19](ch-vizIII.html#fig:hot-dogs) and manually add these rows to our data frame.
```
new_data <- tibble(
year = c(1979, 1978, 1974, 1972, 1916),
winner = c(NA, "Walter Paul", NA, NA, "James Mullen"),
dogs_eaten = c(19.5, 17, 10, 14, 13),
country = rep(NA, 5), new_record = c(1,1,0,0,0)
)
hd <- hd %>%
bind_rows(new_data)
glimpse(hd)
```
```
Rows: 36
Columns: 5
$ year <dbl> 1980, 1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988, 1…
$ winner <chr> "Paul Siederman & Joe Baldini", "Thomas DeBerry", "Stev…
$ dogs_eaten <dbl> 9.1, 11.0, 11.0, 19.5, 9.5, 11.8, 15.5, 12.0, 14.0, 13.…
$ country <chr> "United States", "United States", "United States", "Mex…
$ new_record <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0…
```
Note that we only want to draw some of the `year`s on the horizontal axis and only every 10th value on the vertical axis.
```
xlabs <- c(1916, 1972, 1980, 1990, 2007)
ylabs <- seq(from = 0, to = 70, by = 10)
```
Finally, the plot only shows the data up until 2008, even though the file contains more recent information than that.
Let’s define a subset that we’ll use for plotting.
```
hd_plot <- hd %>%
filter(year < 2008)
```
Our most basic plot is shown in Figure [14\.20](ch-vizIII.html#fig:hd-basic).
```
p <- ggplot(data = hd_plot, aes(x = year, y = dogs_eaten)) +
geom_col()
p
```
Figure 14\.20: A simple bar graph of hot dog eating.
This doesn’t provide the context of Figure [14\.19](ch-vizIII.html#fig:hot-dogs), nor the pizzazz.
Although most of the important data are already there, we still have a great deal of work to do to make this data graphic as engaging as Figure [14\.19](ch-vizIII.html#fig:hot-dogs).
Our recreation is shown in Figure [14\.21](ch-vizIII.html#fig:our-hot-dogs).
We aren’t actually going to draw the \\(y\\)\-axis—instead we are going to place the labels for the \\(y\\) values on the plot.
We’ll put the locations for those values in a data frame.
```
ticks_y <- tibble(x = 1912, y = ylabs)
```
There are many text annotations, and we will collect those into a single data frame. Here, we use the `tribble()` function to create a data frame row\-by\-row. The format of the input is similar to a CSV (see Section [6\.4\.1\.1](ch-dataII.html#sec:csv)).
```
text <- tribble(
~x, ~y, ~label, ~adj,
# Frank Dellarosa
1953, 37, paste(
"Frank Dellarosa eats 21 and a half HDBs over 12",
"\nminutes, breaking the previous record of 19 and a half."), 0,
# Joey Chestnut
1985, 69, paste(
"For the first time since 1999, an American",
"\nreclaims the title when Joey Chestnut",
"\nconsumes 66 HDBs, a new world record."), 0,
# Kobayashi
1972, 55, paste(
"Through 2001-2005, Takeru Kobayashi wins by no less",
"\nthan 12 HDBs. In 2006, he only wins by 1.75. After win-",
"\nning 6 years in a row and setting the world record 4 times,",
"\nKobayashi places second in 2007."), 0,
# Walter Paul
1942, 26, paste(
"Walter Paul sets a new",
"\nworld record with 17 HDBs."), 0,
# James Mullen
1917, 10.5, paste(
"James Mullen wins the inaugural",
"\ncontest, scarfing 13 HDBs. Length",
"\nof contest unavailable."), 0,
1935, 72, "NEW WORLD RECORD", 0,
1914, 72, "Hot dogs and buns (HDBs)", 0,
1940, 2, "*Data between 1916 and 1972 were unavailable", 0,
1922, 2, "Source: FlowingData", 0,
)
```
The grey segments that connect the text labels to the bars in the plot must be manually specified in another data frame.
Here, we use `tribble()` to construct a data frame in which each row corresponds to a single segment.
Next, we use the `unnest()` function to expand the data frame so that each row corresponds to a single point.
This will allow us to pass it to the `geom_segment()` function.
```
segments <- tribble(
~x, ~y,
c(1978, 1991, 1991, NA), c(37, 37, 21, NA),
c(2004, 2007, 2007, NA), c(69, 69, 66, NA),
c(1998, 2006, 2006, NA), c(58, 58, 53.75, NA),
c(2005, 2005, NA), c(58, 49, NA),
c(2004, 2004, NA), c(58, 53.5, NA),
c(2003, 2003, NA), c(58, 44.5, NA),
c(2002, 2002, NA), c(58, 50.5, NA),
c(2001, 2001, NA), c(58, 50, NA),
c(1955, 1978, 1978), c(26, 26, 17)
) %>%
unnest(cols = c(x, y))
```
Finally, we draw the plot, layering on each of the elements that we defined above.
```
p +
geom_col(aes(fill = factor(new_record))) +
geom_hline(yintercept = 0, color = "darkgray") +
scale_fill_manual(name = NULL,
values = c("0" = "#006f3c", "1" = "#81c450")
) +
scale_x_continuous(
name = NULL, breaks = xlabs, minor_breaks = NULL,
limits = c(1912, 2008), expand = c(0, 1)
) +
scale_y_continuous(
name = NULL, breaks = ylabs, labels = NULL,
minor_breaks = NULL, expand = c(0.01, 1)
) +
geom_text(
data = ticks_y, aes(x = x, y = y + 2, label = y),
size = 3
) +
labs(
title = "Winners from Nathan's Hot Dog Eating Contest",
subtitle = paste(
"Since 1916, the annual eating competition has grown substantially",
"attracting competitors from around\nthe world.",
"This year's competition will be televised on July 4, 2008",
"at 12pm EDT live on ESPN.\n\n\n"
)
) +
geom_text(
data = text, aes(x = x, y = y, label = label),
hjust = "left", size = 3
) +
geom_path(
data = segments, aes(x = x, y = y), col = "darkgray"
) +
# Key
geom_rect(
xmin = 1933, ymin = 70.75, xmax = 1934.3, ymax = 73.25,
fill = "#81c450", color = "white"
) +
guides(fill = FALSE) +
theme(
panel.background = element_rect(fill = "white"),
panel.grid.major.y =
element_line(color = "gray", linetype = "dotted"),
plot.title = element_text(face = "bold", size = 16),
plot.subtitle = element_text(size = 10),
axis.ticks.length = unit(0, "cm")
)
```
Figure 14\.21: Recreation of the hot dog graphic.
14\.7 Further resources
-----------------------
The [`htmlwidgets`](http://www.htmlwidgets.org) website includes a gallery of showcase applications of JavaScript in **R**.
Details and examples of use of the **flexdashboard** package can be found at <https://rmarkdown.rstudio.com/flexdashboard>.
The Shiny gallery (<http://shiny.rstudio.com/gallery>) includes a number of interactive visualizations (and associated
code), many of which feature JavaScript libraries.
Nearly 200 examples of widgets and idioms in Shiny are available at [https://github.com/rstudio/shiny\-examples](https://github.com/rstudio/shiny-examples).
The **RStudio** Shiny cheat sheet is a useful reference.
Hadley Wickham (2020a) provides a comprehensive guide to many aspects of Shiny development.
The **extrafont** package makes use of the full suite of fonts that are installed on your computer, rather than the relatively small sets of fonts that **R** knows about. (These are often device and operating system dependent, but three fonts—`sans`, `serif`, and `mono`—are always available.) For a
more extensive tutorial on how to use the **extrafont** package, see [http://tinyurl.com/fonts\-rcharts](http://tinyurl.com/fonts-rcharts).
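As a minimal sketch (not code from this book) of the basic **extrafont** workflow: import the fonts installed on your system once, register them for the current session, and then refer to a family by name in a theme. The family name used below is a placeholder that depends on what is installed on your machine.
```
library(extrafont)
library(ggplot2)
# one-time (and slow): build extrafont's table of the fonts on your system
font_import(prompt = FALSE)
# register the imported fonts with R's graphics devices for this session
loadfonts()
# list the font family names that are now available
head(fonts())
# refer to an imported family by name in a theme (placeholder family name)
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  theme(text = element_text(family = "Arial"))
```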
14\.8 Exercises
---------------
**Problem 1 (Easy)**: Modify the Shiny app that displays the frequency of Beatles names over time so that it has a `checkboxInput()` widget that uses the `theme_tufte()` theme from the `ggthemes` package.
**Problem 2 (Medium)**: Create a Shiny app that demonstrates the use of at least five widgets.
**Problem 3 (Medium)**: The `macleish` package contains weather data collected every 10 minutes in 2015 from two weather stations in Whately, Massachusetts.
Using the `ggplot2` package, create a data graphic that displays the average temperature over each 10\-minute interval (`temperature`) as a function of time (`when`) from the `whately_2015` dataframe. Create annotations to include context about the four seasons: the date of the vernal and autumnal equinoxes, and the summer and winter solstices.
**Problem 4 (Medium)**: Modify the restaurant violations Shiny app so that it displays a table of the number of restaurants within a given type of cuisine along with a count of restaurants (as specified by the `dba` variable). (Hint: Be sure not to double count. The dataset should include 842 unique pizza restaurants in all boroughs and 281 Caribbean restaurants in Brooklyn.)
**Problem 5 (Medium)**: Create your own `ggplot2` theme. Describe the choices you made and justify why you made them using the principles introduced earlier.
**Problem 6 (Medium)**: The following code generates a scatterplot with marginal histograms.
```
p <- ggplot(HELPrct, aes(x = age, y = cesd)) +
geom_point() +
theme_classic() +
stat_smooth(method = "loess", formula = y ~ x, size = 2)
ggExtra::ggMarginal(p, type = "histogram", binwidth = 3)
```
Find an example where such a display might be useful. Be sure to interpret your graphical display.
**Problem 7 (Medium)**: Using data from the `palmerpenguins` package, create a Shiny app that displays measurements from the `penguins` dataframe. Allow the user to select a species or a gender, and to choose between various attributes on a scatterplot. (Hint: examples of similar apps can be found at the [Shiny gallery](https://shiny.rstudio.com/gallery)).
**Problem 8 (Medium)**: Create a Shiny app to display an interactive time series plot of the `macleish` weather data. Include a selection box to alternate between data from the `whately_2015` and `orchard_2015` weather stations.
Add a selector of dates to include in the display. Do you notice any irregularities?
**Problem 9 (Hard)**: Repeat the earlier question using the weather data from the MacLeish field station, but include context on major storms listed on the Wikipedia pages: [2014–2015 North American Winter](https://en.wikipedia.org/wiki/2014%E2%80%9315_North_American_winter) and
[2015–2016 North American Winter](https://en.wikipedia.org/wiki/2015%E2%80%9316_North_American_winter).
**Problem 10 (Hard)**: Using data from the `Lahman` package, create a Shiny app that displays career leaderboards similar to the one at [http://www.baseball\-reference.com/leaders/HR\_season.shtml](http://www.baseball-reference.com/leaders/HR_season.shtml). Allow the user to select a statistic of their choice, and to choose between `Career`, `Active`, `Progressive`, and `Yearly League` leaderboards. (Hint: examples of similar apps can be found at the [Shiny gallery](https://shiny.rstudio.com/gallery).)
14\.9 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/dataviz\-III.html\#dataviz\-III\-online\-exercises](https://mdsr-book.github.io/mdsr2e/dataviz-III.html#dataviz-III-online-exercises)
**Problem 1 (Medium)**:
Write a Shiny app that allows the user to pick variables from the `HELPrct` data in the `mosaicData` package and to generate a scatterplot.
Include a checkbox to add a smoother and a choice of transformations for the y axis variable.
Chapter 15 Database querying using SQL
======================================
Thus far, most of the data that we have encountered in this book (such as the **Lahman** baseball data in Chapter [4](ch-dataI.html#ch:dataI)) has been small—meaning that it will fit easily in a personal computer’s memory.
In this chapter, we will explore approaches for working with data sets that are larger—let’s call them *medium* data.
These data will fit on a personal computer’s hard disk, but not necessarily in its memory.
Thankfully, a venerable solution for retrieving medium data from a database has been around since the 1970s: [*SQL*](https://en.wikipedia.org/w/index.php?search=SQL) (structured query language).
Database management systems implementing SQL provide a ubiquitous architecture for storing and querying data that is relational in nature.
While the death of SQL has been presaged many times, it continues to provide an effective solution for medium data.
Its wide deployment makes it a “must\-know” tool for data scientists.
For those of you with bigger appetites,
we will consider some extensions that move us closer to a true big data setting in Chapter [21](ch-big.html#ch:big).
15\.1 From **dplyr** to SQL
---------------------------
Recall the **airlines** data that we encountered in Chapter [9](ch-foundations.html#ch:foundations).
Using the **dplyr** verbs that we developed in Chapters [4](ch-dataI.html#ch:dataI) and [5](ch-join.html#ch:join), consider retrieving the top on\-time carriers with at least 100 flights arriving at JFK in September 2016\.
If the data are stored in data frames called `flights` and `carriers`, then we might write a **dplyr** pipeline like this:
```
q <- flights %>%
filter(
year == 2016 & month == 9,
dest == "JFK"
) %>%
inner_join(carriers, by = c("carrier" = "carrier")) %>%
group_by(name) %>%
summarize(
N = n(),
pct_ontime = sum(arr_delay <= 15) / n()
) %>%
filter(N >= 100) %>%
arrange(desc(pct_ontime))
head(q, 4)
```
```
# Source: lazy query [?? x 3]
# Database: mysql 5.7.33-log
# [@mdsr.cdc7tgkkqd0n.us-east-1.rds.amazonaws.com:/airlines]
# Ordered by: desc(pct_ontime)
name N pct_ontime
<chr> <dbl> <dbl>
1 Delta Air Lines Inc. 2396 0.869
2 Virgin America 347 0.833
3 JetBlue Airways 3463 0.817
4 American Airlines Inc. 1397 0.782
```
However, the `flights` data frame can become very large. Going back to 1987, there are more than 169 million individual flights—each comprising a different row in this table.
These data occupy nearly 20 gigabytes as CSVs, and thus are problematic to store in a personal computer’s memory.
Instead, we write these data to disk, and use a querying language to access only those rows that interest us.
In this case, we configured **dplyr** to access the `flights` data on a MySQL server.
The `dbConnect_scidb()` function from the **mdsr** package provides a connection to the `airlines` database that lives on a remote MySQL server and stores it as the object `db`.
The `tbl()` function from **dplyr** maps the `flights` table in that `airlines` database to an object in **R**, in this case also called `flights`.
The same is done for the `carriers` table.
```
library(tidyverse)
library(mdsr)
db <- dbConnect_scidb("airlines")
flights <- tbl(db, "flights")
carriers <- tbl(db, "carriers")
```
Note that while we can use the `flights` and `carriers` objects *as if* they were data frames, they are not, in fact, `data.frame`s. Rather, they have class `tbl_MySQLConnection`, and more generally, `tbl_sql`. A `tbl` is a special kind of object created by **dplyr** that behaves similarly to a `data.frame`.
```
class(flights)
```
```
[1] "tbl_MySQLConnection" "tbl_dbi" "tbl_sql"
[4] "tbl_lazy" "tbl"
```
Note also that in the output of our pipeline above, there is an explicit mention of a MySQL database. We set up this database ahead of time (see Chapter [16](ch-sql2.html#ch:sql2) for instructions on doing this), but **dplyr** allows us to interact with these `tbl`s as if they were `data.frame`s in our **R** session. This is a powerful and convenient illusion!
What is actually happening is that **dplyr** translates our pipeline into SQL.
We can see the translation by passing the pipeline through the `show_query()` function using our previously created query.
```
show_query(q)
```
```
<SQL>
SELECT *
FROM (SELECT `name`, COUNT(*) AS `N`, SUM(`arr_delay` <= 15.0) / COUNT(*) AS `pct_ontime`
FROM (SELECT `year`, `month`, `day`, `dep_time`, `sched_dep_time`, `dep_delay`, `arr_time`, `sched_arr_time`, `arr_delay`, `LHS`.`carrier` AS `carrier`, `tailnum`, `flight`, `origin`, `dest`, `air_time`, `distance`, `cancelled`, `diverted`, `hour`, `minute`, `time_hour`, `name`
FROM (SELECT *
FROM `flights`
WHERE ((`year` = 2016.0 AND `month` = 9.0) AND (`dest` = 'JFK'))) `LHS`
INNER JOIN `carriers` AS `RHS`
ON (`LHS`.`carrier` = `RHS`.`carrier`)
) `q01`
GROUP BY `name`) `q02`
WHERE (`N` >= 100.0)
ORDER BY `pct_ontime` DESC
```
Understanding this output is not important—the translator here is creating temporary tables with unintelligible names—but it should convince you that even though we wrote our pipeline in **R**, it was translated to SQL. **dplyr** will do this automatically any time you are working with objects of class `tbl_sql`. If we were to write an SQL query equivalent to our pipeline, we would write it in a more readable format:
```
SELECT
c.name,
SUM(1) AS N,
SUM(arr_delay <= 15) / SUM(1) AS pct_ontime
FROM flights AS f
JOIN carriers AS c ON f.carrier = c.carrier
WHERE year = 2016 AND month = 9
AND dest = 'JFK'
GROUP BY name
HAVING N >= 100
ORDER BY pct_ontime DESC
LIMIT 0,4;
```
How did **dplyr** perform this translation?[23](#fn23) As we learn SQL, the parallels will become clear (e.g., the **dplyr** verb `filter()` corresponds to the SQL `WHERE` clause). But what about the formulas we put in our `summarize()` command? Notice that the **R** command `n()` was converted into `COUNT(*)` in SQL. This is not magic either: the `translate_sql()` function provides translation between **R** commands and SQL commands. For example, it will translate basic mathematical expressions.
```
library(dbplyr)
translate_sql(mean(arr_delay, na.rm = TRUE))
```
```
<SQL> AVG(`arr_delay`) OVER ()
```
However, it only recognizes a small set of the most common operations—it cannot magically translate any **R** function into SQL. It can be easily tricked. For example, if we make a copy of the very common **R** function `paste0()` (which concatenates strings) called `my_paste()`, that function is not translated.
```
my_paste <- paste0
translate_sql(my_paste("this", "is", "a", "string"))
```
```
<SQL> my_paste('this', 'is', 'a', 'string')
```
This is a good thing, since it allows you to pass arbitrary SQL code through to the server. But you have to know what you are doing: since there is no SQL function called `my_paste()`, the following query will throw an error, even though it is a perfectly valid **R** expression.
```
carriers %>%
mutate(name_code = my_paste(name, "(", carrier, ")"))
```
```
Error in .local(conn, statement, ...): could not run statement: execute command denied to user 'mdsr_public'@'%' for routine 'airlines.my_paste'
```
```
class(carriers)
```
```
[1] "tbl_MySQLConnection" "tbl_dbi" "tbl_sql"
[4] "tbl_lazy" "tbl"
```
Because `carriers` is a `tbl_sql` and not a `data.frame`, the MySQL server is actually doing the computations here. The **dplyr** pipeline is simply translated into SQL and submitted to the server. To make this work, we need to replace `my_paste()` with its MySQL equivalent command, which is `CONCAT()`.
```
carriers %>%
mutate(name_code = CONCAT(name, "(", carrier, ")"))
```
```
# Source: lazy query [?? x 3]
# Database: mysql 5.7.33-log
# [@mdsr.cdc7tgkkqd0n.us-east-1.rds.amazonaws.com:/airlines]
carrier name name_code
<chr> <chr> <chr>
1 02Q Titan Airways Titan Airways(02Q)
2 04Q Tradewind Aviation Tradewind Aviation(04Q)
3 05Q Comlux Aviation, AG Comlux Aviation, AG(05Q)
4 06Q Master Top Linhas Aereas Ltd. Master Top Linhas Aereas Ltd.(06Q)
5 07Q Flair Airlines Ltd. Flair Airlines Ltd.(07Q)
6 09Q Swift Air, LLC Swift Air, LLC(09Q)
7 0BQ DCA DCA(0BQ)
8 0CQ ACM AIR CHARTER GmbH ACM AIR CHARTER GmbH(0CQ)
9 0GQ Inter Island Airways, d/b/a I… Inter Island Airways, d/b/a Inter…
10 0HQ Polar Airlines de Mexico d/b/… Polar Airlines de Mexico d/b/a No…
# … with more rows
```
The syntax of this looks a bit strange, since `CONCAT()` is not a valid **R** expression—but it works.
Another alternative is to pull the `carriers` data into **R** using the `collect()` function first, and then use `my_paste()` as before.[24](#fn24) The `collect()` function breaks the connection to the MySQL server and returns a `data.frame` (which is also a `tbl_df`).
```
carriers %>%
collect() %>%
mutate(name_code = my_paste(name, "(", carrier, ")"))
```
```
# A tibble: 1,610 × 3
carrier name name_code
<chr> <chr> <chr>
1 02Q Titan Airways Titan Airways(02Q)
2 04Q Tradewind Aviation Tradewind Aviation(04Q)
3 05Q Comlux Aviation, AG Comlux Aviation, AG(05Q)
4 06Q Master Top Linhas Aereas Ltd. Master Top Linhas Aereas Ltd.(06Q)
5 07Q Flair Airlines Ltd. Flair Airlines Ltd.(07Q)
6 09Q Swift Air, LLC Swift Air, LLC(09Q)
7 0BQ DCA DCA(0BQ)
8 0CQ ACM AIR CHARTER GmbH ACM AIR CHARTER GmbH(0CQ)
9 0GQ Inter Island Airways, d/b/a I… Inter Island Airways, d/b/a Inter…
10 0HQ Polar Airlines de Mexico d/b/… Polar Airlines de Mexico d/b/a No…
# … with 1,600 more rows
```
This example illustrates that when using **dplyr** with a `tbl_sql` backend, one must be careful to use expressions that SQL can understand. This is just one more reason why it is important to know SQL on its own and not rely entirely on the **dplyr** front\-end (as wonderful as it is).
For querying a database, the choice of whether to use **dplyr** or SQL is largely a question of convenience.
If you want to work with the result of your query in **R**, then use **dplyr**.
If, on the other hand, you are pulling data into a [*web application*](https://en.wikipedia.org/w/index.php?search=web%20application), you likely have no alternative other than writing the SQL query yourself. **dplyr** is just one SQL client that only works in **R**, but there are SQL servers all over the world, in countless environments.
Furthermore, as we will see in Chapter [21](ch-big.html#ch:big), even the big data tools that supersede SQL assume prior knowledge of SQL. Thus, in this chapter, we will learn how to write SQL queries.
15\.2 Flat\-file databases
--------------------------
It may be the case that all of the data that you have encountered thus far has been in an application\-specific format (e.g., **R**, [*Minitab*](https://en.wikipedia.org/w/index.php?search=Minitab), [*SPSS*](https://en.wikipedia.org/w/index.php?search=SPSS), [*Stata*](https://en.wikipedia.org/w/index.php?search=Stata)) or has taken the form of a single CSV (comma\-separated value) file. This file consists of nothing more than rows and columns of data, usually with a header row providing names for each of the columns. Such a file is known as a *flat file*,
since it consists of just one flat (e.g., two\-dimensional) file. A *spreadsheet* application—like [*Excel*](https://en.wikipedia.org/w/index.php?search=Excel) or [*Google Sheets*](https://en.wikipedia.org/w/index.php?search=Google%20Sheets)—allows a user to open a flat file, edit it, and also provides a slew of features for generating additional columns, formatting cells, etc. In **R**, the `read_csv()` command from the
**readr** package converts a flat file database into a `data.frame`.
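As a minimal sketch (the file name here is just a placeholder for any flat file with a header row):
```
library(readr)
# read a flat file into R; column types are guessed from the first rows
flights_flat <- read_csv("flights.csv")
```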
These flat\-file databases are both extremely common and extremely useful, so why do we need anything else? One set of limitations comes from computer hardware. A personal computer has two main options for storing data:
* Memory (RAM): the amount of data that a computer can work on at once. Modern computers typically have a few gigabytes of memory. A computer can access data in memory extremely quickly (tens of GBs per second).
* Hard Disk: the amount of data that a computer can store permanently. Modern computers typically have hundreds or even thousands of gigabytes (terabytes) of storage space. However, accessing data on disk is orders of magnitude slower than accessing data in memory (hundreds of MBs per second).
Thus, there is a trade\-off between storage space (disks have more room) and speed (memory is much faster to access). It is important to recognize that these are *physical* limitations—if you only have 4 Gb of RAM on your computer, you simply can’t read more than 4 Gb of data into memory.[25](#fn25)
In general, all objects in your **R** workspace are stored in memory. Note that the `carriers` object that we created earlier occupies very little memory (since the data still lives on the SQL server), whereas `collect(carriers)` pulls the data into **R** and occupies much more memory.
You can find out how much memory an object occupies in **R** using the `object.size()` function and its **print** method.
```
carriers %>%
object.size() %>%
print(units = "Kb")
```
```
3.6 Kb
```
```
carriers %>%
collect() %>%
object.size() %>%
print(units = "Kb")
```
```
234.8 Kb
```
For a typical **R** user, this means that it can be difficult or impossible to work with a data set stored as a `data.frame` that is larger than a few Gb. The following bit of code will illustrate that a data set of random numbers with 100 columns and 1 million rows occupies more than three\-quarters of a Gb of memory on this computer.
```
n <- 100 * 1e6
x <- matrix(runif(n), ncol = 100)
dim(x)
```
```
[1] 1000000 100
```
```
print(object.size(x), units = "Mb")
```
```
762.9 Mb
```
Thus, by the time that `data.frame` reached 10 million rows, it would be problematic for most personal computers—probably making your machine sluggish and unresponsive—and it could never reach 100 million rows. But Google processes over 3\.5 *billion* search queries per day! We know that they get stored somewhere—where do they all go?
To work effectively with larger data, we need a system that stores *all* of the data on disk, but allows us to access a portion of the data in memory easily. A [*relational database*](https://en.wikipedia.org/w/index.php?search=relational%20database)—which stores data in a collection of linkable tables—provides a powerful solution to this problem.
While more sophisticated approaches are available to address big data challenges, databases are a venerable solution for medium data.
15\.3 The SQL universe
----------------------
SQL (Structured Query Language) is a programming language for *relational database management systems*. Originally developed in the 1970s, it is a mature, powerful, and widely used storage and retrieval solution for data of many sizes. [*Google*](https://en.wikipedia.org/w/index.php?search=Google), [*Facebook*](https://en.wikipedia.org/w/index.php?search=Facebook), [*Twitter*](https://en.wikipedia.org/w/index.php?search=Twitter), [*Reddit*](https://en.wikipedia.org/w/index.php?search=Reddit), [*LinkedIn*](https://en.wikipedia.org/w/index.php?search=LinkedIn), [*Instagram*](https://en.wikipedia.org/w/index.php?search=Instagram), and countless other companies all access large datastores using SQL.
Relational database management systems (RDBMS) are very efficient for data that is naturally broken into a series of *tables* that are linked together by *keys*. A table is a two\-dimensional array of data that has *records* (rows) and *fields* (columns). It is very much like a `data.frame` in **R**, but there are some important differences that make SQL more efficient under certain conditions.
The theoretical foundation for SQL is based on *relational algebra* and *tuple relational calculus*. These ideas were developed by mathematicians and computer scientists, and while they are not required knowledge for our purposes, they help to solidify SQL’s standing as a data storage and retrieval system.
SQL has been an [American National Standards Institute](https://en.wikipedia.org/wiki/American_National_Standards_Institute) (ANSI) standard since 1986, but that standard is only loosely followed by its implementing developers. Unfortunately, this means that there are many different dialects of SQL, and translating between them is not always trivial. However, the broad strokes of the SQL language are common to all, and by learning one dialect, you will be able to easily understand any other (Kline et al. 2008\).
Major implementations of SQL include:
* [*Oracle*](https://en.wikipedia.org/w/index.php?search=Oracle): corporation that claims \#1 market share by revenue—now owns MySQL.
* [*Microsoft SQL Server*](https://en.wikipedia.org/w/index.php?search=Microsoft%20SQL%20Server): another widespread corporate SQL product.
* [*SQLite*](https://en.wikipedia.org/w/index.php?search=SQLite): a lightweight, open\-source version of SQL that has recently become the most widely used implementation of SQL, in part due to its being embedded in [*Android*](https://en.wikipedia.org/w/index.php?search=Android), the world’s most popular mobile operating system. SQLite is an excellent choice for relatively simple applications—like storing data associated with a particular mobile app—but has neither the features nor the scalability for persistent, multi\-user, multi\-purpose applications.
* [*MySQL*](https://en.wikipedia.org/w/index.php?search=MySQL): the most popular client\-server RDBMS. It is open source, but is now owned by Oracle Corporation, and that has caused some tension in the open\-source community. One of the original developers of MySQL, Monty Widenius, now maintains [*MariaDB*](https://en.wikipedia.org/w/index.php?search=MariaDB) as a community fork. MySQL is used by Facebook, Google, LinkedIn, and Twitter.
* [*PostgreSQL*](https://en.wikipedia.org/w/index.php?search=PostgreSQL): a feature\-rich, standards\-compliant, open\-source implementation growing in popularity. PostgreSQL hews closer to the ANSI standard than MySQL, supports more functions and data types, and provides powerful procedural languages that can extend its base functionality. It is used by Reddit and Instagram, among others.
* [*MonetDB*](https://en.wikipedia.org/w/index.php?search=MonetDB) and [*MonetDBLite*](https://en.wikipedia.org/w/index.php?search=MonetDBLite): open\-source implementations that are column\-based, rather than the traditional row\-based systems. Column\-based RDBMSs scale better for big data. **MonetDBLite** is an **R** package that provides a local experience similar to SQLite.
* [*Vertica*](https://en.wikipedia.org/w/index.php?search=Vertica): a commercial column\-based implementation founded by Postgres originator [Michael Stonebraker](https://en.wikipedia.org/w/index.php?search=Michael%20Stonebraker) and now owned by [*Hewlett\-Packard*](https://en.wikipedia.org/w/index.php?search=Hewlett-Packard).
We will focus on MySQL, but most aspects are similar in PostgreSQL or SQLite (see Appendix [F](ch-db-setup.html#ch:db-setup) for setup instructions).
15\.4 The SQL data manipulation language
----------------------------------------
MySQL is based on a client\-server model. This means that there is a *database server* that stores the data and executes queries. It can be located on the user’s local computer or on a remote server. We will be connecting to a server hosted by [*Amazon Web Services*](https://en.wikipedia.org/w/index.php?search=Amazon%20Web%20Services). To retrieve data from the server, one can connect to it via any number of client programs. One can of course use the command\-line `mysql` program, or the official [*GUI*](https://en.wikipedia.org/w/index.php?search=GUI) application: [*MySQL Workbench*](https://en.wikipedia.org/w/index.php?search=MySQL%20Workbench). While we encourage the reader to explore both options—we most often use the Workbench for MySQL development—the output you will see in this presentation comes directly from the MySQL command line client.
Even though `dplyr` enables one to execute most queries using **R** syntax, and without even worrying so much *where* the data are stored, learning SQL is valuable in its own right due to its ubiquity.
If you are just learning SQL for the first time, use the command\-line client and/or one of the GUI applications. The former provides the most direct feedback, and the latter will provide lots of helpful information.
Information about setting up a MySQL database can be found in Appendix [F](ch-db-setup.html#ch:db-setup): we assume that this has been done on a local or remote machine.
In what follows, you will see SQL commands and their results in tables.
To run these on your computer, please see Section [F.4](ch-db-setup.html#sec:connect-sql) for information about connecting to a MySQL server.
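Alternatively, the queries that follow can be submitted from within **R** through the **DBI** connection created earlier in this chapter. Here is a minimal sketch (not code from this book), assuming the `db` connection object created at the beginning of this chapter is still available:
```
library(DBI)
# send a raw SQL string to the server and pull the result set into R
dbGetQuery(db, "SELECT faa, name FROM airports LIMIT 0, 6")
# list the tables available through this connection
dbListTables(db)
```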
As noted in Chapter [1](ch-prologue.html#ch:prologue), the [**airlines** package](https://github.com/beanumber/airlines) streamlines the construction of an SQL database containing over 169 million flights.
These data come directly from the [*United States Bureau of Transportation Statistics*](https://en.wikipedia.org/w/index.php?search=United%20States%20Bureau%20of%20Transportation%20Statistics).
We access a remote SQL database that we have already set up and populated using the **airlines** package.
Note that this database is relational and consists of multiple tables.
```
SHOW TABLES;
```
| Tables\_in\_airlines |
| --- |
| airports |
| carriers |
| flights |
| planes |
Note that every SQL statement must end with a semicolon. To see what columns are present in the `airports` table, we ask for a description.
```
DESCRIBE airports;
```
| Field | Type | Null | Key | Default | Extra |
| --- | --- | --- | --- | --- | --- |
| faa | varchar(3\) | NO | PRI | | |
| name | varchar(255\) | YES | | NA | |
| lat | decimal(10,7\) | YES | | NA | |
| lon | decimal(10,7\) | YES | | NA | |
| alt | int(11\) | YES | | NA | |
| tz | smallint(4\) | YES | | NA | |
| dst | char(1\) | YES | | NA | |
| city | varchar(255\) | YES | | NA | |
| country | varchar(255\) | YES | | NA | |
This command tells us the names of the fields (or variables) in the table, as well as their data types, and what kind of keys might be present (we will learn more about keys in Chapter [16](ch-sql2.html#ch:sql2)).
Next, we want to build a *query*. Queries in SQL start with the `SELECT` keyword and consist of several clauses, which have to be written in this order:
* `SELECT` allows you to list the columns, or functions operating on columns, that you want to retrieve. This is an analogous operation to the `select()` verb in **dplyr**, potentially combined with `mutate()`.
* `FROM` specifies the table where the data are.
* `JOIN` allows you to stitch together two or more tables using a key. This is analogous to the `inner_join()` and `left_join()` commands in **dplyr**.
* `WHERE` allows you to filter the records according to some criteria. This is an analogous operation to the `filter()` verb in **dplyr**.
* `GROUP BY` allows you to aggregate the records according to some shared value. This is an analogous operation to the `group_by()` verb in **dplyr**.
* `HAVING` is like a `WHERE` clause that operates on the result set—not the records themselves. This is analogous to applying a second `filter()` command in **dplyr**, after the rows have already been aggregated.
* `ORDER BY` is exactly what it sounds like—it specifies a condition for ordering the rows of the result set. This is analogous to the `arrange()` verb in **dplyr**.
* `LIMIT` restricts the number of rows in the output. This is similar to the **R** commands `head()` and `slice()`.
Only the `SELECT` and `FROM` clauses are required. Thus, the simplest query one can write is:
```
SELECT * FROM flights;
```
**DO NOT EXECUTE THIS QUERY!** This will cause all 169 million records to be dumped! This will not only crash your machine, but also tie up the server for everyone else!
A safe query is:
```
SELECT * FROM flights LIMIT 0,10;
```
We can specify a subset of variables to be displayed.
Table [15\.1](ch-sql.html#tab:select-limit2) displays the results, limited to the specified fields and the first 10 records.
```
SELECT year, month, day, dep_time, sched_dep_time, dep_delay, origin
FROM flights
LIMIT 0, 10;
```
Table 15\.1: Specifying a subset of variables.
| year | month | day | dep\_time | sched\_dep\_time | dep\_delay | origin |
| --- | --- | --- | --- | --- | --- | --- |
| 2010 | 10 | 1 | 1 | 2100 | 181 | EWR |
| 2010 | 10 | 1 | 1 | 1920 | 281 | FLL |
| 2010 | 10 | 1 | 3 | 2355 | 8 | JFK |
| 2010 | 10 | 1 | 5 | 2200 | 125 | IAD |
| 2010 | 10 | 1 | 7 | 2245 | 82 | LAX |
| 2010 | 10 | 1 | 7 | 10 | \-3 | LAX |
| 2010 | 10 | 1 | 7 | 2150 | 137 | ATL |
| 2010 | 10 | 1 | 8 | 15 | \-7 | SMF |
| 2010 | 10 | 1 | 8 | 10 | \-2 | LAS |
| 2010 | 10 | 1 | 10 | 2225 | 105 | SJC |
The astute reader will recognize the similarities between the SQL syntax and the five idioms for single table analysis and the join operations discussed in Chapters [4](ch-dataI.html#ch:dataI) and [5](ch-join.html#ch:join).
This is not a coincidence!
On the contrary, **dplyr** represents a concerted effort to bring SQL's almost natural language syntax to **R**.
For this book, we have presented the **R** syntax first, since much of our content is predicated on the basic data wrangling skills developed previously.
But historically, SQL predates **dplyr** by decades.
In Table [15\.2](ch-sql.html#tab:sql-r), we illustrate the functional equivalence of SQL and **dplyr** commands.
Table 15\.2: Equivalent commands in SQL and R, where \\(a\\) and \\(b\\) are SQL tables and R dataframes.
| Concept | SQL | R |
| --- | --- | --- |
| Filter by rows \& columns | `SELECT col1, col2 FROM a WHERE col3 = 'x'` | `a %>% filter(col3 == 'x') %>% select(col1, col2)` |
| Aggregate by rows | `SELECT id, SUM(col1) FROM a GROUP BY id` | `a %>% group_by(id) %>% summarize(SUM(col1))` |
| Combine two tables | `SELECT * FROM a JOIN b ON a.id = b.id` | `a %>% inner_join(b, by = c('id' = 'id'))` |
### 15\.4\.1 `SELECT...FROM`
As noted above, every SQL `SELECT` query must contain `SELECT` and `FROM`.
The analyst may specify columns to be retrieved. We saw above that the `airports` table contains seven columns. If we only wanted to retrieve the FAA `code` and `name` of each airport, we could write the following query.
```
SELECT faa, name FROM airports;
```
| faa | name |
| --- | --- |
| 04G | Lansdowne Airport |
| 06A | Moton Field Municipal Airport |
| 06C | Schaumburg Regional |
In addition to columns that are present in the database, one can retrieve columns that are functions of other columns.
For example, if we wanted to return the geographic coordinates of each airport as an \\((x,y)\\) pair, we could combine those fields.
```
SELECT
name,
CONCAT('(', lat, ', ', lon, ')')
FROM airports
LIMIT 0, 6;
```
| name | CONCAT('(', lat, ', ', lon, ')') |
| --- | --- |
| Lansdowne Airport | (41\.1304722, \-80\.6195833\) |
| Moton Field Municipal Airport | (32\.4605722, \-85\.6800278\) |
| Schaumburg Regional | (41\.9893408, \-88\.1012428\) |
| Randall Airport | (41\.4319120, \-74\.3915611\) |
| Jekyll Island Airport | (31\.0744722, \-81\.4277778\) |
| Elizabethton Municipal Airport | (36\.3712222, \-82\.1734167\) |
Note that the column header for the derived column above is ungainly, since it consists of the entire formula that we used to construct it!
This is difficult to read, and would be cumbersome to work with.
An easy fix is to give this derived column an *alias*. We can do this using the keyword `AS`.
```
SELECT
name,
CONCAT('(', lat, ', ', lon, ')') AS coords
FROM airports
LIMIT 0, 6;
```
| name | coords |
| --- | --- |
| Lansdowne Airport | (41\.1304722, \-80\.6195833\) |
| Moton Field Municipal Airport | (32\.4605722, \-85\.6800278\) |
| Schaumburg Regional | (41\.9893408, \-88\.1012428\) |
| Randall Airport | (41\.4319120, \-74\.3915611\) |
| Jekyll Island Airport | (31\.0744722, \-81\.4277778\) |
| Elizabethton Municipal Airport | (36\.3712222, \-82\.1734167\) |
We can also use `AS` to refer to a column in the table by a different name in the result set.
```
SELECT
name AS airport_name,
CONCAT('(', lat, ', ', lon, ')') AS coords
FROM airports
LIMIT 0, 6;
```
| airport\_name | coords |
| --- | --- |
| Lansdowne Airport | (41\.1304722, \-80\.6195833\) |
| Moton Field Municipal Airport | (32\.4605722, \-85\.6800278\) |
| Schaumburg Regional | (41\.9893408, \-88\.1012428\) |
| Randall Airport | (41\.4319120, \-74\.3915611\) |
| Jekyll Island Airport | (31\.0744722, \-81\.4277778\) |
| Elizabethton Municipal Airport | (36\.3712222, \-82\.1734167\) |
This brings an important distinction to the fore: In SQL, it is crucial to distinguish between clauses that operate *on the rows of the original table* versus those that operate *on the rows of the result set*.
Here, `name`, `lat`, and `lon` are columns in the original table—they are written to the disk on the SQL server.
On the other hand, `airport_name` and `coords` exist only in the result set—which is passed from the server to the client and is not written to the disk.
The preceding examples show the SQL equivalents of the **dplyr** commands `select()`, `mutate()`, and `rename()`.
### 15\.4\.2 `WHERE`
The `WHERE` clause is analogous to the `filter()` command in **dplyr**—it allows you to restrict the set of rows that are retrieved to only those rows that match a certain condition.
Thus, while there are several million rows in the `flights` table in each year—each corresponding to a single flight—there were only a few dozen flights that left [*Bradley International Airport*](https://en.wikipedia.org/w/index.php?search=Bradley%20International%20Airport) on June 26th, 2013\.
```
SELECT
year, month, day, origin, dest,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
| year | month | day | origin | dest | flight | carrier |
| --- | --- | --- | --- | --- | --- | --- |
| 2013 | 6 | 26 | BDL | EWR | 4714 | EV |
| 2013 | 6 | 26 | BDL | MIA | 2015 | AA |
| 2013 | 6 | 26 | BDL | DTW | 1644 | DL |
| 2013 | 6 | 26 | BDL | BWI | 2584 | WN |
| 2013 | 6 | 26 | BDL | ATL | 1065 | DL |
| 2013 | 6 | 26 | BDL | DCA | 1077 | US |
It would be convenient to search for flights in a date range.
Unfortunately, there is no date field in this table—but rather separate columns for the `year`, `month`, and `day`.
Nevertheless, we can tell SQL to interpret these columns as a date, using the `STR_TO_DATE()` function.[26](#fn26) Unlike in **R** code, function names in SQL code are customarily capitalized.
Dates and times can be challenging to wrangle.
To learn more about these date tokens, see the MySQL [documentation](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_str-to-date) for `STR_TO_DATE()`.
```
SELECT
STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d') AS theDate,
origin,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
| theDate | origin | flight | carrier |
| --- | --- | --- | --- |
| 2013\-06\-26 | BDL | 4714 | EV |
| 2013\-06\-26 | BDL | 2015 | AA |
| 2013\-06\-26 | BDL | 1644 | DL |
| 2013\-06\-26 | BDL | 2584 | WN |
| 2013\-06\-26 | BDL | 1065 | DL |
| 2013\-06\-26 | BDL | 1077 | US |
Note that here we have used a `WHERE` clause on columns that are not present in the result set. We can do this because `WHERE` operates only on the rows of the original table.
Conversely, if we were to try and use a `WHERE` clause on `theDate`, it would not work, because (as the error suggests), `theDate` is not the name of a column in the `flights` table.
```
SELECT
STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d') AS theDate,
origin, flight, carrier
FROM flights
WHERE theDate = '2013-06-26'
AND origin = 'BDL'
LIMIT 0, 6;
```
```
ERROR 1049 (42000): Unknown database 'airlines'
```
A workaround is to copy and paste the definition of `theDate` into the `WHERE` clause, since `WHERE` *can* operate on functions of columns in the original table (results not shown).
```
SELECT
STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d') AS theDate,
origin, flight, carrier
FROM flights
WHERE STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d') =
'2013-06-26'
AND origin = 'BDL'
LIMIT 0, 6;
```
This query will work, but here we have stumbled onto another wrinkle that exposes subtleties in how SQL executes queries.
The previous query was able to make use of indices defined on the `year`, `month`, and `day` columns.
However, the latter query is not able to make use of these indices because it is trying to filter on functions of a combination of those columns.
This makes the latter query very slow.
We will return to a fuller discussion of indices in Section [16\.1](ch-sql2.html#sec:indices).
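One way to see whether a query can take advantage of an index is to ask the server for its execution plan. The sketch below (not code from this book) prefixes each query with MySQL's `EXPLAIN` keyword and submits it through the `db` connection created at the beginning of this chapter; the exact columns returned depend on the server version.
```
library(DBI)
# the indexed version: the plan should report a usable key on year, month, day
dbGetQuery(db, "EXPLAIN SELECT origin, flight FROM flights
  WHERE year = 2013 AND month = 6 AND day = 26 AND origin = 'BDL'")
# the function-of-columns version: no index applies, so the plan
# typically reports a full scan of the flights table
dbGetQuery(db, "EXPLAIN SELECT origin, flight FROM flights
  WHERE STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d')
    = '2013-06-26' AND origin = 'BDL'")
```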
Finally, we can use the `BETWEEN` syntax to filter through a range of dates.
The `DISTINCT` keyword limits the result set to one row per unique value of `theDate`.
```
SELECT
DISTINCT STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d')
AS theDate
FROM flights
WHERE year = 2013 AND month = 6 AND day BETWEEN 26 and 30
AND origin = 'BDL'
LIMIT 0, 6;
```
| theDate |
| --- |
| 2013\-06\-26 |
| 2013\-06\-27 |
| 2013\-06\-28 |
| 2013\-06\-29 |
| 2013\-06\-30 |
Similarly, we can use the `IN` syntax to search for items in a specified list.
Note that flights on the 27th, 28th, and 29th of June are retrieved in the query using `BETWEEN` but not in the query using `IN`.
```
SELECT
DISTINCT STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d')
AS theDate
FROM flights
WHERE year = 2013 AND month = 6 AND day IN (26, 30)
AND origin = 'BDL'
LIMIT 0, 6;
```
| theDate |
| --- |
| 2013\-06\-26 |
| 2013\-06\-30 |
SQL also supports `OR` clauses in addition to `AND` clauses, but one must always be careful with parentheses when using `OR`. Note the difference in the numbers of rows returned by the following two queries (557,874 vs. 2,542\). The `COUNT` function simply counts the number of rows. The criteria in the `WHERE` clause are not evaluated left to right: `AND` binds more tightly than `OR`. Thus, the first query below returns all flights in June 2013 from any origin, plus all flights that left Bradley on the 26th day of any month in any year.
```
/* returns 557,874 records */
SELECT
COUNT(*) AS N
FROM flights
WHERE year = 2013 AND month = 6 OR day = 26
AND origin = 'BDL';
```
```
/* returns 2,542 records */
SELECT
COUNT(*) AS N
FROM flights
WHERE year = 2013 AND (month = 6 OR day = 26)
AND origin = 'BDL';
```
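To make the implicit grouping in the first query explicit, we can add the parentheses ourselves. The following sketch should return the same 557,874 rows, since `AND` binds more tightly than `OR`.

```
/* equivalent to the first query above */
SELECT
  COUNT(*) AS N
FROM flights
WHERE (year = 2013 AND month = 6)
  OR (day = 26 AND origin = 'BDL');
```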
### 15\.4\.3 `GROUP BY`
The `GROUP BY` clause allows one to *aggregate* multiple rows according to some criteria.
The challenge when using `GROUP BY` is specifying *how* multiple rows of data should be reduced into a single value. [Aggregate functions](https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html) (e.g., `COUNT()`, `SUM()`, `MAX()`, and `AVG()`) are necessary.
We know that there were 65 flights that left Bradley Airport on June 26th, 2013, but how many belonged to each airline carrier?
To get this information we need to aggregate the individual flights, based on who the carrier was.
```
SELECT
carrier,
COUNT(*) AS numFlights,
SUM(1) AS numFlightsAlso
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
GROUP BY carrier;
```
| carrier | numFlights | numFlightsAlso |
| --- | --- | --- |
| 9E | 5 | 5 |
| AA | 4 | 4 |
| B6 | 5 | 5 |
| DL | 11 | 11 |
| EV | 5 | 5 |
| MQ | 5 | 5 |
| UA | 1 | 1 |
| US | 7 | 7 |
| WN | 19 | 19 |
| YV | 3 | 3 |
For each of these airlines, which flight left the earliest in the morning?
```
SELECT
carrier,
COUNT(*) AS numFlights,
MIN(dep_time)
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
GROUP BY carrier;
```
| carrier | numFlights | MIN(dep\_time) |
| --- | --- | --- |
| 9E | 5 | 0 |
| AA | 4 | 559 |
| B6 | 5 | 719 |
| DL | 11 | 559 |
| EV | 5 | 555 |
| MQ | 5 | 0 |
| UA | 1 | 0 |
| US | 7 | 618 |
| WN | 19 | 601 |
| YV | 3 | 0 |
This is a bit tricky to figure out because the `dep_time` variable is stored as an integer, but would be better represented as a `time` data type.
If it is a three\-digit integer, then the first digit is the hour, but if it is a four\-digit integer, then the first two digits are the hour.
In either case, the last two digits are the minutes, and there are no seconds recorded.
The `MAKETIME()` function combined with the `IF(condition, value if true, value if false)` statement can help us with this.
```
SELECT
carrier,
COUNT(*) AS numFlights,
MAKETIME(
IF(LENGTH(MIN(dep_time)) = 3,
LEFT(MIN(dep_time), 1),
LEFT(MIN(dep_time), 2)
),
RIGHT(MIN(dep_time), 2),
0
) AS firstDepartureTime
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
GROUP BY carrier
LIMIT 0, 6;
```
| carrier | numFlights | firstDepartureTime |
| --- | --- | --- |
| 9E | 5 | 00:00:00 |
| AA | 4 | 05:59:00 |
| B6 | 5 | 07:19:00 |
| DL | 11 | 05:59:00 |
| EV | 5 | 05:55:00 |
| MQ | 5 | 00:00:00 |
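An alternative sketch avoids branching on the string length: MySQL’s integer division operator (`DIV`) and the `MOD()` function extract the hour and the minutes directly, and should give the same times for both three\- and four\-digit values of `dep_time`.

```
SELECT
  carrier,
  COUNT(*) AS numFlights,
  /* e.g., 559 -> hour 5, minute 59; 1644 -> hour 16, minute 44 */
  MAKETIME(MIN(dep_time) DIV 100, MOD(MIN(dep_time), 100), 0) AS firstDepartureTime
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
  AND origin = 'BDL'
GROUP BY carrier
LIMIT 0, 6;
```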
We can also group by more than one column, but need to be careful to specify that we apply an aggregate function to each column that we are *not* grouping by.
In this case, every time we access `dep_time`, we apply the `MIN()` function, since there may be many different values of `dep_time` associated with each unique combination of `carrier` and `dest`.
Applying the `MIN()` function returns the smallest such value unambiguously.
```
SELECT
carrier, dest,
COUNT(*) AS numFlights,
MAKETIME(
IF(LENGTH(MIN(dep_time)) = 3,
LEFT(MIN(dep_time), 1),
LEFT(MIN(dep_time), 2)
),
RIGHT(MIN(dep_time), 2),
0
) AS firstDepartureTime
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
GROUP BY carrier, dest
LIMIT 0, 6;
```
| carrier | dest | numFlights | firstDepartureTime |
| --- | --- | --- | --- |
| 9E | CVG | 2 | 00:00:00 |
| 9E | DTW | 1 | 18:20:00 |
| 9E | MSP | 1 | 11:25:00 |
| 9E | RDU | 1 | 09:38:00 |
| AA | DFW | 3 | 07:04:00 |
| AA | MIA | 1 | 05:59:00 |
### 15\.4\.4 `ORDER BY`
The use of aggregate functions allows us to answer some very basic exploratory questions.
Combining this with an `ORDER BY` clause will bring the most interesting results to the top.
For example, which destinations are most common from Bradley in 2013?
```
SELECT
dest, SUM(1) AS numFlights
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
ORDER BY numFlights DESC
LIMIT 0, 6;
```
| dest | numFlights |
| --- | --- |
| ORD | 2657 |
| BWI | 2613 |
| ATL | 2277 |
| CLT | 1842 |
| MCO | 1789 |
| DTW | 1523 |
Note that since the `ORDER BY` clause cannot be executed until all of the data are retrieved, it operates on the result set, and not the rows of the original data.
Thus, derived columns *can* be referenced in the `ORDER BY` clause.
Which of those destinations had the lowest average arrival delay time?
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
| dest | numFlights | avg\_arr\_delay |
| --- | --- | --- |
| CLE | 57 | \-13\.07 |
| LAX | 127 | \-10\.31 |
| CVG | 708 | \-7\.37 |
| MSP | 981 | \-3\.66 |
| MIA | 404 | \-3\.27 |
| DCA | 204 | \-2\.90 |
[*Cleveland Hopkins International Airport*](https://en.wikipedia.org/w/index.php?search=Cleveland%20Hopkins%20International%20Airport) (CLE) has the smallest average arrival delay time.
### 15\.4\.5 `HAVING`
Although flights to Cleveland had the lowest average arrival delay—more than 13 minutes ahead of schedule—there were only 57 flights that went from Bradley to Cleveland in all of 2013\.
It probably makes more sense to consider only those destinations that had, say, at least two flights per day.
We can filter our result set using a `HAVING` clause.
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
HAVING numFlights > 365 * 2
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
| dest | numFlights | avg\_arr\_delay |
| --- | --- | --- |
| MSP | 981 | \-3\.664 |
| DTW | 1523 | \-2\.148 |
| CLT | 1842 | \-0\.120 |
| FLL | 1011 | 0\.277 |
| DFW | 1062 | 0\.750 |
| ATL | 2277 | 4\.470 |
We can see now that among the airports that are common destinations from Bradley, Minneapolis\-St. Paul has the lowest average arrival delay, at nearly 4 minutes ahead of schedule.
Note that MySQL and SQLite support the use of derived column aliases in `HAVING` clauses, but PostgreSQL does not.
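If portability is a concern, one can instead repeat the aggregate expression itself in the `HAVING` clause, as in the following sketch of the same query (the MySQL\-style `LIMIT` syntax would still need adjusting for PostgreSQL).

```
SELECT
  dest, SUM(1) AS numFlights,
  AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
  AND origin = 'BDL'
GROUP BY dest
HAVING SUM(1) > 365 * 2
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```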
It is important to understand that the `HAVING` clause operates on the result set.
While `WHERE` and `HAVING` are similar in spirit and syntax (and indeed, in **dplyr** they are both masked by the `filter()` function), they are different, because `WHERE` operates on the original data in the table and `HAVING` operates on the result set. Moving the `HAVING` condition to the `WHERE` clause will not work.
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
AND numFlights > 365 * 2
GROUP BY dest
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
```
ERROR 1054 (42S22): Unknown column 'numFlights' in 'where clause'
```
On the other hand, moving the `WHERE` conditions to the `HAVING` clause will work, but could result in a major loss of efficiency.
The following query will return the same result as the one we considered previously.
```
SELECT
origin, dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
GROUP BY origin, dest
HAVING numFlights > 365 * 2
AND origin = 'BDL'
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
Moving the `origin = 'BDL'` condition to the `HAVING` clause means that *all* airport destinations had to be considered.
With this condition in the `WHERE` clause, the server can quickly identify only those flights that left Bradley, perform the aggregation, and then filter this relatively small result set for those entries with a sufficient number of flights.
Conversely, with this condition in the `HAVING` clause, the server is forced to consider *all* 3 million flights from 2013, perform the aggregation for all pairs of airports, and then filter this much larger result set for those entries with a sufficient number of flights from Bradley.
Filtering the result set is not appreciably slower, but aggregating over 3 million rows, as opposed to a few thousand, is.
To maximize query efficiency, put conditions in a `WHERE` clause as opposed to a `HAVING` clause whenever possible.
### 15\.4\.6 `LIMIT`
A `LIMIT` clause simply allows you to truncate the output to a specified number of rows.
This achieves an effect analogous to the **R** commands `head()` or `slice()`.
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
HAVING numFlights > 365*2
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
| dest | numFlights | avg\_arr\_delay |
| --- | --- | --- |
| MSP | 981 | \-3\.664 |
| DTW | 1523 | \-2\.148 |
| CLT | 1842 | \-0\.120 |
| FLL | 1011 | 0\.277 |
| DFW | 1062 | 0\.750 |
| ATL | 2277 | 4\.470 |
Note, however, that it is also possible to retrieve rows not at the beginning.
The first number in the `LIMIT` clause indicates the number of rows to skip, and the latter indicates the number of rows to retrieve.
Thus, this query will return the 4th–7th airports in the previous list.
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
HAVING numFlights > 365*2
ORDER BY avg_arr_delay ASC
LIMIT 3,4;
```
| dest | numFlights | avg\_arr\_delay |
| --- | --- | --- |
| FLL | 1011 | 0\.277 |
| DFW | 1062 | 0\.750 |
| ATL | 2277 | 4\.470 |
| BWI | 2613 | 5\.032 |
### 15\.4\.7 `JOIN`
In Chapter [5](ch-join.html#ch:join), we presented several **dplyr** join operators: `inner_join()` and `left_join()`.
Other functions (e.g., `semi_join()`) are also available.
As you might expect, these operations are fundamental to SQL—and moreover, the success of the RDBMS paradigm is predicated on the ability to efficiently join tables together.
Recall that SQL is a *relational* database management system—the relations between the tables allow you to write queries that efficiently tie together information from multiple sources.
The syntax for performing these operations in SQL requires the `JOIN` keyword.
In general, there are four pieces of information that you need to specify in order to join two tables:
* The name of the first table that you want to join
* (optional) The *type* of join that you want to use
* The name of the second table that you want to join
* The *condition(s)* under which you want the records in the first table to match the records in the second table
There are many possible permutations of how two tables can be joined, and in many cases, a single query may involve several or even dozens of tables.
In practice, the `JOIN` syntax varies among SQL implementations.
In MySQL, `FULL OUTER JOIN`s are not available, but the following join types are:
* `JOIN`: includes all of the rows that are present in *both* tables and match.
* `LEFT JOIN`: includes all of the rows that are present in the first table. Rows in the first table that have no match in the second are filled with `NULL`s.
* `RIGHT JOIN`: includes all of the rows that are present in the second table. This is the opposite of a `LEFT JOIN`.
* `CROSS JOIN`: the Cartesian product of the two tables. Thus, all possible combinations of a row from the first table with a row from the second table are returned; there is no joining condition (see the sketch after this list).
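Since `CROSS JOIN` does not appear elsewhere in this chapter, here is a minimal sketch. The inner subqueries simply keep the Cartesian product small (which rows you get is arbitrary without an `ORDER BY`), and the choice of columns is arbitrary.

```
SELECT a.faa, c.carrier
FROM (SELECT faa FROM airports LIMIT 0, 3) AS a
CROSS JOIN (SELECT carrier FROM carriers LIMIT 0, 2) AS c;
```

Each of the \\(3 \\times 2 = 6\\) possible pairings of an airport code with a carrier code appears in the result set.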
Recall that in the `flights` table, the `origin` and `dest`ination of each flight are recorded.
```
SELECT
origin, dest,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
| origin | dest | flight | carrier |
| --- | --- | --- | --- |
| BDL | EWR | 4714 | EV |
| BDL | MIA | 2015 | AA |
| BDL | DTW | 1644 | DL |
| BDL | BWI | 2584 | WN |
| BDL | ATL | 1065 | DL |
| BDL | DCA | 1077 | US |
Note that the `flights` table contains only the three\-character FAA airport codes for both airports—not the full name of the airport.
These cryptic abbreviations are not easily understood by humans.
Which airport is `EWR`?
Wouldn’t it be more convenient to have the airport name in the table?
It would be more convenient, but it would also be significantly less efficient from a storage and retrieval point of view, as well as more problematic from a [*database integrity*](https://en.wikipedia.org/w/index.php?search=database%20integrity) point of view.
The solution is to store information *about airports* in the `airports` table, along with these cryptic codes—which we will now call *keys*—and to only store these keys in the `flights` table—which is about *flights*, not airports.
However, we can use these keys to join the two tables together in our query.
In this manner, we can [*have our cake and eat it too*](https://en.wikipedia.org/w/index.php?search=have%20our%20cake%20and%20eat%20it%20too): The data are stored in separate tables for efficiency, but we can still have the full names in the result set if we choose.
Note how once again, the distinction between the rows of the original table and the result set is critical.
To write our query, we simply have to specify the table we want to join onto `flights` (e.g., `airports`) and the condition by which we want to match rows in `flights` with rows in `airports`.
In this case, we want the airport code listed in `flights.dest` to be matched to the airport code in `airports.faa`.
We need to specify that we want to see the `name` column from the `airports` table in the result set (see Table [15\.3](ch-sql.html#tab:join)).
```
SELECT
origin, dest,
airports.name AS dest_name,
flight, carrier
FROM flights
JOIN airports ON flights.dest = airports.faa
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
Table 15\.3: Using JOIN to retrieve airport names.
| origin | dest | dest\_name | flight | carrier |
| --- | --- | --- | --- | --- |
| BDL | EWR | Newark Liberty Intl | 4714 | EV |
| BDL | MIA | Miami Intl | 2015 | AA |
| BDL | DTW | Detroit Metro Wayne Co | 1644 | DL |
| BDL | BWI | Baltimore Washington Intl | 2584 | WN |
| BDL | ATL | Hartsfield Jackson Atlanta Intl | 1065 | DL |
| BDL | DCA | Ronald Reagan Washington Natl | 1077 | US |
This is much easier to read for humans.
One quick improvement to the readability of this query is to use *table aliases*.
This will save us some typing now, but a considerable amount later on.
A table alias is often just a single letter after the reserved word `AS` in the specification of each table in the `FROM` and `JOIN` clauses.
Note that these aliases can be referenced anywhere else in the query (see Table [15\.4](ch-sql.html#tab:join-alias)).
```
SELECT
origin, dest,
a.name AS dest_name,
flight, carrier
FROM flights AS o
JOIN airports AS a ON o.dest = a.faa
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
Table 15\.4: Using JOIN with table aliases.
| origin | dest | dest\_name | flight | carrier |
| --- | --- | --- | --- | --- |
| BDL | EWR | Newark Liberty Intl | 4714 | EV |
| BDL | MIA | Miami Intl | 2015 | AA |
| BDL | DTW | Detroit Metro Wayne Co | 1644 | DL |
| BDL | BWI | Baltimore Washington Intl | 2584 | WN |
| BDL | ATL | Hartsfield Jackson Atlanta Intl | 1065 | DL |
| BDL | DCA | Ronald Reagan Washington Natl | 1077 | US |
In the same manner, there are cryptic codes in `flights` for the airline carriers.
The full name of each carrier is stored in the `carriers` table, since that is the place where information about carriers are stored.
We can join this table to our result set to retrieve the name of each carrier (see Table [15\.5](ch-sql.html#tab:join-multiple)).
```
SELECT
dest, a.name AS dest_name,
o.carrier, c.name AS carrier_name
FROM flights AS o
JOIN airports AS a ON o.dest = a.faa
JOIN carriers AS c ON o.carrier = c.carrier
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
Table 15\.5: Using JOIN with multiple tables.
| dest | dest\_name | carrier | carrier\_name |
| --- | --- | --- | --- |
| EWR | Newark Liberty Intl | EV | ExpressJet Airlines Inc. |
| MIA | Miami Intl | AA | American Airlines Inc. |
| DTW | Detroit Metro Wayne Co | DL | Delta Air Lines Inc. |
| BWI | Baltimore Washington Intl | WN | Southwest Airlines Co. |
| ATL | Hartsfield Jackson Atlanta Intl | DL | Delta Air Lines Inc. |
| DCA | Ronald Reagan Washington Natl | US | US Airways Inc. |
Finally, to retrieve the name of the originating airport, we can join onto the same table more than once.
Here the table aliases are necessary.
```
SELECT
flight,
a2.name AS orig_name,
a1.name AS dest_name,
c.name AS carrier_name
FROM flights AS o
JOIN airports AS a1 ON o.dest = a1.faa
JOIN airports AS a2 ON o.origin = a2.faa
JOIN carriers AS c ON o.carrier = c.carrier
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
Table 15\.6: Using JOIN on the same table more than once.
| flight | orig\_name | dest\_name | carrier\_name |
| --- | --- | --- | --- |
| 4714 | Bradley Intl | Newark Liberty Intl | ExpressJet Airlines Inc. |
| 2015 | Bradley Intl | Miami Intl | American Airlines Inc. |
| 1644 | Bradley Intl | Detroit Metro Wayne Co | Delta Air Lines Inc. |
| 2584 | Bradley Intl | Baltimore Washington Intl | Southwest Airlines Co. |
| 1065 | Bradley Intl | Hartsfield Jackson Atlanta Intl | Delta Air Lines Inc. |
| 1077 | Bradley Intl | Ronald Reagan Washington Natl | US Airways Inc. |
Table [15\.6](ch-sql.html#tab:join-multiple-times) displays the results.
Now it is perfectly clear that [*ExpressJet*](https://en.wikipedia.org/w/index.php?search=ExpressJet) flight 4714 flew from Bradley International airport to [*Newark Liberty International airport*](https://en.wikipedia.org/w/index.php?search=Newark%20Liberty%20International%20airport) on June 26th, 2013\.
However, in order to put this together, we had to join four tables.
Wouldn’t it be easier to store these data in a single table that looks like the result set? For a variety of reasons, the answer is no.
First, there are very literal storage considerations.
The `airports.name` field has room for 255 characters.
```
DESCRIBE airports;
```
| Field | Type | Null | Key | Default | Extra |
| --- | --- | --- | --- | --- | --- |
| faa | varchar(3\) | NO | PRI | | |
| name | varchar(255\) | YES | | NA | |
| lat | decimal(10,7\) | YES | | NA | |
| lon | decimal(10,7\) | YES | | NA | |
| alt | int(11\) | YES | | NA | |
| tz | smallint(4\) | YES | | NA | |
| dst | char(1\) | YES | | NA | |
| city | varchar(255\) | YES | | NA | |
| country | varchar(255\) | YES | | NA | |
This takes up considerably more space on disk than the three\-character abbreviation stored in `airports.faa`.
For small data sets, this overhead might not matter, but the `flights` table contains 169 million rows, so replacing the three\-character `origin` field with a 255\-character field would result in a noticeable difference in space on disk. (Plus, we’d have to do this twice, since the same would apply to `dest`.)
We’d suffer a similar penalty for including the full name of each carrier in the `flights` table.
Other things being equal, tables that take up less room on disk are faster to search.
Second, it would be logically inefficient to store the full name of each airport in the `flights` table.
The name of the airport doesn’t change for each flight.
It doesn’t make sense to store the full name of the airport any more than it would make sense to store the full name of the month, instead of just the integer corresponding to each month.
Third, what if the name of the airport *did* change?
For example, in 1998 the airport with code DCA was renamed from Washington National to [*Ronald Reagan Washington National*](https://en.wikipedia.org/w/index.php?search=Ronald%20Reagan%20Washington%20National).
It is still the same airport in the same location, and it still has code DCA—only the full name has changed. With separate tables, we only need to update a single field: the `name` column in the `airports` table for the DCA row.
Had we stored the full name in the `flights` table, we would have to make millions of substitutions, and would risk ending up in a situation in which both “Washington National” and “Reagan National” were present in the table.
When designing a database, how do you know whether to create a separate table for pieces of information?
The short answer is that if you are designing a persistent, scalable database for speed and efficiency, then every *entity* should have its own table.
In practice, very often it is not worth the time and effort to set this up if we are simply doing some quick analysis.
But for permanent systems—like a database backend to a website—proper curation is necessary.
The notions of [normal forms](https://en.wikipedia.org/wiki/Database_normalization#Normal_forms), and specifically [*third normal form*](https://en.wikipedia.org/w/index.php?search=third%20normal%20form) (3NF), provide guidance for how to properly design a database.
A full discussion of this is beyond the scope of this book, but the basic idea is to “keep like with like.”
If you are designing a database that will be used for a long time or by a lot of people, take the extra time to design it well.
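To make the "keep like with like" idea concrete, here is a minimal sketch of a normalized two\-table design. The table names (`my_airports`, `my_flights`) and the column types are hypothetical; this is not the actual definition of the `airlines` database tables.

```
/* each entity gets its own table; the flights table stores only the keys */
CREATE TABLE my_airports (
  faa varchar(3) NOT NULL PRIMARY KEY,
  name varchar(255)
);

CREATE TABLE my_flights (
  flight smallint,
  carrier varchar(7),
  origin varchar(3),
  dest varchar(3),
  FOREIGN KEY (origin) REFERENCES my_airports (faa),
  FOREIGN KEY (dest) REFERENCES my_airports (faa)
);
```

With such a design, if an airport is renamed, only one row in `my_airports` needs to change.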
#### 15\.4\.7\.1 `LEFT JOIN`
Recall that in a `JOIN`—also known as an *inner* or *natural* or *regular* `JOIN`—all possible matching pairs of rows from the two tables are included.
Thus, if the first table has \\(n\\) rows and the second table has \\(m\\), as many as \\(nm\\) rows could be returned. However, in the `airports` table each row has a unique airport code, and thus every row in `flights` will match the destination field to *at most* one row in the `airports` table.
What happens if no such entry is present in `airports`?
That is, what happens if there is a destination airport in `flights` that has no corresponding entry in `airports`? If you are using a `JOIN`, then the offending row in `flights` is simply not returned.
On the other hand, if you are using a `LEFT JOIN`, then every row in the first table is returned, and the corresponding entries from the second table are left blank.
In this example, no airport names were found for several airports.
```
SELECT
year, month, day, origin, dest,
a.name AS dest_name,
flight, carrier
FROM flights AS o
LEFT JOIN airports AS a ON o.dest = a.faa
WHERE year = 2013 AND month = 6 AND day = 26
  AND a.name IS NULL
LIMIT 0, 6;
```
| year | month | day | origin | dest | dest\_name | flight | carrier |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2013 | 6 | 26 | BOS | SJU | NA | 261 | B6 |
| 2013 | 6 | 26 | JFK | SJU | NA | 1203 | B6 |
| 2013 | 6 | 26 | JFK | PSE | NA | 745 | B6 |
| 2013 | 6 | 26 | JFK | SJU | NA | 1503 | B6 |
| 2013 | 6 | 26 | JFK | BQN | NA | 839 | B6 |
| 2013 | 6 | 26 | JFK | BQN | NA | 939 | B6 |
The output indicates that the airports are all in [*Puerto Rico*](https://en.wikipedia.org/w/index.php?search=Puerto%20Rico): SJU is in [*San Juan*](https://en.wikipedia.org/w/index.php?search=San%20Juan), BQN is in [*Aguadilla*](https://en.wikipedia.org/w/index.php?search=Aguadilla), and PSE is in [*Ponce*](https://en.wikipedia.org/w/index.php?search=Ponce).
The result set from a `LEFT JOIN` is always a superset of the result set from the same query with a regular `JOIN`.
A `RIGHT JOIN` is simply the opposite of a `LEFT JOIN`—that is, the tables have simply been specified in the opposite order.
This can be useful in certain cases, especially when you are joining more than two tables.
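As a sketch of that symmetry, the `LEFT JOIN` query above could be rewritten by listing the tables in the opposite order and using a `RIGHT JOIN`; it should return the same rows.

```
SELECT
  year, month, day, origin, dest,
  a.name AS dest_name,
  flight, carrier
FROM airports AS a
RIGHT JOIN flights AS o ON o.dest = a.faa
WHERE year = 2013 AND month = 6 AND day = 26
  AND a.name IS NULL
LIMIT 0, 6;
```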
### 15\.4\.8 `UNION`
Two separate queries can be combined using a `UNION` clause.
```
(SELECT
year, month, day, origin, dest,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL' AND dest = 'MSP')
UNION
(SELECT
year, month, day, origin, dest,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'JFK' AND dest = 'ORD')
LIMIT 0,10;
```
| year | month | day | origin | dest | flight | carrier |
| --- | --- | --- | --- | --- | --- | --- |
| 2013 | 6 | 26 | BDL | MSP | 797 | DL |
| 2013 | 6 | 26 | BDL | MSP | 3338 | 9E |
| 2013 | 6 | 26 | BDL | MSP | 1226 | DL |
| 2013 | 6 | 26 | JFK | ORD | 905 | B6 |
| 2013 | 6 | 26 | JFK | ORD | 1105 | B6 |
| 2013 | 6 | 26 | JFK | ORD | 3523 | 9E |
| 2013 | 6 | 26 | JFK | ORD | 1711 | AA |
| 2013 | 6 | 26 | JFK | ORD | 105 | B6 |
| 2013 | 6 | 26 | JFK | ORD | 3521 | 9E |
| 2013 | 6 | 26 | JFK | ORD | 3525 | 9E |
This is analogous to the **dplyr** operation `bind_rows()`, although note that `UNION` discards duplicate rows from the combined result set; `UNION ALL` is the closer analogue of `bind_rows()`, since it keeps every row.
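A minimal sketch of that distinction, deliberately combining a query with itself: `UNION` collapses the duplicates, while `UNION ALL` keeps every row.

```
/* UNION: a single row, since duplicate rows are removed */
(SELECT origin FROM flights
  WHERE year = 2013 AND month = 6 AND day = 26 AND origin = 'BDL')
UNION
(SELECT origin FROM flights
  WHERE year = 2013 AND month = 6 AND day = 26 AND origin = 'BDL');

/* UNION ALL: one row per flight (two copies of each of the 65 flights) */
(SELECT origin FROM flights
  WHERE year = 2013 AND month = 6 AND day = 26 AND origin = 'BDL')
UNION ALL
(SELECT origin FROM flights
  WHERE year = 2013 AND month = 6 AND day = 26 AND origin = 'BDL')
LIMIT 0, 6;
```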
### 15\.4\.9 Subqueries
It is also possible to use a result set as if it were a table.
That is, you can write one query to generate a result set, and then use that result set in a larger query as if it were a table, or even just a list of values.
The initial query is called a [*subquery*](https://en.wikipedia.org/w/index.php?search=subquery).
For example, Bradley is listed as an “international” airport, but with the exception of trips to [*Montreal*](https://en.wikipedia.org/w/index.php?search=Montreal) and [*Toronto*](https://en.wikipedia.org/w/index.php?search=Toronto) and occasional flights to [*Mexico*](https://en.wikipedia.org/w/index.php?search=Mexico) and [*Europe*](https://en.wikipedia.org/w/index.php?search=Europe), it is more of a regional airport.
Does it have any flights coming from or going to [*Alaska*](https://en.wikipedia.org/w/index.php?search=Alaska) and [*Hawaii*](https://en.wikipedia.org/w/index.php?search=Hawaii)?
We can retrieve the list of airports outside the lower 48 states by filtering the airports table using the time zone `tz` column (see Table [15\.7](ch-sql.html#tab:outside) for the first six).
```
SELECT faa, name, tz, city
FROM airports AS a
WHERE tz < -8
LIMIT 0, 6;
```
Table 15\.7: First set of six airports outside the lower 48 states.
| faa | name | tz | city |
| --- | --- | --- | --- |
| 369 | Atmautluak Airport | \-9 | Atmautluak |
| 6K8 | Tok Junction Airport | \-9 | Tok |
| ABL | Ambler Airport | \-9 | Ambler |
| ADK | Adak Airport | \-9 | Adak Island |
| ADQ | Kodiak | \-9 | Kodiak |
| AET | Allakaket Airport | \-9 | Allakaket |
Now, let’s use the airport codes generated by that query as a list to filter the flights leaving from Bradley in 2013\.
Note the subquery in parentheses in the query below.
```
SELECT
dest, a.name AS dest_name,
SUM(1) AS N, COUNT(distinct carrier) AS numCarriers
FROM flights AS o
LEFT JOIN airports AS a ON o.dest = a.faa
WHERE year = 2013
AND origin = 'BDL'
AND dest IN
(SELECT faa
FROM airports
WHERE tz < -8)
GROUP BY dest;
```
No results are returned.
As it turns out, Bradley did not have any outgoing flights to Alaska or Hawaii.
However, it did have some flights to and from airports in the Pacific Time Zone.
```
SELECT
  dest, a.name AS orig_name,
SUM(1) AS N, COUNT(distinct carrier) AS numCarriers
FROM flights AS o
LEFT JOIN airports AS a ON o.origin = a.faa
WHERE year = 2013
AND dest = 'BDL'
AND origin IN
(SELECT faa
FROM airports
WHERE tz < -7)
GROUP BY origin;
```
| dest | orig\_name | N | numCarriers |
| --- | --- | --- | --- |
| BDL | Mc Carran Intl | 262 | 1 |
| BDL | Los Angeles Intl | 127 | 1 |
We could also employ a similar subquery to create an ephemeral table (results not shown).
```
SELECT
  dest, a.name AS orig_name,
SUM(1) AS N, COUNT(distinct carrier) AS numCarriers
FROM flights AS o
JOIN (SELECT *
FROM airports
WHERE tz < -7) AS a
ON o.origin = a.faa
WHERE year = 2013 AND dest = 'BDL'
GROUP BY origin;
```
Of course, we could have achieved the same result with a `JOIN` and `WHERE` (results not shown).
```
SELECT
  dest, a.name AS orig_name,
SUM(1) AS N, COUNT(distinct carrier) AS numCarriers
FROM flights AS o
LEFT JOIN airports AS a ON o.origin = a.faa
WHERE year = 2013
AND dest = 'BDL'
AND tz < -7
GROUP BY origin;
```
It is important to note that while subqueries are often convenient, they cannot make use of indices.
In most cases it is preferable to write the query using joins as opposed to subqueries.
15\.5 Extended example: FiveThirtyEight flights
-----------------------------------------------
Over at [FiveThirtyEight](http://www.fivethirtyeight.com), [Nate Silver](https://en.wikipedia.org/w/index.php?search=Nate%20Silver) wrote [an article](http://fivethirtyeight.com/features/fastest-airlines-fastest-airports/) about airline delays using the same Bureau of Transportation Statistics data that we have in our database (see the link in the footnote[27](#fn27)).
We can use this article as an exercise in querying our airlines database.
The article makes a number of claims.
We’ll walk through some of these. First, the article states:
> In 2014, the 6 million domestic flights the U.S. government tracked required an extra 80 million minutes to reach their destinations.
> The majority of flights (54%) arrived ahead of schedule in 2014\. (The 80 million minutes figure cited earlier is a net number. It consists of about 115 million minutes of delays minus 35 million minutes saved from early arrivals.)
Although there are a number of claims here, we can verify them with a single query.
Here, we compute the total number of flights, the percentage of those that were on time and ahead of schedule, and the total number of minutes of delays.
```
SELECT
SUM(1) AS numFlights,
SUM(IF(arr_delay < 15, 1, 0)) / SUM(1) AS ontimePct,
SUM(IF(arr_delay < 0, 1, 0)) / SUM(1) AS earlyPct,
SUM(arr_delay) / 1e6 AS netMinLate,
SUM(IF(arr_delay > 0, arr_delay, 0)) / 1e6 AS minLate,
SUM(IF(arr_delay < 0, arr_delay, 0)) / 1e6 AS minEarly
FROM flights AS o
WHERE year = 2014
LIMIT 0, 6;
```
| numFlights | ontimePct | earlyPct | netMinLate | minLate | minEarly |
| --- | --- | --- | --- | --- | --- |
| 5819811 | 0\.787 | 0\.542 | 41\.6 | 77\.6 | \-36 |
We see the right number of flights (about 6 million), and the percentage of flights that were early (about 54%) is also about right.
The total number of minutes early (about 36 million) is also about right.
However, the total number of minutes late is way off (about 78 million vs. 115 million), and as a consequence, so is the net number of minutes late (about 42 million vs. 80 million).
In this case, you have to read the fine print.
A description of the [methodology](http://fivethirtyeight.com/features/how-we-found-the-fastest-flights/) used in this analysis contains some information about the *estimates*[28](#fn28) of the arrival delay for cancelled flights.
The problem is that cancelled flights have an `arr_delay` value of 0, yet in the real\-world experience of travelers, the practical delay is much longer.
The FiveThirtyEight data scientists concocted an estimate of the actual delay experienced by travelers due to cancelled flights.
> A quick\-and\-dirty answer is that cancelled flights are associated with a delay of four or five hours, on average. However, the calculation varies based on the particular circumstances of each flight.
Unfortunately, reproducing the estimates made by FiveThirtyEight is likely impossible, and certainly beyond the scope of what we can accomplish here.
Since we only care about the aggregate number of minutes, we can amend our computation to add, say, 270 minutes of delay time for each cancelled flight.
```
SELECT
SUM(1) AS numFlights,
SUM(IF(arr_delay < 15, 1, 0)) / SUM(1) AS ontimePct,
SUM(IF(arr_delay < 0, 1, 0)) / SUM(1) AS earlyPct,
SUM(IF(cancelled = 1, 270, arr_delay)) / 1e6 AS netMinLate,
SUM(
IF(cancelled = 1, 270, IF(arr_delay > 0, arr_delay, 0))
) / 1e6 AS minLate,
SUM(IF(arr_delay < 0, arr_delay, 0)) / 1e6 AS minEarly
FROM flights AS o
WHERE year = 2014
LIMIT 0, 6;
```
| numFlights | ontimePct | earlyPct | netMinLate | minLate | minEarly |
| --- | --- | --- | --- | --- | --- |
| 5819811 | 0\.787 | 0\.542 | 75\.9 | 112 | \-36 |
This again puts us in the neighborhood of the estimates from the article.
One has to read the fine print to properly vet these estimates.
The problem is not that the estimates reported by Silver are inaccurate—on the contrary, they seem plausible and are certainly better than not correcting for cancelled flights at all.
However, it is not immediately clear from reading the article (you have to read the separate methodology article) that these estimates—which account for roughly 25% of the total minutes late reported—are in fact estimates and not hard data.
Later in the article, Silver presents a figure that breaks down the percentage of flights that were on time, had a delay of 15 to 119 minutes, or were delayed by 2 hours or more (or were cancelled or diverted).
We can pull the data for this figure with the following query.
Here, in order to plot these results, we need to actually bring them back into **R**.
To do this, we will use the functionality provided by the **knitr** package (see Section [F.4\.3](ch-db-setup.html#sec:connect-r-sql) for more information about connecting to a MySQL server from within **R**).
The results of this query will be saved to an **R** data frame called `res`.
```
SELECT o.carrier, c.name,
SUM(1) AS numFlights,
SUM(IF(arr_delay > 15 AND arr_delay <= 119, 1, 0)) AS shortDelay,
SUM(
IF(arr_delay >= 120 OR cancelled = 1 OR diverted = 1, 1, 0)
) AS longDelay
FROM
flights AS o
LEFT JOIN
carriers c ON o.carrier = c.carrier
WHERE year = 2014
GROUP BY carrier
ORDER BY shortDelay DESC
```
Reproducing the figure requires a little bit of work.
We begin by pruning less informative labels from the carriers.
```
res <- res %>%
as_tibble() %>%
mutate(
name = str_remove_all(name, "Air(lines|ways| Lines)"),
name = str_remove_all(name, "(Inc\\.|Co\\.|Corporation)"),
name = str_remove_all(name, "\\(.*\\)"),
name = str_remove_all(name, " *$")
)
res %>%
pull(name)
```
```
[1] "Southwest" "ExpressJet" "SkyWest" "Delta"
[5] "American" "United" "Envoy Air" "US"
[9] "JetBlue" "Frontier" "Alaska" "AirTran"
[13] "Virgin America" "Hawaiian"
```
Next, it becomes clear that FiveThirtyEight has considered airline mergers and regional carriers that are not captured in our data.
Specifically: “We classify all remaining [*AirTran*](https://en.wikipedia.org/w/index.php?search=AirTran) flights as [*Southwest*](https://en.wikipedia.org/w/index.php?search=Southwest) flights.” [*Envoy Air*](https://en.wikipedia.org/w/index.php?search=Envoy%20Air) serves [*American Airlines*](https://en.wikipedia.org/w/index.php?search=American%20Airlines).
However, there is a bewildering network of alliances among the other regional carriers.
Greatly complicating matters, [*ExpressJet*](https://en.wikipedia.org/w/index.php?search=ExpressJet) and [*SkyWest*](https://en.wikipedia.org/w/index.php?search=SkyWest) serve multiple national carriers (primarily United, American, and Delta) under different flight numbers. FiveThirtyEight provides [a footnote](http://fivethirtyeight.com/features/how-we-found-the-fastest-flights/#fn-5) detailing how they have assigned flights carried by these regional carriers, but we have chosen to ignore that here and include ExpressJet and SkyWest as independent carriers.
Thus, the data that we show in Figure [15\.1](ch-sql.html#fig:ft8-plot) does not match the figure from FiveThirtyEight.
```
carriers_2014 <- res %>%
mutate(
groupName = case_when(
name %in% c("Envoy Air", "American Eagle") ~ "American",
name == "AirTran" ~ "Southwest",
TRUE ~ name
)
) %>%
group_by(groupName) %>%
summarize(
numFlights = sum(numFlights),
wShortDelay = sum(shortDelay),
wLongDelay = sum(longDelay)
) %>%
mutate(
wShortDelayPct = wShortDelay / numFlights,
wLongDelayPct = wLongDelay / numFlights,
delayed = wShortDelayPct + wLongDelayPct,
ontime = 1 - delayed
)
carriers_2014
```
```
# A tibble: 12 × 8
groupName numFlights wShortDelay wLongDelay wShortDelayPct wLongDelayPct
<chr> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Alaska 160257 18366 2613 0.115 0.0163
2 American 930398 191071 53641 0.205 0.0577
3 Delta 800375 105194 19818 0.131 0.0248
4 ExpressJet 686021 136207 59663 0.199 0.0870
5 Frontier 85474 18410 2959 0.215 0.0346
6 Hawaiian 74732 5098 514 0.0682 0.00688
7 JetBlue 249693 46618 12789 0.187 0.0512
8 SkyWest 613030 107192 33114 0.175 0.0540
9 Southwest 1254128 275155 44907 0.219 0.0358
10 United 493528 93721 20923 0.190 0.0424
11 US 414665 64505 12328 0.156 0.0297
12 Virgin Am… 57510 8356 1976 0.145 0.0344
# … with 2 more variables: delayed <dbl>, ontime <dbl>
```
After tidying this data frame using the `pivot_longer()` function (see Chapter [6](ch-dataII.html#ch:dataII)), we can draw the figure as a stacked bar chart.
```
carriers_tidy <- carriers_2014 %>%
select(groupName, wShortDelayPct, wLongDelayPct, delayed) %>%
pivot_longer(
-c(groupName, delayed),
names_to = "delay_type",
values_to = "pct"
)
delay_chart <- ggplot(
data = carriers_tidy,
aes(x = reorder(groupName, pct, max), y = pct)
) +
geom_col(aes(fill = delay_type)) +
scale_fill_manual(
name = NULL,
values = c("red", "gold"),
labels = c(
"Flights Delayed 120+ Minutes\ncancelled or Diverted",
"Flights Delayed 15-119 Minutes"
)
) +
scale_y_continuous(limits = c(0, 1)) +
coord_flip() +
labs(
title = "Southwest's Delays Are Short; United's Are Long",
subtitle = "As share of scheduled flights, 2014"
) +
ylab(NULL) +
xlab(NULL) +
ggthemes::theme_fivethirtyeight() +
theme(
plot.title = element_text(hjust = 1),
plot.subtitle = element_text(hjust = -0.2)
)
```
Getting the right text labels in the right places to mimic the display requires additional wrangling.
We show our best effort in Figure [15\.1](ch-sql.html#fig:ft8-plot).
In fact, by comparing the two figures, it becomes clear that many of the long delays suffered by United and American passengers occur on flights operated by ExpressJet and SkyWest.
```
delay_chart +
geom_text(
data = filter(carriers_tidy, delay_type == "wShortDelayPct"),
aes(label = paste0(round(pct * 100, 1), "% ")),
hjust = "right",
size = 2
) +
geom_text(
data = filter(carriers_tidy, delay_type == "wLongDelayPct"),
aes(y = delayed - pct, label = paste0(round(pct * 100, 1), "% ")),
hjust = "left",
nudge_y = 0.01,
size = 2
)
```
Figure 15\.1: Recreation of the FiveThirtyEight plot on flight delays.
The rest of the analysis is predicated on FiveThirtyEight’s definition of *target time*, which is different from the scheduled time in the database.
To compute it would take us far astray.
In [another graphic](https://espnfivethirtyeight.files.wordpress.com/2015/03/silver-feature-fastflight-7.png?w=575&h=752) in the article, FiveThirtyEight reports the slowest and fastest airports among the 30 largest airports.
Using arrival delay time instead of the FiveThirtyEight\-defined target time, we can produce a similar table by joining the results of two queries together.
```
SELECT
dest,
SUM(1) AS numFlights,
AVG(arr_delay) AS avgArrivalDelay
FROM
flights AS o
WHERE year = 2014
GROUP BY dest
ORDER BY numFlights DESC
LIMIT 0, 30
```
```
SELECT
origin,
SUM(1) AS numFlights,
AVG(arr_delay) AS avgDepartDelay
FROM
flights AS o
WHERE year = 2014
GROUP BY origin
ORDER BY numFlights DESC
LIMIT 0, 30
```
```
dests %>%
left_join(origins, by = c("dest" = "origin")) %>%
select(dest, avgDepartDelay, avgArrivalDelay) %>%
arrange(desc(avgDepartDelay)) %>%
as_tibble()
```
```
# A tibble: 30 × 3
dest avgDepartDelay avgArrivalDelay
<chr> <dbl> <dbl>
1 ORD 14.3 13.1
2 MDW 12.8 7.40
3 DEN 11.3 7.60
4 IAD 11.3 7.45
5 HOU 11.3 8.07
6 DFW 10.7 9.00
7 BWI 10.2 6.04
8 BNA 9.47 8.94
9 EWR 8.70 9.61
10 IAH 8.41 6.75
# … with 20 more rows
```
Finally, FiveThirtyEight produces [a simple table](https://espnfivethirtyeight.files.wordpress.com/2015/03/silver-feature-fastflight-81.png?w=575&h=440) ranking the airlines by the amount of time added versus *typical*—another of their creations—and target time.
What we can do instead is compute a similar table for the average arrival delay time by carrier, *after controlling for the routes*.
First, we compute the average arrival delay time for each route.
```
SELECT
origin, dest,
SUM(1) AS numFlights,
AVG(arr_delay) AS avgDelay
FROM
flights AS o
WHERE year = 2014
GROUP BY origin, dest
```
```
head(routes)
```
```
origin dest numFlights avgDelay
1 ABE ATL 829 5.43
2 ABE DTW 665 3.23
3 ABE ORD 144 19.51
4 ABI DFW 2832 10.70
5 ABQ ATL 893 1.92
6 ABQ BWI 559 6.60
```
Next, we perform the same calculation, but this time, we add `carrier` to the `GROUP BY` clause.
```
SELECT
origin, dest,
o.carrier, c.name,
SUM(1) AS numFlights,
AVG(arr_delay) AS avgDelay
FROM
flights AS o
LEFT JOIN
carriers c ON o.carrier = c.carrier
WHERE year = 2014
GROUP BY origin, dest, o.carrier
```
Next, we merge these two data sets, matching the routes traveled by each carrier with the route averages across all carriers.
```
routes_aug <- routes_carriers %>%
left_join(routes, by = c("origin" = "origin", "dest" = "dest")) %>%
as_tibble()
head(routes_aug)
```
```
# A tibble: 6 × 8
origin dest carrier name numFlights.x avgDelay.x numFlights.y avgDelay.y
<chr> <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 ABE ATL DL Delt… 186 1.67 829 5.43
2 ABE ATL EV Expr… 643 6.52 829 5.43
3 ABE DTW EV Expr… 665 3.23 665 3.23
4 ABE ORD EV Expr… 144 19.5 144 19.5
5 ABI DFW EV Expr… 219 7 2832 10.7
6 ABI DFW MQ Envo… 2613 11.0 2832 10.7
```
Note that `routes_aug` contains both the average arrival delay time for each carrier on each route that it flies (`avgDelay.x`) as well as the average arrival delay time for each route across all carriers (`avgDelay.y`).
We can then compute the difference between these times, and aggregate the weighted average for each carrier.
```
routes_aug %>%
group_by(carrier) %>%
# use str_remove_all() to remove parentheses
summarize(
carrier_name = str_remove_all(first(name), "\\(.*\\)"),
numRoutes = n(),
numFlights = sum(numFlights.x),
wAvgDelay = sum(
numFlights.x * (avgDelay.x - avgDelay.y),
na.rm = TRUE
) / sum(numFlights.x)
) %>%
arrange(wAvgDelay)
```
```
# A tibble: 14 × 5
carrier carrier_name numRoutes numFlights wAvgDelay
<chr> <chr> <int> <dbl> <dbl>
1 VX Virgin America 72 57510 -2.69
2 FL AirTran Airways Corporation 170 79495 -1.55
3 AS Alaska Airlines Inc. 242 160257 -1.44
4 US US Airways Inc. 378 414665 -1.31
5 DL Delta Air Lines Inc. 900 800375 -1.01
6 UA United Air Lines Inc. 621 493528 -0.982
7 MQ Envoy Air 442 392701 -0.455
8 AA American Airlines Inc. 390 537697 -0.0340
9 HA Hawaiian Airlines Inc. 56 74732 0.272
10 OO SkyWest Airlines Inc. 1250 613030 0.358
11 B6 JetBlue Airways 316 249693 0.767
12 EV ExpressJet Airlines Inc. 1534 686021 0.845
13 WN Southwest Airlines Co. 1284 1174633 1.13
14 F9 Frontier Airlines Inc. 326 85474 2.29
```
15\.6 SQL vs. **R**
-------------------
This chapter contains an introduction to the database querying language SQL.
However, along the way we have highlighted the similarities and differences between the way certain things are done in **R** versus how they are done in SQL.
The rapid development of **dplyr** has unified the most common data management operations shared by **R** and SQL, while shielding the user from concerns about where certain operations are being performed. Nevertheless, it is important for a practicing data scientist to understand the relative strengths and weaknesses of each of these tools.
While the process of slicing and dicing data can generally be performed in either **R** or SQL, we have already seen tasks for which one is more appropriate (e.g., faster, simpler, or more logically structured) than the other. **R** is a statistical computing environment that is developed for the purpose of data analysis.
If the data are small enough to be read into memory, then **R** puts a vast array of data analysis functions at your fingertips.
However, if the data are large enough to be problematic in memory, then SQL provides a robust, parallelizable, and scalable solution for data storage and retrieval.
The SQL query language, or the **dplyr** interface, enables one to efficiently perform basic data management operations on smaller pieces of the data.
However, there is an upfront cost to creating a well\-designed SQL database.
Moreover, the analytic capabilities of SQL are very limited, offering only a few simple statistical functions (e.g., `AVG()`, `STDDEV()`, etc.—although user\-defined extensions are possible).
Thus, while SQL is usually a more robust solution for *data management*, it is a poor substitute for **R** when it comes to *data analysis*.
15\.7 Further resources
-----------------------
The documentation for [MySQL](https://dev.mysql.com/doc/refman/5.6/en/index.html), [PostgreSQL](http://www.postgresql.org/docs/9.4/interactive/index.html), and [SQLite](https://www.sqlite.org/docs.html) are the authoritative sources for complete information on their respective syntaxes.
We have also found Kline et al. (2008\) to be a useful reference.
15\.8 Exercises
---------------
**Problem 1 (Easy)**: How many rows are available in the `Measurements` table of the Smith College Wideband Auditory Immittance database?
```
library(RMySQL)
con <- dbConnect(
MySQL(), host = "scidb.smith.edu",
user = "waiuser", password = "smith_waiDB",
dbname = "wai"
)
Measurements <- tbl(con, "Measurements")
```
**Problem 2 (Easy)**: Identify what years of data are available in the `flights` table of the `airlines` database.
```
library(tidyverse)
library(mdsr)
library(RMySQL)
con <- dbConnect_scidb("airlines")
```
**Problem 3 (Easy)**: Use the `dbConnect_scidb` function to connect to the `airlines` database to answer the following problem.
How many domestic flights flew into Dallas\-Fort Worth (DFW) on May 14, 2010?
**Problem 4 (Easy)**: Wideband acoustic immittance (WAI) is an area of biomedical research that strives to develop WAI measurements as noninvasive auditory diagnostic tools. WAI measurements are reported in many related formats, including absorbance, admittance, impedance, power reflectance, and pressure reflectance. More information can be found about this public facing WAI database at [http://www.science.smith.edu/wai\-database/home/about](http://www.science.smith.edu/wai-database/home/about).
```
library(RMySQL)
db <- dbConnect(
MySQL(),
user = "waiuser",
password = "smith_waiDB",
host = "scidb.smith.edu",
dbname = "wai"
)
```
1. How many female subjects are there in total across all studies?
2. Find the average absorbance for participants for each study, ordered by highest to lowest value.
3. Write a query to count all the measurements with a calculated absorbance of less than 0\.
**Problem 5 (Medium)**: Use the `dbConnect_scidb` function to connect to the `airlines` database to answer the following problem.
Of all the destinations from Chicago O’Hare (ORD), which were the most common in 2010?
**Problem 6 (Medium)**: Use the `dbConnect_scidb` function to connect to the `airlines` database to answer the following problem.
Which airport had the highest average arrival delay time in 2010?
**Problem 7 (Medium)**:
Use the `dbConnect_scidb` function to connect to the `airlines` database to answer the following problem.
How many domestic flights came into or flew out of Bradley Airport (BDL) in 2012?
**Problem 8 (Medium)**: Use the `dbConnect_scidb` function to connect to the `airlines` database to answer the following problem.
List the airline and flight number for all flights between LAX and JFK on September 26th, 1990\.
**Problem 9 (Medium)**: The following questions require use of the `Lahman` package and reference basic baseball terminology. (See <https://en.wikipedia.org/wiki/Baseball_statistics> for explanations of any acronyms.)
1. List the names of all batters who have at least 300 home runs (HR) and 300 stolen bases (SB) in their careers and rank them by career batting average (\\(H/AB\\)).
2. List the names of all pitchers who have at least 300 wins (W) and 3,000 strikeouts (SO) in their careers and rank them by career winning percentage (\\(W/(W\+L)\\)).
3. The attainment of either 500 home runs (HR) or 3,000 hits (H) in a career is considered to be among the greatest achievements to which a batter can aspire. These milestones are thought to guarantee induction into the Baseball Hall of Fame, and yet several players who have attained either milestone have not been inducted into the Hall of Fame. Identify them.
**Problem 10 (Medium)**: Use the `dbConnect_scidb` function to connect to the `airlines` database to answer the following problem.
Find all flights between `JFK` and `SFO` in 1994\. How many were canceled? What percentage of the total number of flights were canceled?
**Problem 11 (Hard)**: The following open\-ended question may require more than one query and a thoughtful response.
Based on data from 2012 only, and assuming that transportation to the airport is not an issue, would you rather fly out of JFK, LaGuardia (LGA), or Newark (EWR)? Why or why not?
Use the `dbConnect_scidb` function to connect to the `airlines` database.
15\.9 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/sql\-I.html\#sqlI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/sql-I.html#sqlI-online-exercises)
**Problem 1 (Easy)**: What years of data are available in the `mdsr` package `imdb` database `title` table?
(Hint: create a connection with a call to `dbConnect_scidb("imdb")`.)
15\.1 From **dplyr** to SQL
---------------------------
Recall the **airlines** data that we encountered in Chapter [9](ch-foundations.html#ch:foundations).
Using the **dplyr** verbs that we developed in Chapters [4](ch-dataI.html#ch:dataI) and [5](ch-join.html#ch:join), consider retrieving the top on\-time carriers with at least 100 flights arriving at JFK in September 2016\.
If the data are stored in data frames called `flights` and `carriers`, then we might write a **dplyr** pipeline like this:
```
q <- flights %>%
filter(
year == 2016 & month == 9,
dest == "JFK"
) %>%
inner_join(carriers, by = c("carrier" = "carrier")) %>%
group_by(name) %>%
summarize(
N = n(),
pct_ontime = sum(arr_delay <= 15) / n()
) %>%
filter(N >= 100) %>%
arrange(desc(pct_ontime))
head(q, 4)
```
```
# Source: lazy query [?? x 3]
# Database: mysql 5.7.33-log
# [@mdsr.cdc7tgkkqd0n.us-east-1.rds.amazonaws.com:/airlines]
# Ordered by: desc(pct_ontime)
name N pct_ontime
<chr> <dbl> <dbl>
1 Delta Air Lines Inc. 2396 0.869
2 Virgin America 347 0.833
3 JetBlue Airways 3463 0.817
4 American Airlines Inc. 1397 0.782
```
However, the `flights` data frame can become very large. Going back to 1987, there are more than 169 million individual flights—each comprising a different row in this table.
These data occupy nearly 20 gigabytes as CSVs, and thus are problematic to store in a personal computer’s memory.
Instead, we write these data to disk, and use a querying language to access only those rows that interest us.
In this case, we configured **dplyr** to access the `flights` data on a MySQL server.
The `dbConnect_scidb()` function from the **mdsr** package provides a connection to the `airlines` database that lives on a remote MySQL server and stores it as the object `db`.
The `tbl()` function from **dplyr** maps the `flights` table in that `airlines` database to an object in **R**, in this case also called `flights`.
The same is done for the `carriers` table.
```
library(tidyverse)
library(mdsr)
db <- dbConnect_scidb("airlines")
flights <- tbl(db, "flights")
carriers <- tbl(db, "carriers")
```
Note that while we can use the `flights` and `carriers` objects *as if* they were data frames, they are not, in fact, `data.frame`s. Rather, they have class `tbl_MySQLConnection`, and more generally, `tbl_sql`. A `tbl` is a special kind of object created by **dplyr** that behaves similarly to a `data.frame`.
```
class(flights)
```
```
[1] "tbl_MySQLConnection" "tbl_dbi" "tbl_sql"
[4] "tbl_lazy" "tbl"
```
Note also that in the output of our pipeline above, there is an explicit mention of a MySQL database. We set up this database ahead of time (see Chapter [16](ch-sql2.html#ch:sql2) for instructions on doing this), but **dplyr** allows us to interact with these `tbl`s as if they were `data.frame`s in our **R** session. This is a powerful and convenient illusion!
What is actually happening is that **dplyr** translates our pipeline into SQL.
We can see the translation by passing the pipeline through the `show_query()` function using our previously created query.
```
show_query(q)
```
```
<SQL>
SELECT *
FROM (SELECT `name`, COUNT(*) AS `N`, SUM(`arr_delay` <= 15.0) / COUNT(*) AS `pct_ontime`
FROM (SELECT `year`, `month`, `day`, `dep_time`, `sched_dep_time`, `dep_delay`, `arr_time`, `sched_arr_time`, `arr_delay`, `LHS`.`carrier` AS `carrier`, `tailnum`, `flight`, `origin`, `dest`, `air_time`, `distance`, `cancelled`, `diverted`, `hour`, `minute`, `time_hour`, `name`
FROM (SELECT *
FROM `flights`
WHERE ((`year` = 2016.0 AND `month` = 9.0) AND (`dest` = 'JFK'))) `LHS`
INNER JOIN `carriers` AS `RHS`
ON (`LHS`.`carrier` = `RHS`.`carrier`)
) `q01`
GROUP BY `name`) `q02`
WHERE (`N` >= 100.0)
ORDER BY `pct_ontime` DESC
```
Understanding this output is not important—the translator here is creating temporary tables with unintelligible names—but it should convince you that even though we wrote our pipeline in **R**, it was translated to SQL. **dplyr** will do this automatically any time you are working with objects of class `tbl_sql`. If we were to write an SQL query equivalent to our pipeline, we would write it in a more readable format:
```
SELECT
c.name,
SUM(1) AS N,
SUM(arr_delay <= 15) / SUM(1) AS pct_ontime
FROM flights AS f
JOIN carriers AS c ON f.carrier = c.carrier
WHERE year = 2016 AND month = 9
AND dest = 'JFK'
GROUP BY name
HAVING N >= 100
ORDER BY pct_ontime DESC
LIMIT 0,4;
```
How did **dplyr** perform this translation?[23](#fn23) As we learn SQL, the parallels will become clear (e.g., the **dplyr** verb `filter()` corresponds to the SQL `WHERE` clause). But what about the formulas we put in our `summarize()` command? Notice that the **R** command `n()` was converted into `COUNT(*)` in SQL. This is not magic either: the `translate_sql()` function provides translation between **R** commands and SQL commands. For example, it will translate basic mathematical expressions.
```
library(dbplyr)
translate_sql(mean(arr_delay, na.rm = TRUE))
```
```
<SQL> AVG(`arr_delay`) OVER ()
```
However, it only recognizes a small set of the most common operations—it cannot magically translate any **R** function into SQL. It can be easily tricked. For example, if we make a copy of the very common **R** function `paste0()` (which concatenates strings) called `my_paste()`, that function is not translated.
```
my_paste <- paste0
translate_sql(my_paste("this", "is", "a", "string"))
```
```
<SQL> my_paste('this', 'is', 'a', 'string')
```
This is a good thing—since it allows you to pass arbitrary SQL code through. But you have to know what you are doing. Since there is no SQL function called `my_paste()`, this will throw an error, even though it is a perfectly valid **R** expression.
```
carriers %>%
mutate(name_code = my_paste(name, "(", carrier, ")"))
```
```
Error in .local(conn, statement, ...): could not run statement: execute command denied to user 'mdsr_public'@'%' for routine 'airlines.my_paste'
```
```
class(carriers)
```
```
[1] "tbl_MySQLConnection" "tbl_dbi" "tbl_sql"
[4] "tbl_lazy" "tbl"
```
Because `carriers` is a `tbl_sql` and not a `data.frame`, the MySQL server is actually doing the computations here. The **dplyr** pipeline is simply translated into SQL and submitted to the server. To make this work, we need to replace `my_paste()` with its MySQL equivalent command, which is `CONCAT()`.
```
carriers %>%
mutate(name_code = CONCAT(name, "(", carrier, ")"))
```
```
# Source: lazy query [?? x 3]
# Database: mysql 5.7.33-log
# [@mdsr.cdc7tgkkqd0n.us-east-1.rds.amazonaws.com:/airlines]
carrier name name_code
<chr> <chr> <chr>
1 02Q Titan Airways Titan Airways(02Q)
2 04Q Tradewind Aviation Tradewind Aviation(04Q)
3 05Q Comlux Aviation, AG Comlux Aviation, AG(05Q)
4 06Q Master Top Linhas Aereas Ltd. Master Top Linhas Aereas Ltd.(06Q)
5 07Q Flair Airlines Ltd. Flair Airlines Ltd.(07Q)
6 09Q Swift Air, LLC Swift Air, LLC(09Q)
7 0BQ DCA DCA(0BQ)
8 0CQ ACM AIR CHARTER GmbH ACM AIR CHARTER GmbH(0CQ)
9 0GQ Inter Island Airways, d/b/a I… Inter Island Airways, d/b/a Inter…
10 0HQ Polar Airlines de Mexico d/b/… Polar Airlines de Mexico d/b/a No…
# … with more rows
```
The syntax of this looks a bit strange, since `CONCAT()` is not a valid **R** expression—but it works.
Another alternative is to pull the `carriers` data into **R** using the `collect()` function first, and then use `my_paste()` as before.[24](#fn24) The `collect()` function breaks the connection to the MySQL server and returns a `data.frame` (which is also a `tbl_df`).
```
carriers %>%
collect() %>%
mutate(name_code = my_paste(name, "(", carrier, ")"))
```
```
# A tibble: 1,610 × 3
carrier name name_code
<chr> <chr> <chr>
1 02Q Titan Airways Titan Airways(02Q)
2 04Q Tradewind Aviation Tradewind Aviation(04Q)
3 05Q Comlux Aviation, AG Comlux Aviation, AG(05Q)
4 06Q Master Top Linhas Aereas Ltd. Master Top Linhas Aereas Ltd.(06Q)
5 07Q Flair Airlines Ltd. Flair Airlines Ltd.(07Q)
6 09Q Swift Air, LLC Swift Air, LLC(09Q)
7 0BQ DCA DCA(0BQ)
8 0CQ ACM AIR CHARTER GmbH ACM AIR CHARTER GmbH(0CQ)
9 0GQ Inter Island Airways, d/b/a I… Inter Island Airways, d/b/a Inter…
10 0HQ Polar Airlines de Mexico d/b/… Polar Airlines de Mexico d/b/a No…
# … with 1,600 more rows
```
This example illustrates that when using **dplyr** with a `tbl_sql` backend, one must be careful to use expressions that SQL can understand. This is just one more reason why it is important to know SQL on its own and not rely entirely on the **dplyr** front\-end (as wonderful as it is).
For querying a database, the choice of whether to use **dplyr** or SQL is largely a question of convenience.
If you want to work with the result of your query in **R**, then use **dplyr**.
If, on the other hand, you are pulling data into a [*web application*](https://en.wikipedia.org/w/index.php?search=web%20application), you likely have no alternative other than writing the SQL query yourself. **dplyr** is just one SQL client that only works in **R**, but there are SQL servers all over the world, in countless environments.
Furthermore, as we will see in Chapter [21](ch-big.html#ch:big), even the big data tools that supersede SQL assume prior knowledge of SQL. Thus, in this chapter, we will learn how to write SQL queries.
15\.2 Flat\-file databases
--------------------------
It may be the case that all of the data that you have encountered thus far has been in an application\-specific format (e.g., **R**, [*Minitab*](https://en.wikipedia.org/w/index.php?search=Minitab), [*SPSS*](https://en.wikipedia.org/w/index.php?search=SPSS), [*Stata*](https://en.wikipedia.org/w/index.php?search=Stata)) or has taken the form of a single CSV (comma\-separated value) file. This file consists of nothing more than rows and columns of data, usually with a header row providing names for each of the columns. Such a file is known as a *flat file*,
since it consists of just one flat (i.e., two\-dimensional) table. A *spreadsheet* application—like [*Excel*](https://en.wikipedia.org/w/index.php?search=Excel) or [*Google Sheets*](https://en.wikipedia.org/w/index.php?search=Google%20Sheets)—allows a user to open a flat file, edit it, and also provides a slew of features for generating additional columns, formatting cells, etc. In **R**, the `read_csv()` command from the
**readr** package converts a flat file database into a `data.frame`.
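For example, a minimal sketch (the file name below is a placeholder, not a file distributed with this book):
```
library(readr)
# read a hypothetical flat file into R; read_csv() returns a tibble,
# which is also a data.frame
my_data <- read_csv("my_data.csv")
```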
These flat\-file databases are both extremely common and extremely useful, so why do we need anything else? One set of limitations comes from computer hardware. A personal computer has two main options for storing data:
* Memory (RAM): the amount of data that a computer can work on at once. Modern computers typically have a few gigabytes of memory. A computer can access data in memory extremely quickly (tens of GBs per second).
* Hard Disk: the amount of data that a computer can store permanently. Modern computers typically have hundreds or even thousands of gigabytes (terabytes) of storage space. However, accessing data on disk is orders of magnitude slower than accessing data in memory (hundreds of MBs per second).
Thus, there is a trade\-off between storage space (disks have more room) and speed (memory is much faster to access). It is important to recognize that these are *physical* limitations—if you only have 4 Gb of RAM on your computer, you simply can’t read more than 4 Gb of data into memory.[25](#fn25)
In general, all objects in your **R** workspace are stored in memory. Note that the `carriers` object that we created earlier occupies very little memory (since the data still lives on the SQL server), whereas `collect(carriers)` pulls the data into **R** and occupies much more memory.
You can find out how much memory an object occupies in **R** using the `object.size()` function and its **print** method.
```
carriers %>%
object.size() %>%
print(units = "Kb")
```
```
3.6 Kb
```
```
carriers %>%
collect() %>%
object.size() %>%
print(units = "Kb")
```
```
234.8 Kb
```
For a typical **R** user, this means that it can be difficult or impossible to work with a data set stored as a `data.frame` that is larger than a few Gb. The following bit of code will illustrate that a data set of random numbers with 100 columns and 1 million rows occupies more than three\-quarters of a Gb of memory on this computer.
```
n <- 100 * 1e6
x <- matrix(runif(n), ncol = 100)
dim(x)
```
```
[1] 1000000 100
```
```
print(object.size(x), units = "Mb")
```
```
762.9 Mb
```
Thus, by the time a `data.frame` like this one reached 10 million rows, it would be problematic for most personal computers—probably making your machine sluggish and unresponsive—and it could never reach 100 million rows. But Google processes over 3\.5 *billion* search queries per day! We know that they get stored somewhere—where do they all go?
To work effectively with larger data, we need a system that stores *all* of the data on disk, but allows us to access a portion of the data in memory easily. A [*relational database*](https://en.wikipedia.org/w/index.php?search=relational%20database)—which stores data in a collection of linkable tables—provides a powerful solution to this problem.
While more sophisticated approaches are available to address big data challenges, databases are a venerable solution for medium data.
15\.3 The SQL universe
----------------------
SQL (Structured Query Language) is a programming language for *relational database management systems*. Originally developed in the 1970s, it is a mature, powerful, and widely used storage and retrieval solution for data of many sizes. [*Google*](https://en.wikipedia.org/w/index.php?search=Google), [*Facebook*](https://en.wikipedia.org/w/index.php?search=Facebook), [*Twitter*](https://en.wikipedia.org/w/index.php?search=Twitter), [*Reddit*](https://en.wikipedia.org/w/index.php?search=Reddit), [*LinkedIn*](https://en.wikipedia.org/w/index.php?search=LinkedIn), [*Instagram*](https://en.wikipedia.org/w/index.php?search=Instagram), and countless other companies all access large datastores using SQL.
Relational database management systems (RDBMS) are very efficient for data that is naturally broken into a series of *tables* that are linked together by *keys*. A table is a two\-dimensional array of data that has *records* (rows) and *fields* (columns). It is very much like a `data.frame` in **R**, but there are some important differences that make SQL more efficient under certain conditions.
The theoretical foundation for SQL is based on *relational algebra* and *tuple relational calculus*. These ideas were developed by mathematicians and computer scientists, and while they are not required knowledge for our purposes, they help to solidify SQL’s standing as a data storage and retrieval system.
SQL has been an [American National Standards Institute](https://en.wikipedia.org/wiki/American_National_Standards_Institute) (ANSI) standard since 1986, but that standard is only loosely followed by its implementing developers. Unfortunately, this means that there are many different dialects of SQL, and translating between them is not always trivial. However, the broad strokes of the SQL language are common to all, and by learning one dialect, you will be able to easily understand any other (Kline et al. 2008\).
Major implementations of SQL include:
* [*Oracle*](https://en.wikipedia.org/w/index.php?search=Oracle): corporation that claims \#1 market share by revenue—now owns MySQL.
* [*Microsoft SQL Server*](https://en.wikipedia.org/w/index.php?search=Microsoft%20SQL%20Server): another widespread corporate SQL product.
* [*SQLite*](https://en.wikipedia.org/w/index.php?search=SQLite): a lightweight, open\-source version of SQL that has recently become the most widely used implementation of SQL, in part due to its being embedded in [*Android*](https://en.wikipedia.org/w/index.php?search=Android), the world’s most popular mobile operating system. SQLite is an excellent choice for relatively simple applications—like storing data associated with a particular mobile app—but has neither the features nor the scalability for persistent, multi\-user, multi\-purpose applications.
* [*MySQL*](https://en.wikipedia.org/w/index.php?search=MySQL): the most popular client\-server RDBMS. It is open source, but is now owned by Oracle Corporation, and that has caused some tension in the open\-source community. One of the original developers of MySQL, Monty Widenius, now maintains [*MariaDB*](https://en.wikipedia.org/w/index.php?search=MariaDB) as a community fork. MySQL is used by Facebook, Google, LinkedIn, and Twitter.
* [*PostgreSQL*](https://en.wikipedia.org/w/index.php?search=PostgreSQL): a feature\-rich, standards\-compliant, open\-source implementation growing in popularity. PostgreSQL hews closer to the ANSI standard than MySQL, supports more functions and data types, and provides powerful procedural languages that can extend its base functionality. It is used by Reddit and Instagram, among others.
* [*MonetDB*](https://en.wikipedia.org/w/index.php?search=MonetDB) and [*MonetDBLite*](https://en.wikipedia.org/w/index.php?search=MonetDBLite): open\-source implementations that are column\-based, rather than the traditional row\-based systems. Column\-based RDBMSs scale better for big data. **MonetDBLite** is an **R** package that provides a local experience similar to SQLite.
* [*Vertica*](https://en.wikipedia.org/w/index.php?search=Vertica): a commercial column\-based implementation founded by Postgres originator [Michael Stonebraker](https://en.wikipedia.org/w/index.php?search=Michael%20Stonebraker) and now owned by [*Hewlett\-Packard*](https://en.wikipedia.org/w/index.php?search=Hewlett-Packard).
We will focus on MySQL, but most aspects are similar in PostgreSQL or SQLite (see Appendix [F](ch-db-setup.html#ch:db-setup) for setup instructions).
15\.4 The SQL data manipulation language
----------------------------------------
MySQL is based on a client\-server model. This means that there is a *database server* that stores the data and executes queries. It can be located on the user’s local computer or on a remote server. We will be connecting to a server hosted by [*Amazon Web Services*](https://en.wikipedia.org/w/index.php?search=Amazon%20Web%20Services). To retrieve data from the server, one can connect to it via any number of client programs. One can of course use the command\-line `mysql` program, or the official [*GUI*](https://en.wikipedia.org/w/index.php?search=GUI) application: [*MySQL Workbench*](https://en.wikipedia.org/w/index.php?search=MySQL%20Workbench). While we encourage the reader to explore both options—we most often use the Workbench for MySQL development—the output you will see in this presentation comes directly from the MySQL command line client.
Even though `dplyr` enables one to execute most queries using **R** syntax, and without even worrying so much *where* the data are stored, learning SQL is valuable in its own right due to its ubiquity.
If you are just learning SQL for the first time, use the command\-line client and/or one of the GUI applications. The former provides the most direct feedback, and the latter will provide lots of helpful information.
Information about setting up a MySQL database can be found in Appendix [F](ch-db-setup.html#ch:db-setup): we assume that this has been done on a local or remote machine.
In what follows, you will see SQL commands and their results in tables.
To run these on your computer, please see Section [F.4](ch-db-setup.html#sec:connect-sql) for information about connecting to a MySQL server.
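If you prefer to experiment from within **R**, one possible way to open such a connection is sketched below. This is only a sketch: the host name and credentials shown are placeholders rather than the actual server details, which are given in Appendix [F](ch-db-setup.html#ch:db-setup).
```
library(DBI)
# connect to a MySQL/MariaDB server that hosts the airlines database
# (all connection details below are placeholders)
con <- dbConnect(
  RMariaDB::MariaDB(),
  dbname = "airlines",
  host = "my.database.host",
  user = "my_username",
  password = "my_password"
)
# SQL statements like the ones shown below can then be submitted from R
dbGetQuery(con, "SHOW TABLES;")
```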
As noted in Chapter [1](ch-prologue.html#ch:prologue), the [**airlines** package](https://github.com/beanumber/airlines) streamlines the construction of an SQL database containing over 169 million flights.
These data come directly from the [*United States Bureau of Transportation Statistics*](https://en.wikipedia.org/w/index.php?search=United%20States%20Bureau%20of%20Transportation%20Statistics).
We access a remote SQL database that we have already set up and populated using the **airlines** package.
Note that this database is relational and consists of multiple tables.
```
SHOW TABLES;
```
| Tables\_in\_airlines |
| --- |
| airports |
| carriers |
| flights |
| planes |
Note that every SQL statement must end with a semicolon. To see what columns are present in the `airports` table, we ask for a description.
```
DESCRIBE airports;
```
| Field | Type | Null | Key | Default | Extra |
| --- | --- | --- | --- | --- | --- |
| faa | varchar(3\) | NO | PRI | | |
| name | varchar(255\) | YES | | NA | |
| lat | decimal(10,7\) | YES | | NA | |
| lon | decimal(10,7\) | YES | | NA | |
| alt | int(11\) | YES | | NA | |
| tz | smallint(4\) | YES | | NA | |
| dst | char(1\) | YES | | NA | |
| city | varchar(255\) | YES | | NA | |
| country | varchar(255\) | YES | | NA | |
This command tells us the names of the fields (or variables) in the table, as well as their data types, and what kinds of keys might be present (we will learn more about keys in Chapter [16](ch-sql2.html#ch:sql2)).
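A rough counterpart from **R** is `DBI::dbListFields()`, although it returns only the column names, without their types or keys. A minimal sketch, assuming a connection object `con` like the one sketched above:
```
library(DBI)
# list the column names of the airports table (names only; no types or keys)
dbListFields(con, "airports")
```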
Next, we want to build a *query*. Queries in SQL start with the `SELECT` keyword and consist of several clauses, which have to be written in this order:
* `SELECT` allows you to list the columns, or functions operating on columns, that you want to retrieve. This is an analogous operation to the `select()` verb in **dplyr**, potentially combined with `mutate()`.
* `FROM` specifies the table where the data are.
* `JOIN` allows you to stitch together two or more tables using a key. This is analogous to the `inner_join()` and `left_join()` commands in **dplyr**.
* `WHERE` allows you to filter the records according to some criteria. This is an analogous operation to the `filter()` verb in **dplyr**.
* `GROUP BY` allows you to aggregate the records according to some shared value. This is an analogous operation to the `group_by()` verb in **dplyr**.
* `HAVING` is like a `WHERE` clause that operates on the result set—not the records themselves. This is analogous to applying a second `filter()` command in **dplyr**, after the rows have already been aggregated.
* `ORDER BY` is exactly what it sounds like—it specifies a condition for ordering the rows of the result set. This is analogous to the `arrange()` verb in **dplyr**.
* `LIMIT` restricts the number of rows in the output. This is similar to the **R** commands `head()` and `slice()`.
Only the `SELECT` and `FROM` clauses are required. Thus, the simplest query one can write is:
```
SELECT * FROM flights;
```
**DO NOT EXECUTE THIS QUERY!** This will cause all 169 million records to be dumped! This will not only crash your machine, but also tie up the server for everyone else!
A safe query is:
```
SELECT * FROM flights LIMIT 0,10;
```
We can specify a subset of variables to be displayed.
Table [15\.1](ch-sql.html#tab:select-limit2) displays the results, limited to the specified fields and the first 10 records.
```
SELECT year, month, day, dep_time, sched_dep_time, dep_delay, origin
FROM flights
LIMIT 0, 10;
```
Table 15\.1: Specifying a subset of variables.
| year | month | day | dep\_time | sched\_dep\_time | dep\_delay | origin |
| --- | --- | --- | --- | --- | --- | --- |
| 2010 | 10 | 1 | 1 | 2100 | 181 | EWR |
| 2010 | 10 | 1 | 1 | 1920 | 281 | FLL |
| 2010 | 10 | 1 | 3 | 2355 | 8 | JFK |
| 2010 | 10 | 1 | 5 | 2200 | 125 | IAD |
| 2010 | 10 | 1 | 7 | 2245 | 82 | LAX |
| 2010 | 10 | 1 | 7 | 10 | \-3 | LAX |
| 2010 | 10 | 1 | 7 | 2150 | 137 | ATL |
| 2010 | 10 | 1 | 8 | 15 | \-7 | SMF |
| 2010 | 10 | 1 | 8 | 10 | \-2 | LAS |
| 2010 | 10 | 1 | 10 | 2225 | 105 | SJC |
The astute reader will recognize the similarities between the five idioms for single\-table analysis and the join operations discussed in Chapters [4](ch-dataI.html#ch:dataI) and [5](ch-join.html#ch:join), on the one hand, and the SQL syntax on the other.
This is not a coincidence!
On the contrary, **dplyr** represents a concerted effort to bring the almost natural\-language syntax of SQL to **R**.
For this book, we have presented the **R** syntax first, since much of our content is predicated on the basic data wrangling skills developed previously.
But historically, SQL predates **dplyr** by decades.
In Table [15\.2](ch-sql.html#tab:sql-r), we illustrate the functional equivalence of SQL and **dplyr** commands.
(ref:filter\-sql) `SELECT col1, col2 FROM a WHERE col3 = 'x'`
(ref:filter\-r) `a %>% filter(col3 == 'x') %>% select(col1, col2)`
(ref:aggregate\-sql) `SELECT id, SUM(col1) FROM a GROUP BY id`
(ref:aggregate\-r) `a %>% group_by(id) %>% summarize(sum(col1))`
(ref:join\-sql) `SELECT * FROM a JOIN b ON a.id = b.id`
(ref:join\-r) `a %>% inner_join(b, by = c('id' = 'id'))`
Table 15\.2: Equivalent commands in SQL and R, where \\(a\\) and \\(b\\) are SQL tables and R dataframes.
| Concept | SQL | R |
| --- | --- | --- |
| Filter by rows \& columns | (ref:filter\-sql) | (ref:filter\-r) |
| Aggregate by rows | (ref:aggregate\-sql) | (ref:aggregate\-r) |
| Combine two tables | (ref:join\-sql) | (ref:join\-r) |
### 15\.4\.1 `SELECT...FROM`
As noted above, every SQL `SELECT` query must contain `SELECT` and `FROM`.
The analyst may specify columns to be retrieved. We saw above that the `airports` table contains nine columns. If we only wanted to retrieve the FAA code (`faa`) and `name` of each airport, we could write the following query.
```
SELECT faa, name FROM airports;
```
| faa | name |
| --- | --- |
| 04G | Lansdowne Airport |
| 06A | Moton Field Municipal Airport |
| 06C | Schaumburg Regional |
In addition to columns that are present in the database, one can retrieve columns that are functions of other columns.
For example, if we wanted to return the geographic coordinates of each airport as an \\((x,y)\\) pair, we could combine those fields.
```
SELECT
name,
CONCAT('(', lat, ', ', lon, ')')
FROM airports
LIMIT 0, 6;
```
| name | CONCAT('(', lat, ', ', lon, ')') |
| --- | --- |
| Lansdowne Airport | (41\.1304722, \-80\.6195833\) |
| Moton Field Municipal Airport | (32\.4605722, \-85\.6800278\) |
| Schaumburg Regional | (41\.9893408, \-88\.1012428\) |
| Randall Airport | (41\.4319120, \-74\.3915611\) |
| Jekyll Island Airport | (31\.0744722, \-81\.4277778\) |
| Elizabethton Municipal Airport | (36\.3712222, \-82\.1734167\) |
Note that the column header for the derived column above is ungainly, since it consists of the entire formula that we used to construct it!
This is difficult to read, and would be cumbersome to work with.
An easy fix is to give this derived column an *alias*. We can do this using the keyword `AS`.
```
SELECT
name,
CONCAT('(', lat, ', ', lon, ')') AS coords
FROM airports
LIMIT 0, 6;
```
| name | coords |
| --- | --- |
| Lansdowne Airport | (41\.1304722, \-80\.6195833\) |
| Moton Field Municipal Airport | (32\.4605722, \-85\.6800278\) |
| Schaumburg Regional | (41\.9893408, \-88\.1012428\) |
| Randall Airport | (41\.4319120, \-74\.3915611\) |
| Jekyll Island Airport | (31\.0744722, \-81\.4277778\) |
| Elizabethton Municipal Airport | (36\.3712222, \-82\.1734167\) |
We can also use `AS` to refer to a column in the table by a different name in the result set.
```
SELECT
name AS airport_name,
CONCAT('(', lat, ', ', lon, ')') AS coords
FROM airports
LIMIT 0, 6;
```
| airport\_name | coords |
| --- | --- |
| Lansdowne Airport | (41\.1304722, \-80\.6195833\) |
| Moton Field Municipal Airport | (32\.4605722, \-85\.6800278\) |
| Schaumburg Regional | (41\.9893408, \-88\.1012428\) |
| Randall Airport | (41\.4319120, \-74\.3915611\) |
| Jekyll Island Airport | (31\.0744722, \-81\.4277778\) |
| Elizabethton Municipal Airport | (36\.3712222, \-82\.1734167\) |
This brings an important distinction to the fore: In SQL, it is crucial to distinguish between clauses that operate *on the rows of the original table* versus those that operate *on the rows of the result set*.
Here, `name`, `lat`, and `lon` are columns in the original table—they are written to the disk on the SQL server.
On the other hand, `airport_name` and `coords` exist only in the result set—which is passed from the server to the client and is not written to the disk.
The preceding examples show the SQL equivalents of the **dplyr** commands `select()`, `mutate()`, and `rename()`.
### 15\.4\.2 `WHERE`
The `WHERE` clause is analogous to the `filter()` command in **dplyr**—it allows you to restrict the set of rows that are retrieved to only those rows that match a certain condition.
Thus, while there are several million rows in the `flights` table in each year—each corresponding to a single flight—there were only a few dozen flights that left [*Bradley International Airport*](https://en.wikipedia.org/w/index.php?search=Bradley%20International%20Airport) on June 26th, 2013\.
```
SELECT
year, month, day, origin, dest,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
| year | month | day | origin | dest | flight | carrier |
| --- | --- | --- | --- | --- | --- | --- |
| 2013 | 6 | 26 | BDL | EWR | 4714 | EV |
| 2013 | 6 | 26 | BDL | MIA | 2015 | AA |
| 2013 | 6 | 26 | BDL | DTW | 1644 | DL |
| 2013 | 6 | 26 | BDL | BWI | 2584 | WN |
| 2013 | 6 | 26 | BDL | ATL | 1065 | DL |
| 2013 | 6 | 26 | BDL | DCA | 1077 | US |
It would be convenient to search for flights in a date range.
Unfortunately, there is no date field in this table—but rather separate columns for the `year`, `month`, and `day`.
Nevertheless, we can tell SQL to interpret these columns as a date, using the `STR_TO_DATE()` function.[26](#fn26) Unlike in **R** code, function names in SQL code are customarily capitalized.
Dates and times can be challenging to wrangle.
To learn more about these date tokens, see the MySQL [documentation](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_str-to-date) for `STR_TO_DATE()`.
```
SELECT
STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d') AS theDate,
origin,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
| theDate | origin | flight | carrier |
| --- | --- | --- | --- |
| 2013\-06\-26 | BDL | 4714 | EV |
| 2013\-06\-26 | BDL | 2015 | AA |
| 2013\-06\-26 | BDL | 1644 | DL |
| 2013\-06\-26 | BDL | 2584 | WN |
| 2013\-06\-26 | BDL | 1065 | DL |
| 2013\-06\-26 | BDL | 1077 | US |
Note that here we have used a `WHERE` clause on columns that are not present in the result set. We can do this because `WHERE` operates only on the rows of the original table.
Conversely, if we were to try to use a `WHERE` clause on `theDate`, it would not work because (as the error suggests) `theDate` is not the name of a column in the `flights` table.
```
SELECT
STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d') AS theDate,
origin, flight, carrier
FROM flights
WHERE theDate = '2013-06-26'
AND origin = 'BDL'
LIMIT 0, 6;
```
```
ERROR 1054 (42S22): Unknown column 'theDate' in 'where clause'
```
A workaround is to copy and paste the definition of `theDate` into the `WHERE` clause, since `WHERE` *can* operate on functions of columns in the original table (results not shown).
```
SELECT
STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d') AS theDate,
origin, flight, carrier
FROM flights
WHERE STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d') =
'2013-06-26'
AND origin = 'BDL'
LIMIT 0, 6;
```
This query will work, but here we have stumbled onto another wrinkle that exposes subtleties in how SQL executes queries.
The previous query was able to make use of indices defined on the `year`, `month`, and `day` columns.
However, the latter query is not able to make use of these indices because it is trying to filter on functions of a combination of those columns.
This makes the latter query very slow.
We will return to a fuller discussion of indices in Section [16\.1](ch-sql2.html#sec:indices).
Finally, we can use the `BETWEEN` syntax to filter through a range of dates.
The `DISTINCT` keyword limits the result set to one row per unique value of `theDate`.
```
SELECT
DISTINCT STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d')
AS theDate
FROM flights
WHERE year = 2013 AND month = 6 AND day BETWEEN 26 and 30
AND origin = 'BDL'
LIMIT 0, 6;
```
| theDate |
| --- |
| 2013\-06\-26 |
| 2013\-06\-27 |
| 2013\-06\-28 |
| 2013\-06\-29 |
| 2013\-06\-30 |
Similarly, we can use the `IN` syntax to search for items in a specified list.
Note that flights on the 27th, 28th, and 29th of June are retrieved in the query using `BETWEEN` but not in the query using `IN`.
```
SELECT
DISTINCT STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d')
AS theDate
FROM flights
WHERE year = 2013 AND month = 6 AND day IN (26, 30)
AND origin = 'BDL'
LIMIT 0, 6;
```
| theDate |
| --- |
| 2013\-06\-26 |
| 2013\-06\-30 |
SQL also supports `OR` clauses in addition to `AND` clauses, but one must always be careful with parentheses when using `OR`. Note the difference in the numbers of rows returned by the following two queries (557,874 vs. 2,542\). The `COUNT` function simply counts the number of rows. The criteria in the `WHERE` clause are not evaluated left to right: `AND` binds more tightly than `OR`, so the `AND`s are evaluated first. This means that the first query below is interpreted as `(year = 2013 AND month = 6) OR (day = 26 AND origin = 'BDL')`, and thus returns every flight in June 2013 (regardless of origin) along with every flight leaving Bradley on the 26th day of any month in any year.
```
/* returns 557,874 records */
SELECT
COUNT(*) AS N
FROM flights
WHERE year = 2013 AND month = 6 OR day = 26
AND origin = 'BDL';
```
```
/* returns 2,542 records */
SELECT
COUNT(*) AS N
FROM flights
WHERE year = 2013 AND (month = 6 OR day = 26)
AND origin = 'BDL';
```
### 15\.4\.3 `GROUP BY`
The `GROUP BY` clause allows one to *aggregate* multiple rows according to some criteria.
The challenge when using `GROUP BY` is specifying *how* multiple rows of data should be reduced into a single value. [Aggregate functions](https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html) (e.g., `COUNT()`, `SUM()`, `MAX()`, and `AVG()`) are necessary.
We know that there were 65 flights that left Bradley Airport on June 26th, 2013, but how many belonged to each airline carrier?
To get this information we need to aggregate the individual flights, based on who the carrier was.
```
SELECT
carrier,
COUNT(*) AS numFlights,
SUM(1) AS numFlightsAlso
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
GROUP BY carrier;
```
| carrier | numFlights | numFlightsAlso |
| --- | --- | --- |
| 9E | 5 | 5 |
| AA | 4 | 4 |
| B6 | 5 | 5 |
| DL | 11 | 11 |
| EV | 5 | 5 |
| MQ | 5 | 5 |
| UA | 1 | 1 |
| US | 7 | 7 |
| WN | 19 | 19 |
| YV | 3 | 3 |
For each of these airlines, which flight left the earliest in the morning?
```
SELECT
carrier,
COUNT(*) AS numFlights,
MIN(dep_time)
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
GROUP BY carrier;
```
| carrier | numFlights | MIN(dep\_time) |
| --- | --- | --- |
| 9E | 5 | 0 |
| AA | 4 | 559 |
| B6 | 5 | 719 |
| DL | 11 | 559 |
| EV | 5 | 555 |
| MQ | 5 | 0 |
| UA | 1 | 0 |
| US | 7 | 618 |
| WN | 19 | 601 |
| YV | 3 | 0 |
This is a bit tricky to figure out because the `dep_time` variable is stored as an integer, but would be better represented as a `time` data type.
If it is a three\-digit integer, then the first digit is the hour, but if it is a four\-digit integer, then the first two digits are the hour.
In either case, the last two digits are the minutes, and there are no seconds recorded.
The `MAKETIME()` function combined with the `IF(condition, value if true, value if false)` statement can help us with this.
```
SELECT
carrier,
COUNT(*) AS numFlights,
MAKETIME(
IF(LENGTH(MIN(dep_time)) = 3,
LEFT(MIN(dep_time), 1),
LEFT(MIN(dep_time), 2)
),
RIGHT(MIN(dep_time), 2),
0
) AS firstDepartureTime
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
GROUP BY carrier
LIMIT 0, 6;
```
| carrier | numFlights | firstDepartureTime |
| --- | --- | --- |
| 9E | 5 | 00:00:00 |
| AA | 4 | 05:59:00 |
| B6 | 5 | 07:19:00 |
| DL | 11 | 05:59:00 |
| EV | 5 | 05:55:00 |
| MQ | 5 | 00:00:00 |
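For comparison, the same hour\-and\-minute arithmetic can be carried out on the **R** side after collecting the result set. A minimal sketch, assuming `first_dep` is a (hypothetical) collected data frame with an integer `dep_time` column:
```
library(dplyr)
# a sketch, assuming `first_dep` is a collected data frame with an integer
# dep_time column in which, e.g., 559 means 05:59
first_dep %>%
  mutate(
    hour = dep_time %/% 100,    # leading digit(s) give the hour
    minute = dep_time %% 100,   # trailing two digits give the minutes
    firstDepartureTime = sprintf("%02d:%02d:00", hour, minute)
  )
```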
We can also group by more than one column, but need to be careful to specify that we apply an aggregate function to each column that we are *not* grouping by.
In this case, every time we access `dep_time`, we apply the `MIN()` function, since there may be many different values of `dep_time` associated with each unique combination of `carrier` and `dest`.
Applying the `MIN()` function returns the smallest such value unambiguously.
```
SELECT
carrier, dest,
COUNT(*) AS numFlights,
MAKETIME(
IF(LENGTH(MIN(dep_time)) = 3,
LEFT(MIN(dep_time), 1),
LEFT(MIN(dep_time), 2)
),
RIGHT(MIN(dep_time), 2),
0
) AS firstDepartureTime
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
GROUP BY carrier, dest
LIMIT 0, 6;
```
| carrier | dest | numFlights | firstDepartureTime |
| --- | --- | --- | --- |
| 9E | CVG | 2 | 00:00:00 |
| 9E | DTW | 1 | 18:20:00 |
| 9E | MSP | 1 | 11:25:00 |
| 9E | RDU | 1 | 09:38:00 |
| AA | DFW | 3 | 07:04:00 |
| AA | MIA | 1 | 05:59:00 |
### 15\.4\.4 `ORDER BY`
The use of aggregate functions allows us to answer some very basic exploratory questions.
Combining this with an `ORDER BY` clause will bring the most interesting results to the top.
For example, which destinations are most common from Bradley in 2013?
```
SELECT
dest, SUM(1) AS numFlights
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
ORDER BY numFlights DESC
LIMIT 0, 6;
```
| dest | numFlights |
| --- | --- |
| ORD | 2657 |
| BWI | 2613 |
| ATL | 2277 |
| CLT | 1842 |
| MCO | 1789 |
| DTW | 1523 |
Note that since the `ORDER BY` clause cannot be executed until all of the data are retrieved, it operates on the result set, and not the rows of the original data.
Thus, derived columns *can* be referenced in the `ORDER BY` clause.
Which of those destinations had the lowest average arrival delay time?
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
| dest | numFlights | avg\_arr\_delay |
| --- | --- | --- |
| CLE | 57 | \-13\.07 |
| LAX | 127 | \-10\.31 |
| CVG | 708 | \-7\.37 |
| MSP | 981 | \-3\.66 |
| MIA | 404 | \-3\.27 |
| DCA | 204 | \-2\.90 |
[*Cleveland Hopkins International Airport*](https://en.wikipedia.org/w/index.php?search=Cleveland%20Hopkins%20International%20Airport) (CLE) has the smallest average arrival delay time.
### 15\.4\.5 `HAVING`
Although flights to Cleveland had the lowest average arrival delay—more than 13 minutes ahead of schedule—there were only 57 flights that went from Bradley to Cleveland in all of 2013\.
It probably makes more sense to consider only those destinations that had, say, at least two flights per day.
We can filter our result set using a `HAVING` clause.
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
HAVING numFlights > 365 * 2
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
| dest | numFlights | avg\_arr\_delay |
| --- | --- | --- |
| MSP | 981 | \-3\.664 |
| DTW | 1523 | \-2\.148 |
| CLT | 1842 | \-0\.120 |
| FLL | 1011 | 0\.277 |
| DFW | 1062 | 0\.750 |
| ATL | 2277 | 4\.470 |
We can see now that among the airports that are common destinations from Bradley, Minneapolis\-St. Paul has the lowest average arrival delay time, at nearly 4 minutes ahead of schedule.
Note that MySQL and SQLite support the use of derived column aliases in `HAVING` clauses, but PostgreSQL does not.
It is important to understand that the `HAVING` clause operates on the result set.
While `WHERE` and `HAVING` are similar in spirit and syntax (and indeed, in **dplyr** they are both masked by the `filter()` function), they are different, because `WHERE` operates on the original data in the table and `HAVING` operates on the result set. Moving the `HAVING` condition to the `WHERE` clause will not work.
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
AND numFlights > 365 * 2
GROUP BY dest
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
```
ERROR 1054 (42S22): Unknown column 'numFlights' in 'where clause'
```
On the other hand, moving the `WHERE` conditions to the `HAVING` clause will work, but could result in a major loss of efficiency.
The following query will return the same result as the one we considered previously.
```
SELECT
origin, dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
GROUP BY origin, dest
HAVING numFlights > 365 * 2
AND origin = 'BDL'
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
Moving the `origin = 'BDL'` condition to the `HAVING` clause means that *all* airport destinations had to be considered.
With this condition in the `WHERE` clause, the server can quickly identify only those flights that left Bradley, perform the aggregation, and then filter this relatively small result set for those entries with a sufficient number of flights.
Conversely, with this condition in the `HAVING` clause, the server is forced to consider *all* 3 million flights from 2013, perform the aggregation for all pairs of airports, and then filter this much larger result set for those entries with a sufficient number of flights from Bradley.
The filtering of the result set is not appreciably slower, but the aggregation over 3 million rows (as opposed to a few thousand) is.
To maximize query efficiency, put conditions in a `WHERE` clause as opposed to a `HAVING` clause whenever possible.
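In **dplyr** terms, the same distinction shows up as the position of `filter()` relative to `summarize()`. Here is a minimal sketch, assuming `flights` is a `tbl_sql` reference to the flights table, like the `carriers` object used earlier (an assumption about the object name); the SQL that **dplyr** generates may differ in form from the hand\-written query above.
```
flights %>%
  filter(year == 2013, origin == "BDL") %>%    # becomes a WHERE clause
  group_by(dest) %>%
  summarize(
    numFlights = n(),
    avg_arr_delay = mean(arr_delay, na.rm = TRUE)
  ) %>%
  filter(numFlights > 365 * 2) %>%             # plays the role of HAVING
  arrange(avg_arr_delay)
```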
### 15\.4\.6 `LIMIT`
A `LIMIT` clause simply allows you to truncate the output to a specified number of rows.
This achieves an effect analogous to the **R** commands `head()` or `slice()`.
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
HAVING numFlights > 365*2
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
| dest | numFlights | avg\_arr\_delay |
| --- | --- | --- |
| MSP | 981 | \-3\.664 |
| DTW | 1523 | \-2\.148 |
| CLT | 1842 | \-0\.120 |
| FLL | 1011 | 0\.277 |
| DFW | 1062 | 0\.750 |
| ATL | 2277 | 4\.470 |
Note, however, that it is also possible to retrieve rows not at the beginning.
The first number in the `LIMIT` clause indicates the number of rows to skip, and the latter indicates the number of rows to retrieve.
Thus, this query will return the 4th–7th airports in the previous list.
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
HAVING numFlights > 365*2
ORDER BY avg_arr_delay ASC
LIMIT 3,4;
```
| dest | numFlights | avg\_arr\_delay |
| --- | --- | --- |
| FLL | 1011 | 0\.277 |
| DFW | 1062 | 0\.750 |
| ATL | 2277 | 4\.470 |
| BWI | 2613 | 5\.032 |
### 15\.4\.7 `JOIN`
In Chapter [5](ch-join.html#ch:join), we presented several **dplyr** join operators: `inner_join()` and `left_join()`.
Other functions (e.g., `semi_join()`) are also available.
As you might expect, these operations are fundamental to SQL—and moreover, the success of the RDBMS paradigm is predicated on the ability to efficiently join tables together.
Recall that SQL is a *relational* database management system—the relations between the tables allow you to write queries that efficiently tie together information from multiple sources.
The syntax for performing these operations in SQL requires the `JOIN` keyword.
In general, there are four pieces of information that you need to specify in order to join two tables:
* The name of the first table that you want to join
* (optional) The *type* of join that you want to use
* The name of the second table that you want to join
* The *condition(s)* under which you want the records in the first table to match the records in the second table
There are many possible permutations of how two tables can be joined, and in many cases, a single query may involve several or even dozens of tables.
In practice, the `JOIN` syntax varies among SQL implementations.
In MySQL, a full `OUTER JOIN` is not available, but the following join types are:
* `JOIN`: includes all of the rows that are present in *both* tables and match.
* `LEFT JOIN`: includes all of the rows that are present in the first table. Rows in the first table that have no match in the second are filled with `NULL`s.
* `RIGHT JOIN`: include all of the rows that are present in the second table. This is the opposite of a `LEFT JOIN`.
* `CROSS JOIN`: the Cartesian product of the two tables. Every row of the first table is paired with every row of the second, so no joining condition is needed.
Recall that in the `flights` table, the `origin` and `dest`ination of each flight are recorded.
```
SELECT
origin, dest,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
| origin | dest | flight | carrier |
| --- | --- | --- | --- |
| BDL | EWR | 4714 | EV |
| BDL | MIA | 2015 | AA |
| BDL | DTW | 1644 | DL |
| BDL | BWI | 2584 | WN |
| BDL | ATL | 1065 | DL |
| BDL | DCA | 1077 | US |
Note that the `flights` table contains only the three\-character FAA airport codes for both airports—not the full name of the airport.
These cryptic abbreviations are not easily understood by humans.
Which airport is `EWR`?
Wouldn’t it be more convenient to have the airport name in the table?
It would be more convenient, but it would also be significantly less efficient from a storage and retrieval point of view, as well as more problematic from a [*database integrity*](https://en.wikipedia.org/w/index.php?search=database%20integrity) point of view.
The solution is to store information *about airports* in the `airports` table, along with these cryptic codes—which we will now call *keys*—and to only store these keys in the `flights` table—which is about *flights*, not airports.
However, we can use these keys to join the two tables together in our query.
In this manner, we can [*have our cake and eat it too*](https://en.wikipedia.org/w/index.php?search=have%20our%20cake%20and%20eat%20it%20too): The data are stored in separate tables for efficiency, but we can still have the full names in the result set if we choose.
Note how once again, the distinction between the rows of the original table and the result set is critical.
To write our query, we simply have to specify the table we want to join onto `flights` (e.g., `airports`) and the condition by which we want to match rows in `flights` with rows in `airports`.
In this case, we want the airport code listed in `flights.dest` to be matched to the airport code in `airports.faa`.
We need to specify that we want to see the `name` column from the `airports` table in the result set (see Table [15\.3](ch-sql.html#tab:join)).
```
SELECT
origin, dest,
airports.name AS dest_name,
flight, carrier
FROM flights
JOIN airports ON flights.dest = airports.faa
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
Table 15\.3: Using JOIN to retrieve airport names.
| origin | dest | dest\_name | flight | carrier |
| --- | --- | --- | --- | --- |
| BDL | EWR | Newark Liberty Intl | 4714 | EV |
| BDL | MIA | Miami Intl | 2015 | AA |
| BDL | DTW | Detroit Metro Wayne Co | 1644 | DL |
| BDL | BWI | Baltimore Washington Intl | 2584 | WN |
| BDL | ATL | Hartsfield Jackson Atlanta Intl | 1065 | DL |
| BDL | DCA | Ronald Reagan Washington Natl | 1077 | US |
This is much easier to read for humans.
One quick improvement to the readability of this query is to use *table aliases*.
This will save us some typing now, but a considerable amount later on.
A table alias is often just a single letter after the reserved word `AS` in the specification of each table in the `FROM` and `JOIN` clauses.
Note that these aliases can be referenced anywhere else in the query (see Table [15\.4](ch-sql.html#tab:join-alias)).
```
SELECT
origin, dest,
a.name AS dest_name,
flight, carrier
FROM flights AS o
JOIN airports AS a ON o.dest = a.faa
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
Table 15\.4: Using JOIN with table aliases.
| origin | dest | dest\_name | flight | carrier |
| --- | --- | --- | --- | --- |
| BDL | EWR | Newark Liberty Intl | 4714 | EV |
| BDL | MIA | Miami Intl | 2015 | AA |
| BDL | DTW | Detroit Metro Wayne Co | 1644 | DL |
| BDL | BWI | Baltimore Washington Intl | 2584 | WN |
| BDL | ATL | Hartsfield Jackson Atlanta Intl | 1065 | DL |
| BDL | DCA | Ronald Reagan Washington Natl | 1077 | US |
In the same manner, there are cryptic codes in `flights` for the airline carriers.
The full name of each carrier is stored in the `carriers` table, since that is the place where information about carriers is stored.
We can join this table to our result set to retrieve the name of each carrier (see Table [15\.5](ch-sql.html#tab:join-multiple)).
```
SELECT
dest, a.name AS dest_name,
o.carrier, c.name AS carrier_name
FROM flights AS o
JOIN airports AS a ON o.dest = a.faa
JOIN carriers AS c ON o.carrier = c.carrier
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
Table 15\.5: Using JOIN with multiple tables.
| dest | dest\_name | carrier | carrier\_name |
| --- | --- | --- | --- |
| EWR | Newark Liberty Intl | EV | ExpressJet Airlines Inc. |
| MIA | Miami Intl | AA | American Airlines Inc. |
| DTW | Detroit Metro Wayne Co | DL | Delta Air Lines Inc. |
| BWI | Baltimore Washington Intl | WN | Southwest Airlines Co. |
| ATL | Hartsfield Jackson Atlanta Intl | DL | Delta Air Lines Inc. |
| DCA | Ronald Reagan Washington Natl | US | US Airways Inc. |
Finally, to retrieve the name of the originating airport, we can join onto the same table more than once.
Here the table aliases are necessary.
```
SELECT
flight,
a2.name AS orig_name,
a1.name AS dest_name,
c.name AS carrier_name
FROM flights AS o
JOIN airports AS a1 ON o.dest = a1.faa
JOIN airports AS a2 ON o.origin = a2.faa
JOIN carriers AS c ON o.carrier = c.carrier
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
Table 15\.6: Using JOIN on the same table more than once.
| flight | orig\_name | dest\_name | carrier\_name |
| --- | --- | --- | --- |
| 4714 | Bradley Intl | Newark Liberty Intl | ExpressJet Airlines Inc. |
| 2015 | Bradley Intl | Miami Intl | American Airlines Inc. |
| 1644 | Bradley Intl | Detroit Metro Wayne Co | Delta Air Lines Inc. |
| 2584 | Bradley Intl | Baltimore Washington Intl | Southwest Airlines Co. |
| 1065 | Bradley Intl | Hartsfield Jackson Atlanta Intl | Delta Air Lines Inc. |
| 1077 | Bradley Intl | Ronald Reagan Washington Natl | US Airways Inc. |
Table [15\.6](ch-sql.html#tab:join-multiple-times) displays the results.
Now it is perfectly clear that [*ExpressJet*](https://en.wikipedia.org/w/index.php?search=ExpressJet) flight 4714 flew from Bradley International airport to [*Newark Liberty International airport*](https://en.wikipedia.org/w/index.php?search=Newark%20Liberty%20International%20airport) on June 26th, 2013\.
However, in order to put this together, we had to join four tables.
Wouldn’t it be easier to store these data in a single table that looks like the result set? For a variety of reasons, the answer is no.
First, there are very literal storage considerations.
The `airports.name` field has room for 255 characters.
```
DESCRIBE airports;
```
| Field | Type | Null | Key | Default | Extra |
| --- | --- | --- | --- | --- | --- |
| faa | varchar(3\) | NO | PRI | | |
| name | varchar(255\) | YES | | NA | |
| lat | decimal(10,7\) | YES | | NA | |
| lon | decimal(10,7\) | YES | | NA | |
| alt | int(11\) | YES | | NA | |
| tz | smallint(4\) | YES | | NA | |
| dst | char(1\) | YES | | NA | |
| city | varchar(255\) | YES | | NA | |
| country | varchar(255\) | YES | | NA | |
This takes up considerably more space on disk than the three\-character abbreviation stored in `airports.faa`.
For small data sets, this overhead might not matter, but the `flights` table contains 169 million rows, so replacing the three\-character `origin` field with a 255\-character field would result in a noticeable difference in space on disk. (Plus, we’d have to do this twice, since the same would apply to `dest`.)
We’d suffer a similar penalty for including the full name of each carrier in the `flights` table.
Other things being equal, tables that take up less room on disk are faster to search.
Second, it would be logically inefficient to store the full name of each airport in the `flights` table.
The name of the airport doesn’t change for each flight.
It doesn’t make sense to store the full name of the airport any more than it would make sense to store the full name of the month, instead of just the integer corresponding to each month.
Third, what if the name of the airport *did* change?
For example, in 1998 the airport with code DCA was renamed from Washington National to [*Ronald Reagan Washington National*](https://en.wikipedia.org/w/index.php?search=Ronald%20Reagan%20Washington%20National).
It is still the same airport in the same location, and it still has code DCA—only the full name has changed. With separate tables, we only need to update a single field: the `name` column in the `airports` table for the DCA row.
Had we stored the full name in the `flights` table, we would have to make millions of substitutions, and would risk ending up in a situation in which both “Washington National” and “Reagan National” were present in the table.
When designing a database, how do you know whether to create a separate table for pieces of information?
The short answer is that if you are designing a persistent, scalable database for speed and efficiency, then every *entity* should have its own table.
In practice, very often it is not worth the time and effort to set this up if we are simply doing some quick analysis.
But for permanent systems—like a database backend to a website—proper curation is necessary.
The notions of [normal forms](https://en.wikipedia.org/wiki/Database_normalization#Normal_forms), and specifically [*third normal form*](https://en.wikipedia.org/w/index.php?search=third%20normal%20form) (3NF), provide guidance for how to properly design a database.
A full discussion of this is beyond the scope of this book, but the basic idea is to “keep like with like.”
If you are designing a database that will be used for a long time or by a lot of people, take the extra time to design it well.
#### 15\.4\.7\.1 `LEFT JOIN`
Recall that in a `JOIN`—also known as an *inner* or *natural* or *regular* `JOIN`—all possible matching pairs of rows from the two tables are included.
Thus, if the first table has \\(n\\) rows and the second table has \\(m\\), as many as \\(nm\\) rows could be returned. However, in the `airports` table each row has a unique airport code, and thus every row in `flights` will match the destination field to *at most* one row in the `airports` table.
What happens if no such entry is present in `airports`?
That is, what happens if there is a destination airport in `flights` that has no corresponding entry in `airports`? If you are using a `JOIN`, then the offending row in `flights` is simply not returned.
On the other hand, if you are using a `LEFT JOIN`, then every row in the first table is returned, and the corresponding entries from the second table are left blank.
In this example, no airport names were found for several airports.
```
SELECT
year, month, day, origin, dest,
a.name AS dest_name,
flight, carrier
FROM flights AS o
LEFT JOIN airports AS a ON o.dest = a.faa
WHERE year = 2013 AND month = 6 AND day = 26
  AND a.name IS NULL
LIMIT 0, 6;
```
| year | month | day | origin | dest | dest\_name | flight | carrier |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2013 | 6 | 26 | BOS | SJU | NA | 261 | B6 |
| 2013 | 6 | 26 | JFK | SJU | NA | 1203 | B6 |
| 2013 | 6 | 26 | JFK | PSE | NA | 745 | B6 |
| 2013 | 6 | 26 | JFK | SJU | NA | 1503 | B6 |
| 2013 | 6 | 26 | JFK | BQN | NA | 839 | B6 |
| 2013 | 6 | 26 | JFK | BQN | NA | 939 | B6 |
The output indicates that the airports are all in [*Puerto Rico*](https://en.wikipedia.org/w/index.php?search=Puerto%20Rico): SJU is in [*San Juan*](https://en.wikipedia.org/w/index.php?search=San%20Juan), BQN is in [*Aguadilla*](https://en.wikipedia.org/w/index.php?search=Aguadilla), and PSE is in [*Ponce*](https://en.wikipedia.org/w/index.php?search=Ponce).
The result set from a `LEFT JOIN` is always a superset of the result set from the same query with a regular `JOIN`.
A `RIGHT JOIN` is simply the opposite of a `LEFT JOIN`—that is, the tables have simply been specified in the opposite order.
This can be useful in certain cases, especially when you are joining more than two tables.
### 15\.4\.8 `UNION`
Two separate queries can be combined using a `UNION` clause.
```
(SELECT
year, month, day, origin, dest,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL' AND dest = 'MSP')
UNION
(SELECT
year, month, day, origin, dest,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'JFK' AND dest = 'ORD')
LIMIT 0,10;
```
| year | month | day | origin | dest | flight | carrier |
| --- | --- | --- | --- | --- | --- | --- |
| 2013 | 6 | 26 | BDL | MSP | 797 | DL |
| 2013 | 6 | 26 | BDL | MSP | 3338 | 9E |
| 2013 | 6 | 26 | BDL | MSP | 1226 | DL |
| 2013 | 6 | 26 | JFK | ORD | 905 | B6 |
| 2013 | 6 | 26 | JFK | ORD | 1105 | B6 |
| 2013 | 6 | 26 | JFK | ORD | 3523 | 9E |
| 2013 | 6 | 26 | JFK | ORD | 1711 | AA |
| 2013 | 6 | 26 | JFK | ORD | 105 | B6 |
| 2013 | 6 | 26 | JFK | ORD | 3521 | 9E |
| 2013 | 6 | 26 | JFK | ORD | 3525 | 9E |
This is analogous to the **dplyr** operation `bind_rows()`.
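A minimal **dplyr** sketch of the same idea, again assuming `flights` is a `tbl_sql` reference as above (an assumption about the object name): each subset is pulled into **R** with `collect()` and then stacked. Note that `bind_rows()` does not remove duplicate rows, whereas `UNION` does, so `UNION ALL` is the closer analogue.
```
bdl_msp <- flights %>%
  filter(year == 2013, month == 6, day == 26,
         origin == "BDL", dest == "MSP") %>%
  select(year, month, day, origin, dest, flight, carrier) %>%
  collect()

jfk_ord <- flights %>%
  filter(year == 2013, month == 6, day == 26,
         origin == "JFK", dest == "ORD") %>%
  select(year, month, day, origin, dest, flight, carrier) %>%
  collect()

# stack the two result sets
bind_rows(bdl_msp, jfk_ord)
```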
### 15\.4\.9 Subqueries
It is also possible to use a result set as if it were a table.
That is, you can write one query to generate a result set, and then use that result set in a larger query as if it were a table, or even just a list of values.
The initial query is called a [*subquery*](https://en.wikipedia.org/w/index.php?search=subquery).
For example, Bradley is listed as an “international” airport, but with the exception of trips to [*Montreal*](https://en.wikipedia.org/w/index.php?search=Montreal) and [*Toronto*](https://en.wikipedia.org/w/index.php?search=Toronto) and occasional flights to [*Mexico*](https://en.wikipedia.org/w/index.php?search=Mexico) and [*Europe*](https://en.wikipedia.org/w/index.php?search=Europe), it is more of a regional airport.
Does it have any flights coming from or going to [*Alaska*](https://en.wikipedia.org/w/index.php?search=Alaska) and [*Hawaii*](https://en.wikipedia.org/w/index.php?search=Hawaii)?
We can retrieve the list of airports outside the lower 48 states by filtering the airports table using the time zone `tz` column (see Table [15\.7](ch-sql.html#tab:outside) for the first six).
```
SELECT faa, name, tz, city
FROM airports AS a
WHERE tz < -8
LIMIT 0, 6;
```
Table 15\.7: First set of six airports outside the lower 48 states.
| faa | name | tz | city |
| --- | --- | --- | --- |
| 369 | Atmautluak Airport | \-9 | Atmautluak |
| 6K8 | Tok Junction Airport | \-9 | Tok |
| ABL | Ambler Airport | \-9 | Ambler |
| ADK | Adak Airport | \-9 | Adak Island |
| ADQ | Kodiak | \-9 | Kodiak |
| AET | Allakaket Airport | \-9 | Allakaket |
Now, let’s use the airport codes generated by that query as a list to filter the flights leaving from Bradley in 2013\.
Note the subquery in parentheses in the query below.
```
SELECT
dest, a.name AS dest_name,
SUM(1) AS N, COUNT(distinct carrier) AS numCarriers
FROM flights AS o
LEFT JOIN airports AS a ON o.dest = a.faa
WHERE year = 2013
AND origin = 'BDL'
AND dest IN
(SELECT faa
FROM airports
WHERE tz < -8)
GROUP BY dest;
```
No results are returned.
As it turns out, Bradley did not have any outgoing flights to Alaska or Hawaii.
However, it did have some flights to and from airports in the Pacific Time Zone.
```
SELECT
dest, a.name AS dest_name,
SUM(1) AS N, COUNT(distinct carrier) AS numCarriers
FROM flights AS o
LEFT JOIN airports AS a ON o.origin = a.faa
WHERE year = 2013
AND dest = 'BDL'
AND origin IN
(SELECT faa
FROM airports
WHERE tz < -7)
GROUP BY origin;
```
| dest | dest\_name | N | numCarriers |
| --- | --- | --- | --- |
| BDL | Mc Carran Intl | 262 | 1 |
| BDL | Los Angeles Intl | 127 | 1 |
We could also employ a similar subquery to create an ephemeral table (results not shown).
```
SELECT
dest, a.name AS dest_name,
SUM(1) AS N, COUNT(distinct carrier) AS numCarriers
FROM flights AS o
JOIN (SELECT *
FROM airports
WHERE tz < -7) AS a
ON o.origin = a.faa
WHERE year = 2013 AND dest = 'BDL'
GROUP BY origin;
```
Of course, we could have achieved the same result with a `JOIN` and `WHERE` (results not shown).
```
SELECT
dest, a.name AS dest_name,
SUM(1) AS N, COUNT(distinct carrier) AS numCarriers
FROM flights AS o
LEFT JOIN airports AS a ON o.origin = a.faa
WHERE year = 2013
AND dest = 'BDL'
AND tz < -7
GROUP BY origin;
```
It is important to note that while subqueries are often convenient, they cannot make use of indices.
In most cases it is preferable to write the query using joins as opposed to subqueries.
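For comparison, the **dplyr** analogue of the `IN` subquery is a filtering join such as `semi_join()`. Below is a minimal sketch, assuming that `flights` and `airports` are `tbl_sql` references like the `carriers` object used earlier (an assumption about the object names).
```
# airports outside the lower 48 states (tz < -8)
far_west <- airports %>%
  filter(tz < -8) %>%
  select(faa)

# keep only flights out of Bradley whose destination appears in far_west;
# semi_join() filters the rows of the first table without adding columns
flights %>%
  filter(year == 2013, origin == "BDL") %>%
  semi_join(far_west, by = c("dest" = "faa")) %>%
  group_by(dest) %>%
  summarize(N = n(), numCarriers = n_distinct(carrier))
```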
### 15\.4\.1 `SELECT...FROM`
As noted above, every SQL `SELECT` query must contain `SELECT` and `FROM`.
The analyst may specify columns to be retrieved. We saw above that the `airports` table contains seven columns. If we only wanted to retrieve the FAA `code` and `name` of each airport, we could write the following query.
```
SELECT faa, name FROM airports;
```
| faa | name |
| --- | --- |
| 04G | Lansdowne Airport |
| 06A | Moton Field Municipal Airport |
| 06C | Schaumburg Regional |
In addition to columns that are present in the database, one can retrieve columns that are functions of other columns.
For example, if we wanted to return the geographic coordinates of each airport as an \\((x,y)\\) pair, we could combine those fields.
```
SELECT
name,
CONCAT('(', lat, ', ', lon, ')')
FROM airports
LIMIT 0, 6;
```
| name | CONCAT(‘(,’ lat, ‘,’ lon, ‘)’) |
| --- | --- |
| Lansdowne Airport | (41\.1304722, \-80\.6195833\) |
| Moton Field Municipal Airport | (32\.4605722, \-85\.6800278\) |
| Schaumburg Regional | (41\.9893408, \-88\.1012428\) |
| Randall Airport | (41\.4319120, \-74\.3915611\) |
| Jekyll Island Airport | (31\.0744722, \-81\.4277778\) |
| Elizabethton Municipal Airport | (36\.3712222, \-82\.1734167\) |
Note that the column header for the derived column above is ungainly, since it consists of the entire formula that we used to construct it!
This is difficult to read, and would be cumbersome to work with.
An easy fix is to give this derived column an *alias*. We can do this using the keyword `AS`.
```
SELECT
name,
CONCAT('(', lat, ', ', lon, ')') AS coords
FROM airports
LIMIT 0, 6;
```
| name | coords |
| --- | --- |
| Lansdowne Airport | (41\.1304722, \-80\.6195833\) |
| Moton Field Municipal Airport | (32\.4605722, \-85\.6800278\) |
| Schaumburg Regional | (41\.9893408, \-88\.1012428\) |
| Randall Airport | (41\.4319120, \-74\.3915611\) |
| Jekyll Island Airport | (31\.0744722, \-81\.4277778\) |
| Elizabethton Municipal Airport | (36\.3712222, \-82\.1734167\) |
We can also use `AS` to refer to a column in the table by a different name in the result set.
```
SELECT
name AS airport_name,
CONCAT('(', lat, ', ', lon, ')') AS coords
FROM airports
LIMIT 0, 6;
```
| airport\_name | coords |
| --- | --- |
| Lansdowne Airport | (41\.1304722, \-80\.6195833\) |
| Moton Field Municipal Airport | (32\.4605722, \-85\.6800278\) |
| Schaumburg Regional | (41\.9893408, \-88\.1012428\) |
| Randall Airport | (41\.4319120, \-74\.3915611\) |
| Jekyll Island Airport | (31\.0744722, \-81\.4277778\) |
| Elizabethton Municipal Airport | (36\.3712222, \-82\.1734167\) |
This brings an important distinction to the fore: In SQL, it is crucial to distinguish between clauses that operate *on the rows of the original table* versus those that operate *on the rows of the result set*.
Here, `name`, `lat`, and `lon` are columns in the original table—they are written to the disk on the SQL server.
On the other hand, `airport_name` and `coords` exist only in the result set—which is passed from the server to the client and is not written to the disk.
The preceding examples show the SQL equivalents of the **dplyr** commands `select()`, `mutate()`, and `rename()`.
### 15\.4\.2 `WHERE`
The `WHERE` clause is analogous to the `filter()` command in **dplyr**—it allows you to restrict the set of rows that are retrieved to only those rows that match a certain condition.
Thus, while there are several million rows in the `flights` table in each year—each corresponding to a single flight—there were only a few dozen flights that left [*Bradley International Airport*](https://en.wikipedia.org/w/index.php?search=Bradley%20International%20Airport) on June 26th, 2013\.
```
SELECT
year, month, day, origin, dest,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
| year | month | day | origin | dest | flight | carrier |
| --- | --- | --- | --- | --- | --- | --- |
| 2013 | 6 | 26 | BDL | EWR | 4714 | EV |
| 2013 | 6 | 26 | BDL | MIA | 2015 | AA |
| 2013 | 6 | 26 | BDL | DTW | 1644 | DL |
| 2013 | 6 | 26 | BDL | BWI | 2584 | WN |
| 2013 | 6 | 26 | BDL | ATL | 1065 | DL |
| 2013 | 6 | 26 | BDL | DCA | 1077 | US |
It would be convenient to search for flights in a date range.
Unfortunately, there is no date field in this table—but rather separate columns for the `year`, `month`, and `day`.
Nevertheless, we can tell SQL to interpret these columns as a date, using the `STR_TO_DATE()` function.[26](#fn26) Unlike in **R** code, function names in SQL code are customarily capitalized.
Dates and times can be challenging to wrangle.
To learn more about these date tokens, see the MySQL [documentation](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_str-to-date) for `STR_TO_DATE()`.
```
SELECT
STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d') AS theDate,
origin,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
| theDate | origin | flight | carrier |
| --- | --- | --- | --- |
| 2013\-06\-26 | BDL | 4714 | EV |
| 2013\-06\-26 | BDL | 2015 | AA |
| 2013\-06\-26 | BDL | 1644 | DL |
| 2013\-06\-26 | BDL | 2584 | WN |
| 2013\-06\-26 | BDL | 1065 | DL |
| 2013\-06\-26 | BDL | 1077 | US |
Note that here we have used a `WHERE` clause on columns that are not present in the result set. We can do this because `WHERE` operates only on the rows of the original table.
Conversely, if we were to try to use a `WHERE` clause on `theDate`, it would not work because, as the error suggests, `theDate` is not the name of a column in the `flights` table.
```
SELECT
STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d') AS theDate,
origin, flight, carrier
FROM flights
WHERE theDate = '2013-06-26'
AND origin = 'BDL'
LIMIT 0, 6;
```
```
ERROR 1054 (42S22): Unknown column 'theDate' in 'where clause'
```
A workaround is to copy and paste the definition of `theDate` into the `WHERE` clause, since `WHERE` *can* operate on functions of columns in the original table (results not shown).
```
SELECT
STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d') AS theDate,
origin, flight, carrier
FROM flights
WHERE STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d') =
'2013-06-26'
AND origin = 'BDL'
LIMIT 0, 6;
```
This query will work, but here we have stumbled onto another wrinkle that exposes subtleties in how SQL executes queries.
The previous query was able to make use of indices defined on the `year`, `month`, and `day` columns.
However, the latter query is not able to make use of these indices because it is trying to filter on functions of a combination of those columns.
This makes the latter query very slow.
We will return to a fuller discussion of indices in Section [16\.1](ch-sql2.html#sec:indices).
Finally, we can use the `BETWEEN` syntax to filter through a range of dates.
The `DISTINCT` keyword limits the result set to one row per unique value of `theDate`.
```
SELECT
DISTINCT STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d')
AS theDate
FROM flights
WHERE year = 2013 AND month = 6 AND day BETWEEN 26 and 30
AND origin = 'BDL'
LIMIT 0, 6;
```
| theDate |
| --- |
| 2013\-06\-26 |
| 2013\-06\-27 |
| 2013\-06\-28 |
| 2013\-06\-29 |
| 2013\-06\-30 |
Similarly, we can use the `IN` syntax to search for items in a specified list.
Note that flights on the 27th, 28th, and 29th of June are retrieved in the query using `BETWEEN` but not in the query using `IN`.
```
SELECT
DISTINCT STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d')
AS theDate
FROM flights
WHERE year = 2013 AND month = 6 AND day IN (26, 30)
AND origin = 'BDL'
LIMIT 0, 6;
```
| theDate |
| --- |
| 2013\-06\-26 |
| 2013\-06\-30 |
SQL also supports `OR` clauses in addition to `AND` clauses, but one must always be careful with parentheses when using `OR`. Note the difference in the numbers of rows returned by the following two queries (557,874 vs. 2,542\). The `COUNT` function simply counts the number of rows. The criteria in a `WHERE` clause are not evaluated left to right: the `AND`s are evaluated before the `OR`. Thus, the first query below returns every flight in June of 2013 (from any origin), together with every flight that left Bradley on the 26th day of any month in any year.
```
/* returns 557,874 records */
SELECT
COUNT(*) AS N
FROM flights
WHERE year = 2013 AND month = 6 OR day = 26
AND origin = 'BDL';
```
```
/* returns 2,542 records */
SELECT
COUNT(*) AS N
FROM flights
WHERE year = 2013 AND (month = 6 OR day = 26)
AND origin = 'BDL';
```
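Because the `AND`s are evaluated before the `OR`, the first query above is interpreted as if it had been written with the following explicit grouping; this sketch should return the same 557,874 records (results not shown).
```
/* equivalent to the first query above */
SELECT
  COUNT(*) AS N
FROM flights
WHERE (year = 2013 AND month = 6)
  OR (day = 26 AND origin = 'BDL');
```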
### 15\.4\.3 `GROUP BY`
The `GROUP BY` clause allows one to *aggregate* multiple rows according to some criteria.
The challenge when using `GROUP BY` is specifying *how* multiple rows of data should be reduced into a single value. [Aggregate functions](https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html) (e.g., `COUNT()`, `SUM()`, `MAX()`, and `AVG()`) are necessary.
We know that there were 65 flights that left Bradley Airport on June 26th, 2013, but how many belonged to each airline carrier?
To get this information we need to aggregate the individual flights, based on who the carrier was.
```
SELECT
carrier,
COUNT(*) AS numFlights,
SUM(1) AS numFlightsAlso
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
GROUP BY carrier;
```
| carrier | numFlights | numFlightsAlso |
| --- | --- | --- |
| 9E | 5 | 5 |
| AA | 4 | 4 |
| B6 | 5 | 5 |
| DL | 11 | 11 |
| EV | 5 | 5 |
| MQ | 5 | 5 |
| UA | 1 | 1 |
| US | 7 | 7 |
| WN | 19 | 19 |
| YV | 3 | 3 |
For each of these airlines, which flight left the earliest in the morning?
```
SELECT
carrier,
COUNT(*) AS numFlights,
MIN(dep_time)
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
GROUP BY carrier;
```
| carrier | numFlights | MIN(dep\_time) |
| --- | --- | --- |
| 9E | 5 | 0 |
| AA | 4 | 559 |
| B6 | 5 | 719 |
| DL | 11 | 559 |
| EV | 5 | 555 |
| MQ | 5 | 0 |
| UA | 1 | 0 |
| US | 7 | 618 |
| WN | 19 | 601 |
| YV | 3 | 0 |
This is a bit tricky to figure out because the `dep_time` variable is stored as an integer, but would be better represented as a `time` data type.
If it is a three\-digit integer, then the first digit is the hour, but if it is a four\-digit integer, then the first two digits are the hour.
In either case, the last two digits are the minutes, and there are no seconds recorded.
The `MAKETIME()` function combined with the `IF(condition, value if true, value if false)` statement can help us with this.
```
SELECT
carrier,
COUNT(*) AS numFlights,
MAKETIME(
IF(LENGTH(MIN(dep_time)) = 3,
LEFT(MIN(dep_time), 1),
LEFT(MIN(dep_time), 2)
),
RIGHT(MIN(dep_time), 2),
0
) AS firstDepartureTime
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
GROUP BY carrier
LIMIT 0, 6;
```
| carrier | numFlights | firstDepartureTime |
| --- | --- | --- |
| 9E | 5 | 00:00:00 |
| AA | 4 | 05:59:00 |
| B6 | 5 | 07:19:00 |
| DL | 11 | 05:59:00 |
| EV | 5 | 05:55:00 |
| MQ | 5 | 00:00:00 |
We can also group by more than one column, but need to be careful to specify that we apply an aggregate function to each column that we are *not* grouping by.
In this case, every time we access `dep_time`, we apply the `MIN()` function, since there may be many different values of `dep_time` associated with each unique combination of `carrier` and `dest`.
Applying the `MIN()` function returns the smallest such value unambiguously.
```
SELECT
carrier, dest,
COUNT(*) AS numFlights,
MAKETIME(
IF(LENGTH(MIN(dep_time)) = 3,
LEFT(MIN(dep_time), 1),
LEFT(MIN(dep_time), 2)
),
RIGHT(MIN(dep_time), 2),
0
) AS firstDepartureTime
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
GROUP BY carrier, dest
LIMIT 0, 6;
```
| carrier | dest | numFlights | firstDepartureTime |
| --- | --- | --- | --- |
| 9E | CVG | 2 | 00:00:00 |
| 9E | DTW | 1 | 18:20:00 |
| 9E | MSP | 1 | 11:25:00 |
| 9E | RDU | 1 | 09:38:00 |
| AA | DFW | 3 | 07:04:00 |
| AA | MIA | 1 | 05:59:00 |
### 15\.4\.4 `ORDER BY`
The use of aggregate functions allows us to answer some very basic exploratory questions.
Combining this with an `ORDER BY` clause will bring the most interesting results to the top.
For example, which destinations are most common from Bradley in 2013?
```
SELECT
dest, SUM(1) AS numFlights
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
ORDER BY numFlights DESC
LIMIT 0, 6;
```
| dest | numFlights |
| --- | --- |
| ORD | 2657 |
| BWI | 2613 |
| ATL | 2277 |
| CLT | 1842 |
| MCO | 1789 |
| DTW | 1523 |
Note that since the `ORDER BY` clause cannot be executed until all of the data are retrieved, it operates on the result set, and not the rows of the original data.
Thus, derived columns *can* be referenced in the `ORDER BY` clause.
Which of those destinations had the lowest average arrival delay time?
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
| dest | numFlights | avg\_arr\_delay |
| --- | --- | --- |
| CLE | 57 | \-13\.07 |
| LAX | 127 | \-10\.31 |
| CVG | 708 | \-7\.37 |
| MSP | 981 | \-3\.66 |
| MIA | 404 | \-3\.27 |
| DCA | 204 | \-2\.90 |
[*Cleveland Hopkins International Airport*](https://en.wikipedia.org/w/index.php?search=Cleveland%20Hopkins%20International%20Airport) (CLE) has the smallest average arrival delay time.
### 15\.4\.5 `HAVING`
Although flights to Cleveland had the lowest average arrival delay—more than 13 minutes ahead of schedule—there were only 57 flights from Bradley to Cleveland in all of 2013\.
It probably makes more sense to consider only those destinations that had, say, at least two flights per day.
We can filter our result set using a `HAVING` clause.
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
HAVING numFlights > 365 * 2
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
| dest | numFlights | avg\_arr\_delay |
| --- | --- | --- |
| MSP | 981 | \-3\.664 |
| DTW | 1523 | \-2\.148 |
| CLT | 1842 | \-0\.120 |
| FLL | 1011 | 0\.277 |
| DFW | 1062 | 0\.750 |
| ATL | 2277 | 4\.470 |
We can see now that among the airports that are common destinations from Bradley, Minneapolis\-St. Paul has the lowest average arrival delay time, at nearly 4 minutes ahead of schedule, on average.
Note that MySQL and SQLite support the use of derived column aliases in `HAVING` clauses, but PostgreSQL does not.
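In a dialect that does not allow aliases in the `HAVING` clause, one can simply repeat the aggregate expression; the following sketch should return the same result (results not shown).
```
SELECT
  dest, SUM(1) AS numFlights,
  AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
  AND origin = 'BDL'
GROUP BY dest
HAVING SUM(1) > 365 * 2
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```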
It is important to understand that the `HAVING` clause operates on the result set.
While `WHERE` and `HAVING` are similar in spirit and syntax (and indeed, in **dplyr** they are both masked by the `filter()` function), they are different, because `WHERE` operates on the original data in the table and `HAVING` operates on the result set. Moving the `HAVING` condition to the `WHERE` clause will not work.
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
AND numFlights > 365 * 2
GROUP BY dest
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
```
ERROR 1054 (42S22): Unknown column 'numFlights' in 'where clause'
```
On the other hand, moving the `WHERE` conditions to the `HAVING` clause will work, but could result in a major loss of efficiency.
The following query will return the same result as the one we considered previously.
```
SELECT
origin, dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
GROUP BY origin, dest
HAVING numFlights > 365 * 2
AND origin = 'BDL'
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
Moving the `origin = 'BDL'` condition to the `HAVING` clause means that *all* airport destinations had to be considered.
With this condition in the `WHERE` clause, the server can quickly identify only those flights that left Bradley, perform the aggregation, and then filter this relatively small result set for those entries with a sufficient number of flights.
Conversely, with this condition in the `HAVING` clause, the server is forced to consider *all* 3 million flights from 2013, perform the aggregation for all pairs of airports, and then filter this much larger result set for those entries with a sufficient number of flights from Bradley.
The filtering of the result set is not the slow part; rather, it is the aggregation over 3 million rows, as opposed to a few thousand, that is costly.
To maximize query efficiency, put conditions in a `WHERE` clause as opposed to a `HAVING` clause whenever possible.
### 15\.4\.6 `LIMIT`
A `LIMIT` clause simply allows you to truncate the output to a specified number of rows.
This achieves an effect analogous to the **R** commands `head()` or `slice()`.
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
HAVING numFlights > 365*2
ORDER BY avg_arr_delay ASC
LIMIT 0, 6;
```
| dest | numFlights | avg\_arr\_delay |
| --- | --- | --- |
| MSP | 981 | \-3\.664 |
| DTW | 1523 | \-2\.148 |
| CLT | 1842 | \-0\.120 |
| FLL | 1011 | 0\.277 |
| DFW | 1062 | 0\.750 |
| ATL | 2277 | 4\.470 |
Note, however, that it is also possible to retrieve rows not at the beginning.
The first number in the `LIMIT` clause indicates the number of rows to skip, and the latter indicates the number of rows to retrieve.
Thus, this query will return the 4th–7th airports in the previous list.
```
SELECT
dest, SUM(1) AS numFlights,
AVG(arr_delay) AS avg_arr_delay
FROM flights
WHERE year = 2013
AND origin = 'BDL'
GROUP BY dest
HAVING numFlights > 365*2
ORDER BY avg_arr_delay ASC
LIMIT 3,4;
```
| dest | numFlights | avg\_arr\_delay |
| --- | --- | --- |
| FLL | 1011 | 0\.277 |
| DFW | 1062 | 0\.750 |
| ATL | 2277 | 4\.470 |
| BWI | 2613 | 5\.032 |
### 15\.4\.7 `JOIN`
In Chapter [5](ch-join.html#ch:join), we presented the **dplyr** join operators `inner_join()` and `left_join()`.
Other functions (e.g., `semi_join()`) are also available.
As you might expect, these operations are fundamental to SQL—and moreover, the success of the RDBMS paradigm is predicated on the ability to efficiently join tables together.
Recall that SQL is a language for *relational* database management systems: the relations between the tables allow you to write queries that efficiently tie together information from multiple sources.
The syntax for performing these operations in SQL requires the `JOIN` keyword.
In general, there are four pieces of information that you need to specify in order to join two tables:
* The name of the first table that you want to join
* (optional) The *type* of join that you want to use
* The name of the second table that you want to join
* The *condition(s)* under which you want the records in the first table to match the records in the second table
There are many possible permutations of how two tables can be joined, and in many cases, a single query may involve several or even dozens of tables.
In practice, the `JOIN` syntax varies among SQL implementations.
In MySQL, a `FULL OUTER JOIN` is not available, but the following join types are:
* `JOIN`: includes all of the rows that are present in *both* tables and match.
* `LEFT JOIN`: includes all of the rows that are present in the first table. Rows in the first table that have no match in the second are filled with `NULL`s.
* `RIGHT JOIN`: include all of the rows that are present in the second table. This is the opposite of a `LEFT JOIN`.
* `CROSS JOIN`: the Cartesian product of the two tables. Thus, all possible pairings of rows from the two tables are returned, since there is no matching condition (see the sketch below).
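To illustrate the last case, the following sketch counts the rows in the Cartesian product of the `airports` and `carriers` tables; with no matching condition, the count is simply the product of the two table sizes (results not shown).
```
SELECT COUNT(*) AS numPairs
FROM airports
CROSS JOIN carriers;
```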
Recall that in the `flights` table, the `origin` and `dest`ination of each flight are recorded.
```
SELECT
origin, dest,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
| origin | dest | flight | carrier |
| --- | --- | --- | --- |
| BDL | EWR | 4714 | EV |
| BDL | MIA | 2015 | AA |
| BDL | DTW | 1644 | DL |
| BDL | BWI | 2584 | WN |
| BDL | ATL | 1065 | DL |
| BDL | DCA | 1077 | US |
Note that the `flights` table contains only the three\-character FAA airport codes for both airports—not the full name of the airport.
These cryptic abbreviations are not easily understood by humans.
Which airport is `EWR`?
Wouldn’t it be more convenient to have the airport name in the table?
It would be more convenient, but it would also be significantly less efficient from a storage and retrieval point of view, as well as more problematic from a [*database integrity*](https://en.wikipedia.org/w/index.php?search=database%20integrity) point of view.
The solution is to store information *about airports* in the `airports` table, along with these cryptic codes—which we will now call *keys*—and to only store these keys in the `flights` table—which is about *flights*, not airports.
However, we can use these keys to join the two tables together in our query.
In this manner, we can [*have our cake and eat it too*](https://en.wikipedia.org/w/index.php?search=have%20our%20cake%20and%20eat%20it%20too): The data are stored in separate tables for efficiency, but we can still have the full names in the result set if we choose.
Note how once again, the distinction between the rows of the original table and the result set is critical.
To write our query, we simply have to specify the table we want to join onto `flights` (e.g., `airports`) and the condition by which we want to match rows in `flights` with rows in `airports`.
In this case, we want the airport code listed in `flights.dest` to be matched to the airport code in `airports.faa`.
We need to specify that we want to see the `name` column from the `airports` table in the result set (see Table [15\.3](ch-sql.html#tab:join)).
```
SELECT
origin, dest,
airports.name AS dest_name,
flight, carrier
FROM flights
JOIN airports ON flights.dest = airports.faa
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
Table 15\.3: Using JOIN to retrieve airport names.
| origin | dest | dest\_name | flight | carrier |
| --- | --- | --- | --- | --- |
| BDL | EWR | Newark Liberty Intl | 4714 | EV |
| BDL | MIA | Miami Intl | 2015 | AA |
| BDL | DTW | Detroit Metro Wayne Co | 1644 | DL |
| BDL | BWI | Baltimore Washington Intl | 2584 | WN |
| BDL | ATL | Hartsfield Jackson Atlanta Intl | 1065 | DL |
| BDL | DCA | Ronald Reagan Washington Natl | 1077 | US |
This is much easier to read for humans.
One quick improvement to the readability of this query is to use *table aliases*.
This will save us some typing now, but a considerable amount later on.
A table alias is often just a single letter after the reserved word `AS` in the specification of each table in the `FROM` and `JOIN` clauses.
Note that these aliases can be referenced anywhere else in the query (see Table [15\.4](ch-sql.html#tab:join-alias)).
```
SELECT
origin, dest,
a.name AS dest_name,
flight, carrier
FROM flights AS o
JOIN airports AS a ON o.dest = a.faa
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
Table 15\.4: Using JOIN with table aliases.
| origin | dest | dest\_name | flight | carrier |
| --- | --- | --- | --- | --- |
| BDL | EWR | Newark Liberty Intl | 4714 | EV |
| BDL | MIA | Miami Intl | 2015 | AA |
| BDL | DTW | Detroit Metro Wayne Co | 1644 | DL |
| BDL | BWI | Baltimore Washington Intl | 2584 | WN |
| BDL | ATL | Hartsfield Jackson Atlanta Intl | 1065 | DL |
| BDL | DCA | Ronald Reagan Washington Natl | 1077 | US |
In the same manner, there are cryptic codes in `flights` for the airline carriers.
The full name of each carrier is stored in the `carriers` table, since that is the place where information about carriers is stored.
We can join this table to our result set to retrieve the name of each carrier (see Table [15\.5](ch-sql.html#tab:join-multiple)).
```
SELECT
dest, a.name AS dest_name,
o.carrier, c.name AS carrier_name
FROM flights AS o
JOIN airports AS a ON o.dest = a.faa
JOIN carriers AS c ON o.carrier = c.carrier
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
Table 15\.5: Using JOIN with multiple tables.
| dest | dest\_name | carrier | carrier\_name |
| --- | --- | --- | --- |
| EWR | Newark Liberty Intl | EV | ExpressJet Airlines Inc. |
| MIA | Miami Intl | AA | American Airlines Inc. |
| DTW | Detroit Metro Wayne Co | DL | Delta Air Lines Inc. |
| BWI | Baltimore Washington Intl | WN | Southwest Airlines Co. |
| ATL | Hartsfield Jackson Atlanta Intl | DL | Delta Air Lines Inc. |
| DCA | Ronald Reagan Washington Natl | US | US Airways Inc. |
Finally, to retrieve the name of the originating airport, we can join onto the same table more than once.
Here the table aliases are necessary.
```
SELECT
flight,
a2.name AS orig_name,
a1.name AS dest_name,
c.name AS carrier_name
FROM flights AS o
JOIN airports AS a1 ON o.dest = a1.faa
JOIN airports AS a2 ON o.origin = a2.faa
JOIN carriers AS c ON o.carrier = c.carrier
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL'
LIMIT 0, 6;
```
Table 15\.6: Using JOIN on the same table more than once.
| flight | orig\_name | dest\_name | carrier\_name |
| --- | --- | --- | --- |
| 4714 | Bradley Intl | Newark Liberty Intl | ExpressJet Airlines Inc. |
| 2015 | Bradley Intl | Miami Intl | American Airlines Inc. |
| 1644 | Bradley Intl | Detroit Metro Wayne Co | Delta Air Lines Inc. |
| 2584 | Bradley Intl | Baltimore Washington Intl | Southwest Airlines Co. |
| 1065 | Bradley Intl | Hartsfield Jackson Atlanta Intl | Delta Air Lines Inc. |
| 1077 | Bradley Intl | Ronald Reagan Washington Natl | US Airways Inc. |
Table [15\.6](ch-sql.html#tab:join-multiple-times) displays the results.
Now it is perfectly clear that [*ExpressJet*](https://en.wikipedia.org/w/index.php?search=ExpressJet) flight 4714 flew from Bradley International airport to [*Newark Liberty International airport*](https://en.wikipedia.org/w/index.php?search=Newark%20Liberty%20International%20airport) on June 26th, 2013\.
However, in order to put this together, we had to join four tables.
Wouldn’t it be easier to store these data in a single table that looks like the result set? For a variety of reasons, the answer is no.
First, there are very literal storage considerations.
The `airports.name` field has room for 255 characters.
```
DESCRIBE airports;
```
| Field | Type | Null | Key | Default | Extra |
| --- | --- | --- | --- | --- | --- |
| faa | varchar(3\) | NO | PRI | | |
| name | varchar(255\) | YES | | NA | |
| lat | decimal(10,7\) | YES | | NA | |
| lon | decimal(10,7\) | YES | | NA | |
| alt | int(11\) | YES | | NA | |
| tz | smallint(4\) | YES | | NA | |
| dst | char(1\) | YES | | NA | |
| city | varchar(255\) | YES | | NA | |
| country | varchar(255\) | YES | | NA | |
This takes up considerably more space on disk than the three\-character abbreviation stored in `airports.faa`.
For small data sets, this overhead might not matter, but the `flights` table contains 169 million rows, so replacing the three\-character `origin` field with a 255\-character field would result in a noticeable difference in space on disk. (Plus, we'd have to do this twice, since the same would apply to `dest`.)
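As a rough back\-of\-the\-envelope estimate (and assuming an average airport name of about 25 characters, rather than the 255\-character maximum), storing the name instead of the code adds roughly 22 bytes per field, or \\(22 \\times 2 \\times 169 \\approx 7400\\) million additional bytes across the two fields and 169 million rows: about 7 gigabytes, compared to roughly 1 gigabyte for the three\-character codes themselves.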
We’d suffer a similar penalty for including the full name of each carrier in the `flights` table.
Other things being equal, tables that take up less room on disk are faster to search.
Second, it would be logically inefficient to store the full name of each airport in the `flights` table.
The name of the airport doesn’t change for each flight.
It doesn’t make sense to store the full name of the airport any more than it would make sense to store the full name of the month, instead of just the integer corresponding to each month.
Third, what if the name of the airport *did* change?
For example, in 1998 the airport with code DCA was renamed from Washington National to [*Ronald Reagan Washington National*](https://en.wikipedia.org/w/index.php?search=Ronald%20Reagan%20Washington%20National).
It is still the same airport in the same location, and it still has code DCA—only the full name has changed. With separate tables, we only need to update a single field: the `name` column in the `airports` table for the DCA row.
Had we stored the full name in the `flights` table, we would have to make millions of substitutions, and would risk ending up in a situation in which both “Washington National” and “Reagan National” were present in the table.
When designing a database, how do you know whether to create a separate table for pieces of information?
The short answer is that if you are designing a persistent, scalable database for speed and efficiency, then every *entity* should have its own table.
In practice, very often it is not worth the time and effort to set this up if we are simply doing some quick analysis.
But for permanent systems—like a database backend to a website—proper curation is necessary.
The notions of [normal forms](https://en.wikipedia.org/wiki/Database_normalization#Normal_forms), and specifically [*third normal form*](https://en.wikipedia.org/w/index.php?search=third%20normal%20form) (3NF), provide guidance for how to properly design a database.
A full discussion of this is beyond the scope of this book, but the basic idea is to “keep like with like.”
If you are designing a database that will be used for a long time or by a lot of people, take the extra time to design it well.
#### 15\.4\.7\.1 `LEFT JOIN`
Recall that in a `JOIN`—also known as an *inner* or *natural* or *regular* `JOIN`—all possible matching pairs of rows from the two tables are included.
Thus, if the first table has \\(n\\) rows and the second table has \\(m\\), as many as \\(nm\\) rows could be returned. However, in the `airports` table each row has a unique airport code, and thus every row in `flights` will match the destination field to *at most* one row in the `airports` table.
What happens if no such entry is present in `airports`?
That is, what happens if there is a destination airport in `flights` that has no corresponding entry in `airports`? If you are using a `JOIN`, then the offending row in `flights` is simply not returned.
On the other hand, if you are using a `LEFT JOIN`, then every row in the first table is returned, and the corresponding entries from the second table are left blank.
In this example, no airport names were found for several airports.
```
SELECT
year, month, day, origin, dest,
a.name AS dest_name,
flight, carrier
FROM flights AS o
LEFT JOIN airports AS a ON o.dest = a.faa
WHERE year = 2013 AND month = 6 AND day = 26
AND a.name is null
LIMIT 0, 6;
```
| year | month | day | origin | dest | dest\_name | flight | carrier |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2013 | 6 | 26 | BOS | SJU | NA | 261 | B6 |
| 2013 | 6 | 26 | JFK | SJU | NA | 1203 | B6 |
| 2013 | 6 | 26 | JFK | PSE | NA | 745 | B6 |
| 2013 | 6 | 26 | JFK | SJU | NA | 1503 | B6 |
| 2013 | 6 | 26 | JFK | BQN | NA | 839 | B6 |
| 2013 | 6 | 26 | JFK | BQN | NA | 939 | B6 |
The output indicates that the airports are all in [*Puerto Rico*](https://en.wikipedia.org/w/index.php?search=Puerto%20Rico): SJU is in [*San Juan*](https://en.wikipedia.org/w/index.php?search=San%20Juan), BQN is in [*Aguadilla*](https://en.wikipedia.org/w/index.php?search=Aguadilla), and PSE is in [*Ponce*](https://en.wikipedia.org/w/index.php?search=Ponce).
The result set from a `LEFT JOIN` is always a superset of the result set from the same query with a regular `JOIN`.
A `RIGHT JOIN` is simply the opposite of a `LEFT JOIN`—that is, the tables have simply been specified in the opposite order.
This can be useful in certain cases, especially when you are joining more than two tables.
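For instance, the `LEFT JOIN` above could equivalently be written as a `RIGHT JOIN` with the two tables listed in the opposite order; this sketch should return the same rows (results not shown).
```
SELECT
  year, month, day, origin, dest,
  a.name AS dest_name,
  flight, carrier
FROM airports AS a
RIGHT JOIN flights AS o ON o.dest = a.faa
WHERE year = 2013 AND month = 6 AND day = 26
  AND a.name IS NULL
LIMIT 0, 6;
```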
### 15\.4\.8 `UNION`
Two separate queries can be combined using a `UNION` clause.
```
(SELECT
year, month, day, origin, dest,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'BDL' AND dest = 'MSP')
UNION
(SELECT
year, month, day, origin, dest,
flight, carrier
FROM flights
WHERE year = 2013 AND month = 6 AND day = 26
AND origin = 'JFK' AND dest = 'ORD')
LIMIT 0,10;
```
| year | month | day | origin | dest | flight | carrier |
| --- | --- | --- | --- | --- | --- | --- |
| 2013 | 6 | 26 | BDL | MSP | 797 | DL |
| 2013 | 6 | 26 | BDL | MSP | 3338 | 9E |
| 2013 | 6 | 26 | BDL | MSP | 1226 | DL |
| 2013 | 6 | 26 | JFK | ORD | 905 | B6 |
| 2013 | 6 | 26 | JFK | ORD | 1105 | B6 |
| 2013 | 6 | 26 | JFK | ORD | 3523 | 9E |
| 2013 | 6 | 26 | JFK | ORD | 1711 | AA |
| 2013 | 6 | 26 | JFK | ORD | 105 | B6 |
| 2013 | 6 | 26 | JFK | ORD | 3521 | 9E |
| 2013 | 6 | 26 | JFK | ORD | 3525 | 9E |
This is analogous to the **dplyr** operation `bind_rows()`.
### 15\.4\.9 Subqueries
It is also possible to use a result set as if it were a table.
That is, you can write one query to generate a result set, and then use that result set in a larger query as if it were a table, or even just a list of values.
The initial query is called a [*subquery*](https://en.wikipedia.org/w/index.php?search=subquery).
For example, Bradley is listed as an “international” airport, but with the exception of trips to [*Montreal*](https://en.wikipedia.org/w/index.php?search=Montreal) and [*Toronto*](https://en.wikipedia.org/w/index.php?search=Toronto) and occasional flights to [*Mexico*](https://en.wikipedia.org/w/index.php?search=Mexico) and [*Europe*](https://en.wikipedia.org/w/index.php?search=Europe), it is more of a regional airport.
Does it have any flights coming from or going to [*Alaska*](https://en.wikipedia.org/w/index.php?search=Alaska) and [*Hawaii*](https://en.wikipedia.org/w/index.php?search=Hawaii)?
We can retrieve the list of airports outside the lower 48 states by filtering the airports table using the time zone `tz` column (see Table [15\.7](ch-sql.html#tab:outside) for the first six).
```
SELECT faa, name, tz, city
FROM airports AS a
WHERE tz < -8
LIMIT 0, 6;
```
Table 15\.7: First set of six airports outside the lower 48 states.
| faa | name | tz | city |
| --- | --- | --- | --- |
| 369 | Atmautluak Airport | \-9 | Atmautluak |
| 6K8 | Tok Junction Airport | \-9 | Tok |
| ABL | Ambler Airport | \-9 | Ambler |
| ADK | Adak Airport | \-9 | Adak Island |
| ADQ | Kodiak | \-9 | Kodiak |
| AET | Allakaket Airport | \-9 | Allakaket |
Now, let’s use the airport codes generated by that query as a list to filter the flights leaving from Bradley in 2013\.
Note the subquery in parentheses in the query below.
```
SELECT
dest, a.name AS dest_name,
SUM(1) AS N, COUNT(distinct carrier) AS numCarriers
FROM flights AS o
LEFT JOIN airports AS a ON o.dest = a.faa
WHERE year = 2013
AND origin = 'BDL'
AND dest IN
(SELECT faa
FROM airports
WHERE tz < -8)
GROUP BY dest;
```
No results are returned.
As it turns out, Bradley did not have any outgoing flights to Alaska or Hawaii.
However, it did have some flights to and from airports in the Pacific Time Zone.
```
SELECT
dest, a.name AS dest_name,
SUM(1) AS N, COUNT(distinct carrier) AS numCarriers
FROM flights AS o
LEFT JOIN airports AS a ON o.origin = a.faa
WHERE year = 2013
AND dest = 'BDL'
AND origin IN
(SELECT faa
FROM airports
WHERE tz < -7)
GROUP BY origin;
```
| dest | dest\_name | N | numCarriers |
| --- | --- | --- | --- |
| BDL | Mc Carran Intl | 262 | 1 |
| BDL | Los Angeles Intl | 127 | 1 |
We could also employ a similar subquery to create an ephemeral table (results not shown).
```
SELECT
dest, a.name AS dest_name,
SUM(1) AS N, COUNT(distinct carrier) AS numCarriers
FROM flights AS o
JOIN (SELECT *
FROM airports
WHERE tz < -7) AS a
ON o.origin = a.faa
WHERE year = 2013 AND dest = 'BDL'
GROUP BY origin;
```
Of course, we could have achieved the same result with a `JOIN` and `WHERE` (results not shown).
```
SELECT
dest, a.name AS dest_name,
SUM(1) AS N, COUNT(distinct carrier) AS numCarriers
FROM flights AS o
LEFT JOIN airports AS a ON o.origin = a.faa
WHERE year = 2013
AND dest = 'BDL'
AND tz < -7
GROUP BY origin;
```
It is important to note that while subqueries are often convenient, they cannot make use of indices.
In most cases it is preferable to write the query using joins as opposed to subqueries.
15\.5 Extended example: FiveThirtyEight flights
-----------------------------------------------
Over at [FiveThirtyEight](http://www.fivethirtyeight.com), [Nate Silver](https://en.wikipedia.org/w/index.php?search=Nate%20Silver) wrote [an article](http://fivethirtyeight.com/features/fastest-airlines-fastest-airports/) about airline delays using the same Bureau of Transportation Statistics data that we have in our database (see the link in the footnote[27](#fn27)).
We can use this article as an exercise in querying our airlines database.
The article makes a number of claims.
We’ll walk through some of these. First, the article states:
> In 2014, the 6 million domestic flights the U.S. government tracked required an extra 80 million minutes to reach their destinations.
> The majority of flights (54%) arrived ahead of schedule in 2014\. (The 80 million minutes figure cited earlier is a net number. It consists of about 115 million minutes of delays minus 35 million minutes saved from early arrivals.)
Although there are a number of claims here, we can verify them with a single query.
Here, we compute the total number of flights, the percentage of those that were on time and ahead of schedule, and the total number of minutes of delays.
```
SELECT
SUM(1) AS numFlights,
SUM(IF(arr_delay < 15, 1, 0)) / SUM(1) AS ontimePct,
SUM(IF(arr_delay < 0, 1, 0)) / SUM(1) AS earlyPct,
SUM(arr_delay) / 1e6 AS netMinLate,
SUM(IF(arr_delay > 0, arr_delay, 0)) / 1e6 AS minLate,
SUM(IF(arr_delay < 0, arr_delay, 0)) / 1e6 AS minEarly
FROM flights AS o
WHERE year = 2014
LIMIT 0, 6;
```
| numFlights | ontimePct | earlyPct | netMinLate | minLate | minEarly |
| --- | --- | --- | --- | --- | --- |
| 5819811 | 0\.787 | 0\.542 | 41\.6 | 77\.6 | \-36 |
We see the right number of flights (about 6 million), and the percentage of flights that were early (about 54%) is also about right.
The total number of minutes early (about 36 million) is also about right.
However, the total number of minutes late is way off (about 78 million vs. 115 million), and as a consequence, so is the net number of minutes late (about 42 million vs. 80 million).
In this case, you have to read the fine print.
A description of the [methodology](http://fivethirtyeight.com/features/how-we-found-the-fastest-flights/) used in this analysis contains some information about the *estimates*[28](#fn28) of the arrival delay for cancelled flights.
The problem is that cancelled flights have an `arr_delay` value of 0, yet in the real\-world experience of travelers, the practical delay is much longer.
The FiveThirtyEight data scientists concocted an estimate of the actual delay experienced by travelers due to cancelled flights.
> A quick\-and\-dirty answer is that cancelled flights are associated with a delay of four or five hours, on average. However, the calculation varies based on the particular circumstances of each flight.
Unfortunately, reproducing the estimates made by FiveThirtyEight is likely impossible, and certainly beyond the scope of what we can accomplish here.
Since we only care about the aggregate number of minutes, we can amend our computation to add, say, 270 minutes of delay time for each cancelled flight.
```
SELECT
SUM(1) AS numFlights,
SUM(IF(arr_delay < 15, 1, 0)) / SUM(1) AS ontimePct,
SUM(IF(arr_delay < 0, 1, 0)) / SUM(1) AS earlyPct,
SUM(IF(cancelled = 1, 270, arr_delay)) / 1e6 AS netMinLate,
SUM(
IF(cancelled = 1, 270, IF(arr_delay > 0, arr_delay, 0))
) / 1e6 AS minLate,
SUM(IF(arr_delay < 0, arr_delay, 0)) / 1e6 AS minEarly
FROM flights AS o
WHERE year = 2014
LIMIT 0, 6;
```
| numFlights | ontimePct | earlyPct | netMinLate | minLate | minEarly |
| --- | --- | --- | --- | --- | --- |
| 5819811 | 0\.787 | 0\.542 | 75\.9 | 112 | \-36 |
This again puts us in the neighborhood of the estimates from the article.
One has to read the fine print to properly vet these estimates.
The problem is not that the estimates reported by Silver are inaccurate—on the contrary, they seem plausible and are certainly better than not correcting for cancelled flights at all.
However, it is not immediately clear from reading the article (you have to read the separate methodology article) that these estimates—which account for roughly 25% of the total minutes late reported—are in fact estimates and not hard data.
Later in the article, Silver presents a figure that breaks down the percentage of flights that were on time, had a delay of 15 to 119 minutes, or were delayed longer than 2 hours.
We can pull the data for this figure with the following query.
Here, in order to plot these results, we need to actually bring them back into **R**.
To do this, we will use the functionality provided by the **knitr** package (see Section [F.4\.3](ch-db-setup.html#sec:connect-r-sql) for more information about connecting to a MySQL server from within **R**).
The results of this query will be saved to an **R** data frame called `res`.
```
SELECT o.carrier, c.name,
SUM(1) AS numFlights,
SUM(IF(arr_delay > 15 AND arr_delay <= 119, 1, 0)) AS shortDelay,
SUM(
IF(arr_delay >= 120 OR cancelled = 1 OR diverted = 1, 1, 0)
) AS longDelay
FROM
flights AS o
LEFT JOIN
carriers c ON o.carrier = c.carrier
WHERE year = 2014
GROUP BY carrier
ORDER BY shortDelay DESC
```
Reproducing the figure requires a little bit of work.
We begin by pruning less informative labels from the carriers.
```
res <- res %>%
as_tibble() %>%
mutate(
name = str_remove_all(name, "Air(lines|ways| Lines)"),
name = str_remove_all(name, "(Inc\\.|Co\\.|Corporation)"),
name = str_remove_all(name, "\\(.*\\)"),
name = str_remove_all(name, " *$")
)
res %>%
pull(name)
```
```
[1] "Southwest" "ExpressJet" "SkyWest" "Delta"
[5] "American" "United" "Envoy Air" "US"
[9] "JetBlue" "Frontier" "Alaska" "AirTran"
[13] "Virgin America" "Hawaiian"
```
Next, note that FiveThirtyEight has considered airline mergers and regional carriers that are not captured in our data.
Specifically: “We classify all remaining [*AirTran*](https://en.wikipedia.org/w/index.php?search=AirTran) flights as [*Southwest*](https://en.wikipedia.org/w/index.php?search=Southwest) flights.” [*Envoy Air*](https://en.wikipedia.org/w/index.php?search=Envoy%20Air) serves [*American Airlines*](https://en.wikipedia.org/w/index.php?search=American%20Airlines).
However, there is a bewildering network of alliances among the other regional carriers.
Greatly complicating matters, [*ExpressJet*](https://en.wikipedia.org/w/index.php?search=ExpressJet) and [*SkyWest*](https://en.wikipedia.org/w/index.php?search=SkyWest) serve multiple national carriers (primarily United, American, and Delta) under different flight numbers. FiveThirtyEight provides [a footnote](http://fivethirtyeight.com/features/how-we-found-the-fastest-flights/#fn-5) detailing how they have assigned flights carried by these regional carriers, but we have chosen to ignore that here and include ExpressJet and SkyWest as independent carriers.
Thus, the data that we show in Figure [15\.1](ch-sql.html#fig:ft8-plot) does not match the figure from FiveThirtyEight.
```
carriers_2014 <- res %>%
mutate(
groupName = case_when(
name %in% c("Envoy Air", "American Eagle") ~ "American",
name == "AirTran" ~ "Southwest",
TRUE ~ name
)
) %>%
group_by(groupName) %>%
summarize(
numFlights = sum(numFlights),
wShortDelay = sum(shortDelay),
wLongDelay = sum(longDelay)
) %>%
mutate(
wShortDelayPct = wShortDelay / numFlights,
wLongDelayPct = wLongDelay / numFlights,
delayed = wShortDelayPct + wLongDelayPct,
ontime = 1 - delayed
)
carriers_2014
```
```
# A tibble: 12 × 8
groupName numFlights wShortDelay wLongDelay wShortDelayPct wLongDelayPct
<chr> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Alaska 160257 18366 2613 0.115 0.0163
2 American 930398 191071 53641 0.205 0.0577
3 Delta 800375 105194 19818 0.131 0.0248
4 ExpressJet 686021 136207 59663 0.199 0.0870
5 Frontier 85474 18410 2959 0.215 0.0346
6 Hawaiian 74732 5098 514 0.0682 0.00688
7 JetBlue 249693 46618 12789 0.187 0.0512
8 SkyWest 613030 107192 33114 0.175 0.0540
9 Southwest 1254128 275155 44907 0.219 0.0358
10 United 493528 93721 20923 0.190 0.0424
11 US 414665 64505 12328 0.156 0.0297
12 Virgin Am… 57510 8356 1976 0.145 0.0344
# … with 2 more variables: delayed <dbl>, ontime <dbl>
```
After tidying this data frame using the `pivot_longer()` function (see Chapter [6](ch-dataII.html#ch:dataII)), we can draw the figure as a stacked bar chart.
```
carriers_tidy <- carriers_2014 %>%
select(groupName, wShortDelayPct, wLongDelayPct, delayed) %>%
pivot_longer(
-c(groupName, delayed),
names_to = "delay_type",
values_to = "pct"
)
delay_chart <- ggplot(
data = carriers_tidy,
aes(x = reorder(groupName, pct, max), y = pct)
) +
geom_col(aes(fill = delay_type)) +
scale_fill_manual(
name = NULL,
values = c("red", "gold"),
labels = c(
"Flights Delayed 120+ Minutes\ncancelled or Diverted",
"Flights Delayed 15-119 Minutes"
)
) +
scale_y_continuous(limits = c(0, 1)) +
coord_flip() +
labs(
title = "Southwest's Delays Are Short; United's Are Long",
subtitle = "As share of scheduled flights, 2014"
) +
ylab(NULL) +
xlab(NULL) +
ggthemes::theme_fivethirtyeight() +
theme(
plot.title = element_text(hjust = 1),
plot.subtitle = element_text(hjust = -0.2)
)
```
Getting the right text labels in the right places to mimic the display requires additional wrangling.
We show our best effort in Figure [15\.1](ch-sql.html#fig:ft8-plot).
In fact, by comparing the two figures, it becomes clear that many of the long delays suffered by United and American passengers occur on flights operated by ExpressJet and SkyWest.
```
delay_chart +
geom_text(
data = filter(carriers_tidy, delay_type == "wShortDelayPct"),
aes(label = paste0(round(pct * 100, 1), "% ")),
hjust = "right",
size = 2
) +
geom_text(
data = filter(carriers_tidy, delay_type == "wLongDelayPct"),
aes(y = delayed - pct, label = paste0(round(pct * 100, 1), "% ")),
hjust = "left",
nudge_y = 0.01,
size = 2
)
```
Figure 15\.1: Recreation of the FiveThirtyEight plot on flight delays.
The rest of the analysis is predicated on FiveThirtyEight’s definition of *target time*, which is different from the scheduled time in the database.
To compute it would take us far astray.
In [another graphic](https://espnfivethirtyeight.files.wordpress.com/2015/03/silver-feature-fastflight-7.png?w=575&h=752) in the article, FiveThirtyEight reports the slowest and fastest airports among the 30 largest airports.
Using arrival delay time instead of the FiveThirtyEight\-defined target time, we can produce a similar table by joining the results of two queries together.
```
SELECT
dest,
SUM(1) AS numFlights,
AVG(arr_delay) AS avgArrivalDelay
FROM
flights AS o
WHERE year = 2014
GROUP BY dest
ORDER BY numFlights DESC
LIMIT 0, 30
```
```
SELECT
origin,
SUM(1) AS numFlights,
AVG(arr_delay) AS avgDepartDelay
FROM
flights AS o
WHERE year = 2014
GROUP BY origin
ORDER BY numFlights DESC
LIMIT 0, 30
```
```
dests %>%
left_join(origins, by = c("dest" = "origin")) %>%
select(dest, avgDepartDelay, avgArrivalDelay) %>%
arrange(desc(avgDepartDelay)) %>%
as_tibble()
```
```
# A tibble: 30 × 3
dest avgDepartDelay avgArrivalDelay
<chr> <dbl> <dbl>
1 ORD 14.3 13.1
2 MDW 12.8 7.40
3 DEN 11.3 7.60
4 IAD 11.3 7.45
5 HOU 11.3 8.07
6 DFW 10.7 9.00
7 BWI 10.2 6.04
8 BNA 9.47 8.94
9 EWR 8.70 9.61
10 IAH 8.41 6.75
# … with 20 more rows
```
Finally, FiveThirtyEight produces [a simple table](https://espnfivethirtyeight.files.wordpress.com/2015/03/silver-feature-fastflight-81.png?w=575&h=440) ranking the airlines by the amount of time added versus *typical*—another of their creations—and target time.
What we can do instead is compute a similar table for the average arrival delay time by carrier, *after controlling for the routes*.
First, we compute the average arrival delay time for each route.
```
SELECT
origin, dest,
SUM(1) AS numFlights,
AVG(arr_delay) AS avgDelay
FROM
flights AS o
WHERE year = 2014
GROUP BY origin, dest
```
```
head(routes)
```
```
origin dest numFlights avgDelay
1 ABE ATL 829 5.43
2 ABE DTW 665 3.23
3 ABE ORD 144 19.51
4 ABI DFW 2832 10.70
5 ABQ ATL 893 1.92
6 ABQ BWI 559 6.60
```
Next, we perform the same calculation, but this time, we add `carrier` to the `GROUP BY` clause.
```
SELECT
origin, dest,
o.carrier, c.name,
SUM(1) AS numFlights,
AVG(arr_delay) AS avgDelay
FROM
flights AS o
LEFT JOIN
carriers c ON o.carrier = c.carrier
WHERE year = 2014
GROUP BY origin, dest, o.carrier
```
Next, we merge these two data sets, matching the routes traveled by each carrier with the route averages across all carriers.
```
routes_aug <- routes_carriers %>%
left_join(routes, by = c("origin" = "origin", "dest" = "dest")) %>%
as_tibble()
head(routes_aug)
```
```
# A tibble: 6 × 8
origin dest carrier name numFlights.x avgDelay.x numFlights.y avgDelay.y
<chr> <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 ABE ATL DL Delt… 186 1.67 829 5.43
2 ABE ATL EV Expr… 643 6.52 829 5.43
3 ABE DTW EV Expr… 665 3.23 665 3.23
4 ABE ORD EV Expr… 144 19.5 144 19.5
5 ABI DFW EV Expr… 219 7 2832 10.7
6 ABI DFW MQ Envo… 2613 11.0 2832 10.7
```
Note that `routes_aug` contains both the average arrival delay time for each carrier on each route that it flies (`avgDelay.x`) as well as the average arrival delay time for each route across all carriers (`avgDelay.y`).
We can then compute the difference between these times, and aggregate the weighted average for each carrier.
```
routes_aug %>%
group_by(carrier) %>%
# use str_remove_all() to remove parentheses
summarize(
carrier_name = str_remove_all(first(name), "\\(.*\\)"),
numRoutes = n(),
numFlights = sum(numFlights.x),
wAvgDelay = sum(
numFlights.x * (avgDelay.x - avgDelay.y),
na.rm = TRUE
) / sum(numFlights.x)
) %>%
arrange(wAvgDelay)
```
```
# A tibble: 14 × 5
carrier carrier_name numRoutes numFlights wAvgDelay
<chr> <chr> <int> <dbl> <dbl>
1 VX Virgin America 72 57510 -2.69
2 FL AirTran Airways Corporation 170 79495 -1.55
3 AS Alaska Airlines Inc. 242 160257 -1.44
4 US US Airways Inc. 378 414665 -1.31
5 DL Delta Air Lines Inc. 900 800375 -1.01
6 UA United Air Lines Inc. 621 493528 -0.982
7 MQ Envoy Air 442 392701 -0.455
8 AA American Airlines Inc. 390 537697 -0.0340
9 HA Hawaiian Airlines Inc. 56 74732 0.272
10 OO SkyWest Airlines Inc. 1250 613030 0.358
11 B6 JetBlue Airways 316 249693 0.767
12 EV ExpressJet Airlines Inc. 1534 686021 0.845
13 WN Southwest Airlines Co. 1284 1174633 1.13
14 F9 Frontier Airlines Inc. 326 85474 2.29
```
15\.6 SQL vs. **R**
-------------------
This chapter contains an introduction to the database querying language SQL.
However, along the way we have highlighted the similarities and differences between the way certain things are done in **R** versus how they are done in SQL.
The rapid development of **dplyr** has brought fusion to the most common data management operations shared by **R** and SQL, while at the same time shielding the user from concerns about where certain operations are being performed. Nevertheless, it is important for a practicing data scientist to understand the relative strengths and weaknesses of each of their tools.
While the process of slicing and dicing data can generally be performed in either **R** or SQL, we have already seen tasks for which one is more appropriate (e.g., faster, simpler, or more logically structured) than the other. **R** is a statistical computing environment that is developed for the purpose of data analysis.
If the data are small enough to be read into memory, then **R** puts a vast array of data analysis functions at your fingertips.
However, if the data are large enough to be problematic in memory, then SQL provides a robust, parallelizable, and scalable solution for data storage and retrieval.
The SQL query language, or the **dplyr** interface, enables one to efficiently perform basic data management operations on smaller pieces of the data.
However, there is an upfront cost to creating a well\-designed SQL database.
Moreover, the analytic capabilities of SQL are very limited, offering only a few simple statistical functions (e.g., `AVG()`, `STDDEV()`, etc.—although user\-defined extensions are possible).
Thus, while SQL is usually a more robust solution for *data management*, it is a poor substitute for **R** when it comes to *data analysis*.
15\.7 Further resources
-----------------------
The documentation for [MySQL](https://dev.mysql.com/doc/refman/5.6/en/index.html), [PostgreSQL](http://www.postgresql.org/docs/9.4/interactive/index.html), and [SQLite](https://www.sqlite.org/docs.html) are the authoritative sources for complete information on their respective syntaxes.
We have also found Kline et al. (2008\) to be a useful reference.
15\.8 Exercises
---------------
**Problem 1 (Easy)**: How many rows are available in the `Measurements` table of the Smith College Wideband Auditory Immittance database?
```
library(RMySQL)
con <- dbConnect(
MySQL(), host = "scidb.smith.edu",
user = "waiuser", password = "smith_waiDB",
dbname = "wai"
)
Measurements <- tbl(con, "Measurements")
```
**Problem 2 (Easy)**: Identify what years of data are available in the `flights` table of the `airlines` database.
```
library(tidyverse)
library(mdsr)
library(RMySQL)
con <- dbConnect_scidb("airlines")
```
**Problem 3 (Easy)**: Use the `dbConnect_scidb` function to connect to the `airlines` database to answer the following problem.
How many domestic flights flew into Dallas\-Fort Worth (DFW) on May 14, 2010?
**Problem 4 (Easy)**: Wideband acoustic immittance (WAI) is an area of biomedical research that strives to develop WAI measurements as noninvasive auditory diagnostic tools. WAI measurements are reported in many related formats, including absorbance, admittance, impedance, power reflectance, and pressure reflectance. More information can be found about this public facing WAI database at [http://www.science.smith.edu/wai\-database/home/about](http://www.science.smith.edu/wai-database/home/about).
```
library(RMySQL)
db <- dbConnect(
MySQL(),
user = "waiuser",
password = "smith_waiDB",
host = "scidb.smith.edu",
dbname = "wai"
)
```
1. How many female subjects are there in total across all studies?
2. Find the average absorbance for participants for each study, ordered by highest to lowest value.
3. Write a query to count all the measurements with a calculated absorbance of less than 0\.
**Problem 5 (Medium)**: Use the `dbConnect_scidb` function to connect to the `airlines` database to answer the following problem.
Of all the destinations from Chicago O’Hare (ORD), which were the most common in 2010?
**Problem 6 (Medium)**: Use the `dbConnect_scidb` function to connect to the `airlines` database to answer the following problem.
Which airport had the highest average arrival delay time in 2010?
**Problem 7 (Medium)**:
Use the `dbConnect_scidb` function to connect to the `airlines` database to answer the following problem.
How many domestic flights came into or flew out of Bradley Airport (BDL) in 2012?
**Problem 8 (Medium)**: Use the `dbConnect_scidb` function to connect to the `airlines` database to answer the following problem.
List the airline and flight number for all flights between LAX and JFK on September 26th, 1990\.
**Problem 9 (Medium)**: The following questions require use of the `Lahman` package and reference basic baseball terminology. (See <https://en.wikipedia.org/wiki/Baseball_statistics> for explanations of any acronyms.)
1. List the names of all batters who have at least 300 home runs (HR) and 300 stolen bases (SB) in their careers and rank them by career batting average (\\(H/AB\\)).
2. List the names of all pitchers who have at least 300 wins (W) and 3,000 strikeouts (SO) in their careers and rank them by career winning percentage (\\(W/(W\+L)\\)).
3. The attainment of either 500 home runs (HR) or 3,000 hits (H) in a career is considered to be among the greatest achievements to which a batter can aspire. These milestones are thought to guarantee induction into the Baseball Hall of Fame, and yet several players who have attained either milestone have not been inducted into the Hall of Fame. Identify them.
**Problem 10 (Medium)**: Use the `dbConnect_scidb` function to connect to the `airlines` database to answer the following problem.
Find all flights between `JFK` and `SFO` in 1994\. How many were canceled? What percentage of the total number of flights were canceled?
**Problem 11 (Hard)**: The following open\-ended question may require more than one query and a thoughtful response.
Based on data from 2012 only, and assuming that transportation to the airport is not an issue, would you rather fly out of JFK, LaGuardia (LGA), or Newark (EWR)? Why or why not?
Use the `dbConnect_scidb` function to connect to the `airlines` database.
15\.9 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/sql\-I.html\#sqlI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/sql-I.html#sqlI-online-exercises)
**Problem 1 (Easy)**: What years of data are available in the `mdsr` package `imdb` database `title` table?
(Hint: create a connection with a call to `dbConnect_scidb("imdb")`.)
Chapter 16 Database administration
==================================
In Chapter [15](ch-sql.html#ch:sql), we learned how to write `SELECT` queries to retrieve data from an existing SQL server. Of course, these queries depend on that server being configured, and the proper data loaded into it. In this chapter, we provide the tools necessary to set up a new database and populate it. Furthermore, we present concepts that will help you construct efficient databases that enable faster query performance. While the treatment herein is not sufficient to make you a seasoned database administrator, it should be enough to allow you to start experimenting with SQL databases on your own.
As in Chapter [15](ch-sql.html#ch:sql), the code that you see in this chapter illustrates exchanges between a MySQL server and a client. In places where **R** is involved, we will make that explicit. We assume that you are able to log in to a MySQL server. (See Appendix [F](ch-db-setup.html#ch:db-setup) for instructions on how to install, configure, and log in to an SQL server.)
16\.1 Constructing efficient SQL databases
------------------------------------------
While it is often helpful to think about SQL tables as being analogous to `data.frame`s in **R**, there are some important differences. In **R**, a `data.frame` is a `list` of vectors that have the same length. Each of those vectors has a specific data type (e.g., integers, character strings, etc.), but those data types can vary across the columns. The same is true of tables in SQL, but there are additional constraints that we can impose on SQL tables that can improve both the logical integrity of our data, as well as the performance we can achieve when searching it.
### 16\.1\.1 Creating new databases
Once you have logged into MySQL, you can see what databases are available to you by running the `SHOW DATABASES` command at the `mysql>` prompt.
```
SHOW DATABASES;
```
| Database |
| --- |
| information\_schema |
| airlines |
| fec |
| imdb |
| lahman |
| nyctaxi |
In this case, the output indicates that the `airlines` database already exists.
If it didn’t, we could create it using the `CREATE DATABASE` command.
```
CREATE DATABASE airlines;
```
Since we will continue to work with the `airlines` database, we can save ourselves some typing by utilizing the `USE` command to make that connection explicit.
```
USE airlines;
```
Now that we are confined to the `airlines` database, there is no ambiguity in asking what tables are present.
```
SHOW TABLES;
```
| Tables\_in\_airlines |
| --- |
| airports |
| carriers |
| flights |
| planes |
### 16\.1\.2 `CREATE TABLE`
Recall that in Chapter [15](ch-sql.html#ch:sql) we used the `DESCRIBE` statement to display the definition of each table. This lists each field, its data type, whether there are keys or indices defined on it, and whether `NULL` values are allowed. For example, the `airports` table has the following definition.
```
DESCRIBE airports;
```
| Field | Type | Null | Key | Default | Extra |
| --- | --- | --- | --- | --- | --- |
| faa | varchar(3\) | NO | PRI | | |
| name | varchar(255\) | YES | | NA | |
| lat | decimal(10,7\) | YES | | NA | |
| lon | decimal(10,7\) | YES | | NA | |
| alt | int(11\) | YES | | NA | |
| tz | smallint(4\) | YES | | NA | |
| dst | char(1\) | YES | | NA | |
| city | varchar(255\) | YES | | NA | |
| country | varchar(255\) | YES | | NA | |
We can see from the output that the `faa`, `name`, `city`, and `country` fields are defined as `varchar` (or variable character) fields. These fields contain character strings, but the length of the strings allowed varies. We know that the `faa` code is restricted to three characters, and so we have codified that in the table definition. The `dst` field contains only a single character, indicating whether daylight saving time is observed at each airport. The `lat` and `lon` fields contain geographic coordinates, which can be three\-digit numbers (i.e., the maximum value is 180\) with up to seven decimal places. The `tz` field can be up to a four\-digit integer, while the `alt` field is allowed eleven digits.
In this case, `NULL` values are allowed—and are the default—in all of the fields except for `faa`, which is the primary key.
**R** translates the null value in SQL (`NULL`) to the missing value in **R** (`NA`).
These definitions did not come out of thin air, nor were they automatically generated. In this case, we wrote them by hand, in the following `CREATE TABLE` statement:
```
SHOW CREATE TABLE airports;
```
```
CREATE TABLE `airports` (
`faa` varchar(3) NOT NULL DEFAULT '',
`name` varchar(255) DEFAULT NULL,
`lat` decimal(10,7) DEFAULT NULL,
`lon` decimal(10,7) DEFAULT NULL,
`alt` int(11) DEFAULT NULL,
`tz` smallint(4) DEFAULT NULL,
`dst` char(1) DEFAULT NULL,
`city` varchar(255) DEFAULT NULL,
`country` varchar(255) DEFAULT NULL,
PRIMARY KEY (`faa`)
)
```
The `CREATE TABLE` command starts by defining the name of the table, and then proceeds to list the field definitions in a comma\-separated list. If you want to build a database from scratch—as we do in Section [16\.3](ch-sql2.html#sec:toy-db)—you will have to write these definitions for each table.[29](#fn29) Tables that are already created can be modified using the `ALTER TABLE` command. For example, the following will change the `tz` field to two digits and change the default value to zero.
```
ALTER TABLE airports CHANGE tz tz smallint(2) DEFAULT 0;
```
### 16\.1\.3 Keys
Two related but different concepts are [*keys*](https://en.wikipedia.org/w/index.php?search=keys) and [*indices*](https://en.wikipedia.org/w/index.php?search=indices). The former offers some performance advantages but is primarily useful for imposing constraints on possible entries in the database, while the latter is purely about improving the speed of retrieval.
Different relational database management systems (RDBMS) may implement a variety of different kinds of keys, but three types are most common.
In each case, suppose that we have a table with \\(n\\) rows and \\(p\\) columns.
* `PRIMARY KEY`: a column or set of columns in a table that uniquely identifies each row. By convention, this column is often called `id`. A table can have at most one [*primary key*](https://en.wikipedia.org/w/index.php?search=primary%20key), and in general it is considered good practice to define a primary key on every table (although there are exceptions to this rule). If the index spans \\(k \< p\\) columns, then even though the primary key must by definition have \\(n\\) rows itself, it only requires \\(nk\\) pieces of data, rather than the \\(np\\) that the full table occupies. Thus, the primary key is always smaller than the table itself, and is thus faster to search. A second critically important role of the primary key is enforcement of non\-duplication. If you try to insert a row into a table that would result in a duplicate entry for the primary key, you will get an error.
* `UNIQUE KEY`: a column or set of columns in a table that uniquely identifies each row, except for rows that contain `NULL` in some of those attributes. Unlike primary keys, a single table may have many unique keys. A typical use for these is in a lookup table. For example, [Ted Turocy](https://en.wikipedia.org/w/index.php?search=Ted%20Turocy) maintains a [register](http://chadwick-bureau.com/the-register/) of player `id`s for professional baseball players across multiple data providers. Each row in this table is a different player, and the primary key is a randomly\-generated hash—each player gets exactly one value. However, each row also contains that same player’s `id` in systems designed by [*MLBAM*](https://en.wikipedia.org/w/index.php?search=MLBAM), [*Baseball\-Reference*](https://en.wikipedia.org/w/index.php?search=Baseball-Reference), [*Baseball Prospectus*](https://en.wikipedia.org/w/index.php?search=Baseball%20Prospectus), [*Fangraphs*](https://en.wikipedia.org/w/index.php?search=Fangraphs), etc. This is tremendously useful for researchers working with multiple data providers, since they can easily link a player’s statistics in one system to his information in another. However, this ability is predicated on the *uniqueness* of each player’s id in *each* system. Moreover, many players may not have an `id` in every system, since some data providers track minor league baseball, or even the Japanese and Korean professional leagues. Thus, the imposition of a unique key—which allows `NULL`s—is necessary to maintain the integrity of these data.
* `FOREIGN KEY`: a column or set of columns that reference a primary key in another table. For example, the primary key in the `carriers` table is `carrier`. The `carrier` column in the `flights` table, which consists of carrier IDs, is a [*foreign key*](https://en.wikipedia.org/w/index.php?search=foreign%20key) that references `carriers.carrier`. Foreign keys don’t offer any performance enhancements, but they are important for maintaining [*referential integrity*](https://en.wikipedia.org/w/index.php?search=referential%20integrity), especially in transactional databases that have many insertions and deletions.
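To make these three key types concrete, here is a minimal sketch of how they might be declared in a pair of made-up MySQL tables. The `teams` and `players` tables (and all of their columns) are purely illustrative and are not part of the `airlines` database.

```
-- hypothetical lookup table: one row per team, with a surrogate primary key
CREATE TABLE teams (
  id INT NOT NULL AUTO_INCREMENT,
  abbrev CHAR(3),
  PRIMARY KEY (id),
  UNIQUE KEY (abbrev)
);

-- hypothetical table whose team_id column references teams.id
CREATE TABLE players (
  id INT NOT NULL AUTO_INCREMENT,
  name VARCHAR(255) NOT NULL,
  team_id INT,
  PRIMARY KEY (id),
  FOREIGN KEY (team_id) REFERENCES teams (id)
);
```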
You can use the `SHOW KEYS` command to identify the keys in a table. Note that the `carriers` table has only one key defined: a primary key on `carrier`.
```
SHOW KEYS FROM carriers;
```
| Table | Non\_unique | Key\_name | Seq\_in\_index | Column\_name | Cardinality |
| --- | --- | --- | --- | --- | --- |
| carriers | 0 | PRIMARY | 1 | carrier | 1610 |
### 16\.1\.4 Indices
While keys help maintain the integrity of the data, indices impose no constraints—they simply enable faster retrieval. An index is a lookup table that helps SQL keep track of which records contain certain values. Judicious use of indices can dramatically speed up retrieval times.
The technical implementation of efficient indices is an active area of research among computer scientists, and fast indices are one of the primary advantages that differentiate SQL tables from large **R** data frames.
Indices have to be built by the database in advance, and they are then written to the disk. Thus, indices take up space on the disk (this is one of the reasons that they aren’t implemented in **R**). For some tables with many indices, the size of the indices can even exceed the size of the raw data. Thus, when building indices, there is a trade\-off to consider: You want just enough indices but not too many.
Consider the task of locating all of the rows in the `flights` table that contain the `origin` value `BDL`. These rows are strewn about the table in no particular order. How would you find them? A simple approach would be to start with the first row, examine the `origin` field, grab it if it contains `BDL`, and otherwise move to the second row. In order to ensure that all of the matching rows are returned, this algorithm must check every single one of the \\(n\=\\)48 million rows[30](#fn30) in this table! So its speed is \\(O(n)\\). However, we have built an index on the `origin` column, and this index contains only 2,266 rows (see Table [16\.1](ch-sql2.html#tab:show-indexes)). Each row in the index corresponds to exactly one value of `origin`, and contains a lookup for the exact rows in the table that are specific to that value. Thus, when we ask for the rows for which `origin` is equal to `BDL`, the database will use the index to deliver those rows very quickly. In practice, the retrieval speed for indexed columns can be \\(O(\\ln{n})\\) (or better)—which is a tremendous advantage when \\(n\\) is large.
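If the index on `origin` had not already been built for us, it could be added with an `ALTER TABLE ... ADD INDEX` statement. The following is a sketch only; running it against the distributed `airlines` database is unnecessary because the `Origin` index already exists there.

```
-- sketch: build a single-column index on origin
ALTER TABLE flights ADD INDEX Origin (origin);

-- a query that filters on origin can then use the index instead of a table scan
SELECT COUNT(*) FROM flights WHERE origin = 'BDL';
```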
The speed\-up that indices can provide is often especially apparent when joining two large tables. To see why, consider the following toy example. Suppose we want to merge two tables on the columns whose values are listed below. To merge these records correctly, we have to do a lot of work to find the appropriate value in the second list that matches each value in the first list.
```
[1] 5 18 2 3 4 2 1
```
```
[1] 5 6 3 18 4 7 1 2
```
On the other hand, consider performing the same task on the same set of values, but having the values sorted ahead of time. Now, the merging task is very fast, because we can quickly locate the matching records. In effect, by keeping the records sorted, we have off\-loaded the sorting task when we do a merge, resulting in much faster merging performance. However, this requires that we sort the records in the first place and then keep them sorted. This may slow down other operations—such as inserting new records—which now have to be done more carefully.
```
[1] 1 2 2 3 4 5 18
```
```
[1] 1 2 3 4 5 6 7 18
```
```
SHOW INDEXES FROM flights;
```
Table 16\.1: Indices in the flights table.
| Table | Non\_unique | Key\_name | Seq\_in\_index | Column\_name | Cardinality |
| --- | --- | --- | --- | --- | --- |
| flights | 1 | Year | 1 | year | 7 |
| flights | 1 | Date | 1 | year | 7 |
| flights | 1 | Date | 2 | month | 89 |
| flights | 1 | Date | 3 | day | 2712 |
| flights | 1 | Origin | 1 | origin | 2267 |
| flights | 1 | Dest | 1 | dest | 2267 |
| flights | 1 | Carrier | 1 | carrier | 134 |
| flights | 1 | tailNum | 1 | tailnum | 37862 |
In MySQL the `SHOW INDEXES` command is equivalent to `SHOW KEYS`. Note that the `flights` table has several keys defined, but no primary key (see Table [16\.1](ch-sql2.html#tab:show-indexes)).
The key `Date` spans the three columns `year`, `month`, and `day`.
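For reference, a composite key like `Date` could be declared with a statement along these lines. This is a sketch of the syntax rather than something you need to run, since the index already exists in the `airlines` database.

```
-- sketch: build a composite index spanning year, month, and day
ALTER TABLE flights ADD INDEX Date (year, month, day);
```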
### 16\.1\.5 Query plans
It is important to have the right indices built for your specific data and the queries that are likely to be run on it. Unfortunately, there is not always a straightforward answer to the question of which indices to build.
For the `flights` table, it seems likely that many queries will involve searching for flights from a particular origin, or to a particular destination, or during a particular year (or range of years), or on a specific carrier, and so we have built indices on each of these columns.
We have also built the `Date` index, since it seems likely that people would want to search for flights on a certain date.
However, it does not seem so likely that people would search for flights in a specific month across all years, and thus we have not built an index on `month` alone.
The `Date` index contains the `month` column, but this index can only be used if `year` is also part of the query.
You can ask MySQL for information about how it is going to perform a query using the `EXPLAIN` syntax. This will help you understand how onerous your query is, without actually running it—saving you the time of having to wait for it to execute.
This output reflects the query plan returned by the MySQL server.
If we were to run a query for long flights using the `distance` column, the server would have to inspect each of the 48 million rows, since this column is not indexed.
This is the slowest possible search, and is often called a [*table scan*](https://en.wikipedia.org/w/index.php?search=table%20scan).
The 48 million number that you see in the `rows` column is an estimate of the number of rows that MySQL will have to consult in order to process your query.
In general, more rows mean a slower query.
```
EXPLAIN SELECT * FROM flights WHERE distance > 3000;
```
| table | partitions | type | possible\_keys | key | key\_len | ref | rows | filtered |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| flights | p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19,p20,p21,p22,p23,p24,p25,p26,p27,p28,p29,p30,p31,p32 | ALL | NA | NA | NA | NA | 47932811 | 33\.3 |
On the other hand, if we search for recent flights using the `year` column, which has an index built on it, then we only need to consider a fraction of those rows (about 6\.3 million).
```
EXPLAIN SELECT * FROM flights WHERE year = 2013;
```
| table | partitions | type | possible\_keys | key | key\_len | ref | rows | filtered |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| flights | p27 | ALL | Year,Date | NA | NA | NA | 6369482 | 100 |
Note that for the second example the server could have used either the index `Year` or the index `Date` (which contains the column `year`). Because of the index, only the 6\.3 million flights from 2013 were consulted.
Similarly, if we search by year and month, we can use the `Date` index.
```
EXPLAIN SELECT * FROM flights WHERE year = 2013 AND month = 6;
```
| table | partitions | type | possible\_keys | key | key\_len | ref | rows | filtered |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| flights | p27 | ref | Year,Date | Date | 6 | const,const | 714535 | 100 |
But if we search for months across all years, we can’t!
The query plan results in a table scan again.
```
EXPLAIN SELECT * FROM flights WHERE month = 6;
```
| table | partitions | type | possible\_keys | key | key\_len | ref | rows | filtered |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| flights | p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19,p20,p21,p22,p23,p24,p25,p26,p27,p28,p29,p30,p31,p32 | ALL | NA | NA | NA | NA | 47932811 | 10 |
This is because although `month` is part of the `Date` index, it is the *second* column in the index, and thus it doesn’t help us when we aren’t filtering on `year`. Thus, if it were common for our users to search on `month` without `year`, it would probably be worth building an index on `month`. Were we to actually run these queries, there would be a significant difference in computational time.
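If searches on `month` without `year` did turn out to be common, the missing index could be added and the query plan re-checked. The following is a hypothetical sketch under that assumption; we have not actually added this index to the `airlines` database.

```
-- hypothetical: build an index on month alone
ALTER TABLE flights ADD INDEX Month (month);

-- the same query could then use the Month index rather than scanning the table
EXPLAIN SELECT * FROM flights WHERE month = 6;
```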
Using indices is especially important when performing `JOIN` operations on large tables.
In this example, both queries use indices.
However, because the cardinality of the index on `tailnum` is much larger than the cardinality of the index on `year` (see Table [16\.1](ch-sql2.html#tab:show-indexes)), the number of rows in `flights` associated with each unique value of `tailnum` is much smaller than the number associated with each unique value of `year`.
Thus, the first query runs faster.
```
EXPLAIN
SELECT * FROM planes p
LEFT JOIN flights o ON p.tailnum = o.TailNum
WHERE manufacturer = 'BOEING';
```
| table | partitions | type | possible\_keys | key | key\_len | ref | rows | filtered |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| p | NA | ALL | NA | NA | NA | NA | 3322 | 10 |
| o | p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19,p20,p21,p22,p23,p24,p25,p26,p27,p28,p29,p30,p31,p32 | ref | tailNum | tailNum | 9 | airlines.p.tailnum | 1266 | 100 |
```
EXPLAIN
SELECT * FROM planes p
LEFT JOIN flights o ON p.Year = o.Year
WHERE manufacturer = 'BOEING';
```
| table | partitions | type | possible\_keys | key | key\_len | ref | rows | filtered |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| p | NA | ALL | NA | NA | NA | NA | 3322 | 10 |
| o | p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19,p20,p21,p22,p23,p24,p25,p26,p27,p28,p29,p30,p31,p32 | ref | Year,Date | Year | 3 | airlines.p.year | 6450117 | 100 |
### 16\.1\.6 Partitioning
Another approach to speeding up queries on large tables (like `flights`) is [*partitioning*](https://en.wikipedia.org/w/index.php?search=partitioning). Here, we could create partitions based on the `year`. For `flights` this would instruct the server to physically write the `flights` table as a series of smaller tables, each one specific to a single value of `year`. At the same time, the server would create a logical supertable, so that to the user, the appearance of `flights` would be unchanged. This acts like a preemptive index on the `year` column.
If most of the queries to the `flights` table were for a specific year or range of years, then partitioning could significantly improve performance, since most of the rows would never be consulted. For example, if most of the queries to the `flights` database were for the past three years, then partitioning would reduce the search space of most queries on the full data set to the roughly 20 million flights in the last three years instead of the 169 million rows in the last 20 years. But here again, if most of the queries to the `flights` table were about carriers across years, then this type of partitioning would not help at all. It is the job of the database designer to tailor the database structure to the pattern of queries coming from the users. As a data scientist, this may mean that you have to tailor the database structure to the queries that you are running.
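To make the idea concrete, a year-based partitioning scheme might be declared roughly as follows. This is a sketch of the MySQL `PARTITION BY RANGE` syntax using a simplified, hypothetical table; it is not the actual definition used for the `flights` table.

```
-- sketch: a flights-like table physically split into one partition per year range
CREATE TABLE `flights_partitioned` (
  `year` smallint(4) NOT NULL,
  `month` smallint(2) DEFAULT NULL,
  `day` smallint(2) DEFAULT NULL,
  `origin` char(3) DEFAULT NULL,
  `dest` char(3) DEFAULT NULL
)
PARTITION BY RANGE (`year`) (
  PARTITION p_old VALUES LESS THAN (2011),
  PARTITION p2011 VALUES LESS THAN (2012),
  PARTITION p2012 VALUES LESS THAN (2013),
  PARTITION p_new VALUES LESS THAN MAXVALUE
);
```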
16\.2 Changing SQL data
-----------------------
In Chapter [15](ch-sql.html#ch:sql), we described how to query an SQL database using the `SELECT` command. Thus far in this chapter, we have discussed how to set up an SQL database, and how to optimize it for speed. None of these operations actually change data in an existing database. In this section, we will briefly touch upon the `UPDATE` and `INSERT` commands, which allow you to do exactly that.
### 16\.2\.1 Changing data
The `UPDATE` command allows you to reset values in a table across all rows that match a certain criteria. For example, in Chapter [15](ch-sql.html#ch:sql) we discussed the possibility that airports could change names over time. The airport in [*Washington, D.C.*](https://en.wikipedia.org/w/index.php?search=Washington,%20D.C.) with code `DCA` is now called [*Ronald Reagan Washington National*](https://en.wikipedia.org/w/index.php?search=Ronald%20Reagan%20Washington%20National).
```
SELECT faa, name FROM airports WHERE faa = 'DCA';
```
| faa | name |
| --- | --- |
| DCA | Ronald Reagan Washington Natl |
However, the “[Ronald Reagan](https://en.wikipedia.org/w/index.php?search=Ronald%20Reagan)” prefix was added in 1998\. If—for whatever reason—we wanted to go back to the old name, we could use an `UPDATE` command to change that information in the `airports` table.
```
UPDATE airports
SET name = 'Washington National'
WHERE faa = 'DCA';
```
An `UPDATE` operation can be very useful when you have to apply wholesale changes over a large number of rows. However, extreme caution is necessary, since an imprecise `UPDATE` query can wipe out large quantities of data, and there is no “undo” operation!
Exercise extraordinary caution when performing `UPDATE`s.
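One way to protect yourself is to preview the rows that the `WHERE` clause will match with a `SELECT` before running the `UPDATE`, and, if your table uses a transactional storage engine such as InnoDB, to wrap the change in a transaction that can be rolled back. A sketch of that workflow:

```
-- preview exactly which rows the WHERE clause will match
SELECT faa, name FROM airports WHERE faa = 'DCA';

-- on a transactional engine (e.g., InnoDB), the change can be rolled back
START TRANSACTION;
UPDATE airports SET name = 'Washington National' WHERE faa = 'DCA';
-- inspect the result, then either COMMIT or ROLLBACK
ROLLBACK;
```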
### 16\.2\.2 Adding data
New data can be appended to an existing table with the `INSERT` commands. There are actually three things that can happen, depending on what you want to do when you have a primary key conflict. This occurs when one of the new rows that you are trying to insert has the same primary key value as one of the existing rows in the table.
* `INSERT`: Try to insert the new rows. If there is a primary key conflict, quit and throw an error.
* `INSERT IGNORE`: Try to insert the new rows. If there is a primary key conflict, skip inserting the conflicting rows and leave the existing rows untouched. Continue inserting data that does not conflict.
* `REPLACE`: Try to insert the new rows. If there is a primary key conflict, overwrite the existing rows with the new ones. Continue inserting data that does not conflict.
Recall that in Chapter [15](ch-sql.html#ch:sql) we found that the airports in [*Puerto Rico*](https://en.wikipedia.org/w/index.php?search=Puerto%20Rico) were not present in the `airports` table. If we wanted to add these manually, we could use `INSERT`.
```
INSERT INTO airports (faa, name)
VALUES ('SJU', 'Luis Munoz Marin International Airport');
```
Since `faa` is the primary key on this table, we can insert this row without contributing values for all of the other fields. In this case, the new row corresponding to `SJU` would have the `faa` and `name` fields as noted above, and the default values for all of the other fields. If we were to run this operation a second time, we would get an error, because of the primary key collision on `SJU`. We could avoid the error by choosing to `INSERT IGNORE` or `REPLACE` instead of `INSERT`.
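For completeness, here is a sketch of what those other two variants would look like for the same row. `INSERT IGNORE` leaves the existing `SJU` row untouched, while `REPLACE` deletes it and writes the new one in its place.

```
-- skip the new row silently if SJU already exists
INSERT IGNORE INTO airports (faa, name)
  VALUES ('SJU', 'Luis Munoz Marin International Airport');

-- overwrite the existing SJU row (unspecified fields revert to their defaults)
REPLACE INTO airports (faa, name)
  VALUES ('SJU', 'Luis Munoz Marin International Airport');
```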
### 16\.2\.3 Importing data from a file
In practice, we rarely add new data manually in this manner. Instead, new data are most often added using the `LOAD DATA` command. This allows a file containing new data—usually a CSV—to be inserted in bulk. This is very common, when, for example, your data comes to you daily in a CSV file and you want to keep your database up to date. The primary key collision concepts described above also apply to the `LOAD DATA` syntax, and are important to understand for proper database maintenance. We illustrate the use of `LOAD DATA` in Section [16\.3](ch-sql2.html#sec:toy-db).
16\.3 Extended example: Building a database
-------------------------------------------
The [*extract\-transform\-load*](https://en.wikipedia.org/w/index.php?search=extract-transform-load) (ETL) paradigm is common among data professionals. The idea is that many data sources need to be extracted from some external source, transformed into a different format, and finally loaded into a database system. Often, this is an iterative process that needs to be done every day, or even every hour. In such cases, developing the infrastructure to automate these steps can result in dramatically increased productivity.
In this example, we will illustrate how to set up a MySQL database for the **babynames** data using the command line and SQL, but not **R**. As noted previously, while the **dplyr** package has made **R** a viable interface for querying and populating SQL databases, it is occasionally necessary to get “under the hood” with SQL. The files that correspond to this example can be found on the book website at [http://mdsr\-book.github.io/](http://mdsr-book.github.io/).
### 16\.3\.1 Extract
In this case, our data already lives in an **R** package, but in most cases, your data will live on a website, or be available in a different format. Our goal is to take that data from wherever it is and download it. For the **babynames** data, there isn’t much to do, since we already have the data in an **R** package. We will simply load it.
```
library(babynames)
```
### 16\.3\.2 Transform
Since SQL tables conform to a row\-and\-column paradigm, our goal during the transform phase is to create CSV files (see Chapter [6](ch-dataII.html#ch:dataII)) for each of the tables. In this example, we will create tables for the `babynames` and `births` tables. You can try to add the `applicants` and `lifetables` tables on your own. We will simply write these data to CSV files using the `write_csv()` command. Since the `babynames` table is very long (nearly 1\.8 million rows), we will just use the more recent data.
```
babynames %>%
filter(year > 1975) %>%
write_csv("babynames.csv")
births %>%
write_csv("births.csv")
list.files(".", pattern = ".csv")
```
```
[1] "babynames.csv" "births.csv"
```
This raises an important question: what should we call these objects? The **babynames** package includes a data frame called `babynames` with one row per sex per year per name. Having both the database and a table with the same name may be confusing. To clarify which is which, we will call the database `babynamedb` and the table `babynames`.
Spending time thinking about the naming of databases, tables, and fields before you create them can help avoid confusion later on.
### 16\.3\.3 Load into MySQL database
Next, we need to write a script that will define the table structure for these two tables in a MySQL database (instructions for creation of a database in
SQLite can be found in Section [F.4\.4](ch-db-setup.html#sec:sqlitebaby)).
This script will have four parts:
1. a `USE` statement that ensures we are in the right schema/database
2. a series of `DROP TABLE` statements that drop any old tables with the same names as the ones we are going to create
3. a series of `CREATE TABLE` statements that specify the table structures
4. a series of `LOAD DATA` statements that read the data from the CSVs into the appropriate tables
The first part is easy:
```
USE babynamedb;
```
This assumes that we have a local database called `babynamedb`—we will create this later.
The second part is easy in this case, since we only have two tables. These ensure that we can run this script as many times as we want.
```
DROP TABLE IF EXISTS babynames;
DROP TABLE IF EXISTS births;
```
Be careful with the `DROP TABLE` statement. It destroys data.
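If a table already contains data that you might want back, one possible safety net is to copy the table before dropping it. This is a sketch only; the `babynames_backup` name is just for illustration.

```
-- copy the table definition, then the rows, before dropping the original
CREATE TABLE babynames_backup LIKE babynames;
INSERT INTO babynames_backup SELECT * FROM babynames;
```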
The third step is the trickiest part—we need to define the columns precisely. The use of `str()`, `summary()`, and `glimpse()` are particularly useful for matching up **R** data types with MySQL data types. Please see [the MySQL documentation](http://dev.mysql.com/doc/refman/5.7/en/data-types.html) for more information about what data types are supported.
```
glimpse(babynames)
```
```
Rows: 1,924,665
Columns: 5
$ year <dbl> 1880, 1880, 1880, 1880, 1880, 1880, 1880, 1880, 1880, 1880, 1…
$ sex <chr> "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "…
$ name <chr> "Mary", "Anna", "Emma", "Elizabeth", "Minnie", "Margaret", "I…
$ n <int> 7065, 2604, 2003, 1939, 1746, 1578, 1472, 1414, 1320, 1288, 1…
$ prop <dbl> 0.07238, 0.02668, 0.02052, 0.01987, 0.01789, 0.01617, 0.01508…
```
In this case, we know that the `year` variable will only contain four\-digit integers, so we can specify that this column take up only that much room in SQL. Similarly, the `sex` variable is just a single character, so we can restrict the width of that column as well. These savings probably won’t matter much in this example, but for large tables they can make a noticeable difference.
```
CREATE TABLE `babynames` (
`year` smallint(4) NOT NULL DEFAULT 0,
`sex` char(1) NOT NULL DEFAULT 'F',
`name` varchar(255) NOT NULL DEFAULT '',
`n` mediumint(7) NOT NULL DEFAULT 0,
`prop` decimal(21,20) NOT NULL DEFAULT 0,
PRIMARY KEY (`year`, `sex`, `name`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
```
In this table, each row contains the information about one name for one sex in one year. Thus, each row contains a unique combination of those three variables, and we can therefore define a primary key across those three fields. Note the use of backquotes (to denote tables and variables) and the
use of regular quotes (for default values).
```
glimpse(births)
```
```
Rows: 109
Columns: 2
$ year <int> 1909, 1910, 1911, 1912, 1913, 1914, 1915, 1916, 1917, 1918,…
$ births <int> 2718000, 2777000, 2809000, 2840000, 2869000, 2966000, 29650…
```
```
CREATE TABLE `births` (
`year` smallint(4) NOT NULL DEFAULT 0,
`births` mediumint(8) NOT NULL DEFAULT 0,
PRIMARY KEY (`year`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
```
Finally, we have to tell MySQL where to find the CSV files and where to put the data it finds in them. This is accomplished using the [`LOAD DATA` command](http://dev.mysql.com/doc/refman/5.7/en/load-data.html). You may also need to add a `LINES TERMINATED BY \r\n` clause, but we have omitted that for clarity. Please be aware that lines terminate using different characters in different operating systems, so Windows, Mac, and Linux users may have to tweak these commands to suit their needs. The `SHOW WARNINGS` commands are not necessary, but they will help with debugging.
```
LOAD DATA LOCAL INFILE './babynames.csv' INTO TABLE `babynames`
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' IGNORE 1 LINES;
SHOW WARNINGS;
LOAD DATA LOCAL INFILE './births.csv' INTO TABLE `births`
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' IGNORE 1 LINES;
SHOW WARNINGS;
```
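For example, if the CSV files had been produced on Windows, where lines typically end in `\r\n`, the first `LOAD DATA` statement might look something like this sketch:

```
LOAD DATA LOCAL INFILE './babynames.csv' INTO TABLE `babynames`
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\r\n' IGNORE 1 LINES;
```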
Putting this all together, we have the following script:
```
USE babynamedb;
DROP TABLE IF EXISTS babynames;
DROP TABLE IF EXISTS births;
CREATE TABLE `babynames` (
`year` smallint(4) NOT NULL DEFAULT 0,
`sex` char(1) NOT NULL DEFAULT 'F',
`name` varchar(255) NOT NULL DEFAULT '',
`n` mediumint(7) NOT NULL DEFAULT 0,
`prop` decimal(21,20) NOT NULL DEFAULT 0,
PRIMARY KEY (`year`, `sex`, `name`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
CREATE TABLE `births` (
`year` smallint(4) NOT NULL DEFAULT 0,
`births` mediumint(8) NOT NULL DEFAULT 0,
PRIMARY KEY (`year`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
LOAD DATA LOCAL INFILE './babynames.csv' INTO TABLE `babynames`
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' IGNORE 1 LINES;
LOAD DATA LOCAL INFILE './births.csv' INTO TABLE `births`
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' IGNORE 1 LINES;
SELECT year
, COUNT(DISTINCT name) AS numNames
, SUM(n) AS numBirths
FROM babynames
GROUP BY year
ORDER BY numBirths DESC
LIMIT 0,10;
```
Note that we have added a `SELECT` query just to verify that our table is populated. To load this into MySQL, we must first make sure that the `babynamedb` database exists, and if not, we must create it.
First, we check to see if `babynamedb` exists. We can do this from the command line using shell commands:
```
mysql -e "SHOW DATABASES;"
```
If it doesn’t exist, then we must create it:
```
mysql -e "CREATE DATABASE babynamedb;"
```
Finally, we run our script. The `--show-warnings` and `-v` flags are optional, but will help with debugging.
```
mysql --local-infile --show-warnings -v babynamedb
< babynamedb.mysql
```
In practice, if your SQL script is not perfect, you will see errors or warnings the first time you try this.
But by iterating this process, you will eventually refine your script such that it works as desired.
If you get an 1148 error, make sure that you are using the `--local-infile` flag.
```
ERROR 1148 (42000): The used command is not allowed with this MySQL version
```
If you get a 29 error, make sure that the file exists in this location and that the `mysql` user has permission to read and execute it.
```
ERROR 29 (HY000): File './babynames.csv' not found (Errcode: 13)
```
Once the MySQL database has been created, the following commands can be used to access it from **R** using **dplyr**:
```
library(DBI)
library(dplyr)
db <- dbConnect(RMySQL::MySQL(), dbname = "babynamedb")
babynames <- tbl(db, "babynames")
babynames %>%
filter(name == "Benjamin")
```
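Behind the scenes, **dplyr** translates that pipeline into an SQL query before sending it to the server, roughly equivalent to the following (the exact translation it generates may differ slightly):

```
SELECT *
FROM babynames
WHERE name = 'Benjamin';
```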
16\.4 Scalability
-----------------
With the exception of SQLite, RDBMSs scale very well on a single computer to databases that take up dozens of gigabytes. For a dedicated server, even terabytes are workable on a single machine. Beyond this, many companies employ distributed solutions called [*clusters*](https://en.wikipedia.org/w/index.php?search=clusters). A cluster is simply more than one machine (i.e., a node) linked together running the same RDBMS. One machine is designated as the head node, and this machine controls all of the other nodes. The actual data are distributed across the various nodes, and the head node manages queries—parceling them to the appropriate cluster nodes.
A full discussion of clusters and other distributed architectures (including replication) is beyond the scope of this book. In Chapter [21](ch-big.html#ch:big), we discuss alternatives to SQL that may provide higher\-end solutions for bigger data.
16\.5 Further resources
-----------------------
The *SQL in a Nutshell* book (Kline et al. 2008\) is a useful reference for all things SQL.
16\.6 Exercises
---------------
**Problem 1 (Easy)**: Alice is searching for cancelled flights in the `flights` table, and her query is running very slowly. She decides to build an index on `cancelled` in the hopes of speeding things up. Discuss the relative merits of her plan. What are the trade\-offs? Will her query be any faster?
**Problem 2 (Medium)**: The `Master` table of the `Lahman` database contains biographical information about baseball players. The primary key is the `playerID` variable. There are also variables for `retroID` and `bbrefID`, which correspond to the player’s identifier in other baseball databases. Discuss the ramifications of placing a primary, unique, or foreign key on `retroID`.
**Problem 3 (Medium)**: Bob wants to analyze the on\-time performance of United Airlines flights across the decade of the 1990s. Discuss how the partitioning scheme of the `flights` table based on `year` will affect the performance of Bob’s queries, relative to an unpartitioned table.
**Problem 4 (Hard)**: Use the `macleish` package to download the weather data at the MacLeish Field Station. Write your own table schema from scratch and import these data into the database server of your choice.
**Problem 5 (Hard)**: Write a full table schema for the two tables in the `fueleconomy` package and import them into the database server of your choice.
**Problem 6 (Hard)**: Write a full table schema for the `mtcars` data set and import it into the database server of your choice.
**Problem 7 (Hard)**: Write a full table schema for the five tables in the `nasaweather` package and import them into the database server of your choice.
**Problem 8 (Hard)**: Write a full table schema for two of the ten tables in the `usdanutrients` package and import them into the database server of your choice.
```
# remotes::install_github("hadley/usdanutrients")
library(usdanutrients)
# data(package="usdanutrients")
```
16\.7 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/sql\-II.html\#sqlII\-online\-exercises](https://mdsr-book.github.io/mdsr2e/sql-II.html#sqlII-online-exercises)
**Problem 1 (Easy)**: The `flights` table in the `airlines` database contains the following indexes:
```
SHOW INDEXES FROM flights;
```
| Table | Non\_unique | Key\_name | Seq\_in\_index | Column\_name | Collation | Cardinality | Sub\_part | Packed | Null | Index\_type | Comment | Index\_comment |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| flights | 1 | Year | 1 | year | A | 7 | NA | NA | YES | BTREE | | |
| flights | 1 | Date | 1 | year | A | 7 | NA | NA | YES | BTREE | | |
| flights | 1 | Date | 2 | month | A | 89 | NA | NA | YES | BTREE | | |
| flights | 1 | Date | 3 | day | A | 2712 | NA | NA | YES | BTREE | | |
| flights | 1 | Origin | 1 | origin | A | 2267 | NA | NA | | BTREE | | |
| flights | 1 | Dest | 1 | dest | A | 2267 | NA | NA | | BTREE | | |
| flights | 1 | Carrier | 1 | carrier | A | 134 | NA | NA | | BTREE | | |
| flights | 1 | tailNum | 1 | tailnum | A | 37862 | NA | NA | YES | BTREE | | |
Consider the following queries:
```
SELECT * FROM flights WHERE cancelled = 1;
SELECT * FROM flights WHERE carrier = "DL";
```
Which query will execute faster? Justify your answer.
**Problem 2 (Hard)**: Use the `fec12` package to download and unzip the federal election data for 2012 that were used in Chapter 2\. Write your own table schema from scratch and import these data into the database server of your choice.
16\.1 Constructing efficient SQL databases
------------------------------------------
While it is often helpful to think about SQL tables as being analogous to `data.frame`s in **R**, there are some important differences. In **R**, a `data.frame` is a `list` of vectors that have the same length. Each of those vectors has a specific data type (e.g., integers, character strings, etc.), but those data types can vary across the columns. The same is true of tables in SQL, but there are additional constraints that we can impose on SQL tables that can improve both the logical integrity of our data, as well as the performance we can achieve when searching it.
### 16\.1\.1 Creating new databases
Once you have logged into MySQL, you can see what databases are available to you by running the `SHOW DATABASES` command at the `mysql>` prompt.
```
SHOW DATABASES;
```
| Database |
| --- |
| information\_schema |
| airlines |
| fec |
| imdb |
| lahman |
| nyctaxi |
In this case, the output indicates that the `airlines` database already exists.
If it didn’t, we could create it using the `CREATE DATABASE` command.
```
CREATE DATABASE airlines;
```
Since we will continue to work with the `airlines` database, we can save ourselves some typing by utilizing the `USE` command to make that connection explicit.
```
USE airlines;
```
Now that we are confined to the `airlines` database, there is no ambiguity in asking what tables are present.
```
SHOW TABLES;
```
| Tables\_in\_airlines |
| --- |
| airports |
| carriers |
| flights |
| planes |
### 16\.1\.2 `CREATE TABLE`
Recall that in Chapter [15](ch-sql.html#ch:sql) we used the `DESCRIBE` statement to display the definition of each table. This lists each field, its data type, whether there are keys or indices defined on it, and whether `NULL` values are allowed. For example, the `airports` table has the following definition.
```
DESCRIBE airports;
```
| Field | Type | Null | Key | Default | Extra |
| --- | --- | --- | --- | --- | --- |
| faa | varchar(3\) | NO | PRI | | |
| name | varchar(255\) | YES | | NA | |
| lat | decimal(10,7\) | YES | | NA | |
| lon | decimal(10,7\) | YES | | NA | |
| alt | int(11\) | YES | | NA | |
| tz | smallint(4\) | YES | | NA | |
| dst | char(1\) | YES | | NA | |
| city | varchar(255\) | YES | | NA | |
| country | varchar(255\) | YES | | NA | |
We can see from the output that the `faa`, `name`, `city`, and `country` fields are defined as `varchar` (or variable character) fields. These fields contain character strings, but the length of the strings allowed varies. We know that the `faa` code is restricted to three characters, and so we have codified that in the table definition. The `dst` field contains only a single character, indicating whether daylight saving time is observed at each airport. The `lat` and `lon` fields contain geographic coordinates, which can be three\-digit numbers (i.e., the maximum value is 180\) with up to seven decimal places. The `tz` field can be up to a four\-digit integer, while the `alt` field is allowed eleven digits.
In this case, `NULL` values are allowed—and are the default—in all of the fields except for `faa`, which is the primary key.
**R** is translating the null character in SQL (`NULL`) to the null character in **R** (`NA`).
These definitions did not come out of thin air, nor were they automatically generated. In this case, we wrote them by hand, in the following `CREATE TABLE` statement:
```
SHOW CREATE TABLE airports;
```
```
CREATE TABLE `airports` (
`faa` varchar(3) NOT NULL DEFAULT '',
`name` varchar(255) DEFAULT NULL,
`lat` decimal(10,7) DEFAULT NULL,
`lon` decimal(10,7) DEFAULT NULL,
`alt` int(11) DEFAULT NULL,
`tz` smallint(4) DEFAULT NULL,
`dst` char(1) DEFAULT NULL,
`city` varchar(255) DEFAULT NULL,
`country` varchar(255) DEFAULT NULL,
PRIMARY KEY (`faa`)
```
The `CREATE TABLE` command starts by defining the name of the table, and then proceeds to list the field definitions in a comma\-separated list. If you want to build a database from scratch—as we do in Section [16\.3](ch-sql2.html#sec:toy-db)—you will have to write these definitions for each table.[29](#fn29) Tables that are already created can be modified using the `ALTER TABLE` command. For example, the following will change the `tz` field to two digits and change the default value to zero.
```
ALTER TABLE airports CHANGE tz tz smallint(2) DEFAULT 0;
```
### 16\.1\.3 Keys
Two related but different concepts are [*keys*](https://en.wikipedia.org/w/index.php?search=keys) and [*indices*](https://en.wikipedia.org/w/index.php?search=indices). The former offers some performance advantages but is primarily useful for imposing constraints on possible entries in the database, while the latter is purely about improving the speed of retrieval.
Different relational database management systems (RDBMS) may implement a variety of different kinds of keys, but three types are most common.
In each case, suppose that we have a table with \\(n\\) rows and \\(p\\) columns.
* `PRIMARY KEY`: a column or set of columns in a table that uniquely identifies each row. By convention, this column is often called `id`. A table can have at most one [*primary key*](https://en.wikipedia.org/w/index.php?search=primary%20key), and in general it is considered good practice to define a primary key on every table (although there are exceptions to this rule). If the index spans \\(k \< p\\) columns, then even though the primary key must by definition have \\(n\\) rows itself, it only requires \\(nk\\) pieces of data, rather than the \\(np\\) that the full table occupies. Thus, the primary key is always smaller than the table itself, and is thus faster to search. A second critically important role of the primary key is enforcement of non\-duplication. If you try to insert a row into a table that would result in a duplicate entry for the primary key, you will get an error.
* `UNIQUE KEY`: a column or set of columns in a table that uniquely identifies each row, except for rows that contain `NULL` in some of those attributes. Unlike primary keys, a single table may have many unique keys. A typical use for these are in a lookup table. For example, [Ted Turocy](https://en.wikipedia.org/w/index.php?search=Ted%20Turocy) maintains a [register](http://chadwick-bureau.com/the-register/) of player `id`s for professional baseball players across multiple data providers. Each row in this table is a different player, and the primary key is a randomly\-generated hash—each player gets exactly one value. However, each row also contains that same player’s `id` in systems designed by [*MLBAM*](https://en.wikipedia.org/w/index.php?search=MLBAM), [*Baseball\-Reference*](https://en.wikipedia.org/w/index.php?search=Baseball-Reference), [*Baseball Prospectus*](https://en.wikipedia.org/w/index.php?search=Baseball%20Prospectus), [*Fangraphs*](https://en.wikipedia.org/w/index.php?search=Fangraphs), etc. This is tremendously useful for researchers working with multiple data providers, since they can easily link a player’s statistics in one system to his information in another. However, this ability is predicated on the *uniqueness* of each player’s id in *each* system. Moreover, many players may not have an `id` in every system, since some data providers track minor league baseball, or even the Japanese and Korean professional leagues. Thus, the imposition of a unique key—which allows `NULL`s—is necessary to maintain the integrity of these data.
* `FOREIGN KEY`: a column or set of columns that reference a primary key in another table. For example, the primary key in the `carriers` table is `carrier`. The `carrier` column in the `flights` table, which consists of carrier IDs, is a [*foreign key*](https://en.wikipedia.org/w/index.php?search=foreign%20key) that references `carriers.carrier`. Foreign keys don’t offer any performance enhancements, but they are important for maintaining [*referential integrity*](https://en.wikipedia.org/w/index.php?search=referential%20integrity), especially in transactional databases that have many insertions and deletions.
You can use the `SHOW KEYS` command to identify the keys in a table. Note that the `carriers` table has only one key defined: a primary key on `carrier`.
```
SHOW KEYS FROM carriers;
```
| Table | Non\_unique | Key\_name | Seq\_in\_index | Column\_name | Cardinality |
| --- | --- | --- | --- | --- | --- |
| carriers | 0 | PRIMARY | 1 | carrier | 1610 |
### 16\.1\.4 Indices
While keys help maintain the integrity of the data, indices impose no constraints—they simply enable faster retrieval. An index is a lookup table that helps SQL keep track of which records contain certain values. Judicious use of indices can dramatically speed up retrieval times.
The technical implementation of efficient indices is an active area of research among computer scientists, and fast indices are one of the primary advantages that differentiate SQL tables from large **R** data frames.
Indices have to be built by the database in advance, and they are then written to the disk. Thus, indices take up space on the disk (this is one of the reasons that they aren’t implemented in **R**). For some tables with many indices, the size of the indices can even exceed the size of the raw data. Thus, when building indices, there is a trade\-off to consider: You want just enough indices but not too many.
Consider the task of locating all of the rows in the `flights` table that contain the `origin` value `BDL`. These rows are strewn about the table in no particular order. How would you find them? A simple approach would be to start with the first row, examine the `origin` field, grab it if it contains `BDL`, and otherwise move to the second row. In order to ensure that all of the matching rows are returned, this algorithm must check every single one of the \\(n\=\\)48 million rows[30](#fn30) in this table! So its speed is \\(O(n)\\). However, we have built an index on the `origin` column, and this index contains only 2,266 rows (see Table [16\.1](ch-sql2.html#tab:show-indexes)). Each row in the index corresponds to exactly one value of `origin`, and contains a lookup for the exact rows in the table that are specific to that value. Thus, when we ask for the rows for which `origin` is equal to `BDL`, the database will use the index to deliver those rows very quickly. In practice, the retrieval speed for indexed columns can be \\(O(\\ln{n})\\) (or better)—which is a tremendous advantage when \\(n\\) is large.
The speed\-up that indices can provide is often especially apparent when joining two large tables. To see why, consider the following toy example. Suppose we want to merge two tables on the columns whose values are listed below. To merge these records correctly, we have to do a lot of work to find the appropriate value in the second list that matches each value in the first list.
```
[1] 5 18 2 3 4 2 1
```
```
[1] 5 6 3 18 4 7 1 2
```
On the other hand, consider performing the same task on the same set of values, but having the values sorted ahead of time. Now, the merging task is very fast, because we can quickly locate the matching records. In effect, by keeping the records sorted, we have off\-loaded the sorting task when we do a merge, resulting in much faster merging performance. However, this requires that we sort the records in the first place and then keep them sorted. This may slow down other operations—such as inserting new records—which now have to be done more carefully.
```
[1] 1 2 2 3 4 5 18
```
```
[1] 1 2 3 4 5 6 7 18
```
```
SHOW INDEXES FROM flights;
```
Table 16\.1: Indices in the flights table.
| Table | Non\_unique | Key\_name | Seq\_in\_index | Column\_name | Cardinality |
| --- | --- | --- | --- | --- | --- |
| flights | 1 | Year | 1 | year | 7 |
| flights | 1 | Date | 1 | year | 7 |
| flights | 1 | Date | 2 | month | 89 |
| flights | 1 | Date | 3 | day | 2712 |
| flights | 1 | Origin | 1 | origin | 2267 |
| flights | 1 | Dest | 1 | dest | 2267 |
| flights | 1 | Carrier | 1 | carrier | 134 |
| flights | 1 | tailNum | 1 | tailnum | 37862 |
In MySQL the `SHOW INDEXES` command is equivalent to `SHOW KEYS`. Note that the `flights` table has several keys defined, but no primary key (see Table [16\.1](ch-sql2.html#tab:show-indexes)).
The key `Date` spans the three columns `year`, `month`, and `day`.
### 16\.1\.5 Query plans
It is important to have the right indices built for your specific data and the queries that are likely to be run on it. Unfortunately, there is not always a straightforward answer to the question of which indices to build.
For the `flights` table, it seems likely that many queries will involve searching for flights from a particular origin, or to a particular destination, or during a particular year (or range of years), or on a specific carrier, and so we have built indices on each of these columns.
We have also built the `Date` index, since it seems likely that people would want to search for flights on a certain date.
However, it does not seem so likely that people would search for flights in a specific month across all years, and thus we have not built an index on `month` alone.
The `Date` index contains the `month` column, but this index can only be used if `year` is also part of the query.
You can ask MySQL for information about how it is going to perform a query using the `EXPLAIN` syntax. This will help you understand how onerous your query is, without actually running it—saving you the time of having to wait for it to execute.
This output reflects the query plan returned by the MySQL server.
If we were to run a query for long flights using the `distance` column the server will have to inspect each of the 48 million rows, since this column is not indexed.
This is the slowest possible search, and is often called a [*table scan*](https://en.wikipedia.org/w/index.php?search=table%20scan).
The 48 million number that you see in the `rows` column is an estimate of the number of rows that MySQL will have to consult in order to process your query.
In general, more rows mean a slower query.
```
EXPLAIN SELECT * FROM flights WHERE distance > 3000;
```
| table | partitions | type | possible\_keys | key | key\_len | ref | rows | filtered |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| flights | p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19,p20,p21,p22,p23,p24,p25,p26,p27,p28,p29,p30,p31,p32 | ALL | NA | NA | NA | NA | 47932811 | 33\.3 |
On the other hand, if we search for recent flights using the `year` column, which has an index built on it, then we only need to consider a fraction of those rows (about 6\.3 million).
```
EXPLAIN SELECT * FROM flights WHERE year = 2013;
```
| table | partitions | type | possible\_keys | key | key\_len | ref | rows | filtered |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| flights | p27 | ALL | Year,Date | NA | NA | NA | 6369482 | 100 |
Note that for the second example the server could have used either the index `Year` or the index `Date` (which contains the column `year`). Because of the index, only the 6\.3 million flights from 2013 were consulted.
Similarly, if we search by year and month, we can use the `Date` index.
```
EXPLAIN SELECT * FROM flights WHERE year = 2013 AND month = 6;
```
| table | partitions | type | possible\_keys | key | key\_len | ref | rows | filtered |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| flights | p27 | ref | Year,Date | Date | 6 | const,const | 714535 | 100 |
But if we search for months across all years, we can’t!
The query plan results in a table scan again.
```
EXPLAIN SELECT * FROM flights WHERE month = 6;
```
| table | partitions | type | possible\_keys | key | key\_len | ref | rows | filtered |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| flights | p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19,p20,p21,p22,p23,p24,p25,p26,p27,p28,p29,p30,p31,p32 | ALL | NA | NA | NA | NA | 47932811 | 10 |
This is because although `month` is part of the `Date` index, it is the *second* column in the index, and thus it doesn’t help us when we aren’t filtering on `year`. Thus, if it were common for our users to search on `month` without `year`, it would probably be worth building an index on `month`. Were we to actually run these queries, there would be a significant difference in computational time.
Using indices is especially important when performing `JOIN` operations on large tables.
In this example, both queries use indices.
However, because the cardinality of the index on `tailnum` is smaller that the cardinality of the index on `year` (see Table [16\.1](ch-sql2.html#tab:show-indexes)), the number of rows in `flights` associated with each unique value of `tailnum` is smaller than for each unique value of `year`.
Thus, the first query runs faster.
```
EXPLAIN
SELECT * FROM planes p
LEFT JOIN flights o ON p.tailnum = o.TailNum
WHERE manufacturer = 'BOEING';
```
| table | partitions | type | possible\_keys | key | key\_len | ref | rows | filtered |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| p | NA | ALL | NA | NA | NA | NA | 3322 | 10 |
| o | p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19,p20,p21,p22,p23,p24,p25,p26,p27,p28,p29,p30,p31,p32 | ref | tailNum | tailNum | 9 | airlines.p.tailnum | 1266 | 100 |
```
EXPLAIN
SELECT * FROM planes p
LEFT JOIN flights o ON p.Year = o.Year
WHERE manufacturer = 'BOEING';
```
| table | partitions | type | possible\_keys | key | key\_len | ref | rows | filtered |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| p | NA | ALL | NA | NA | NA | NA | 3322 | 10 |
| o | p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19,p20,p21,p22,p23,p24,p25,p26,p27,p28,p29,p30,p31,p32 | ref | Year,Date | Year | 3 | airlines.p.year | 6450117 | 100 |
### 16\.1\.6 Partitioning
Another approach to speeding up queries on large tables (like `flights`) is [*partitioning*](https://en.wikipedia.org/w/index.php?search=partitioning). Here, we could create partitions based on the `year`. For `flights` this would instruct the server to physically write the `flights` table as a series of smaller tables, each one specific to a single value of `year`. At the same time, the server would create a logical supertable, so that to the user, the appearance of `flights` would be unchanged. This acts like a preemptive index on the `year` column.
If most of the queries to the `flights` table were for a specific year or range of years, then partitioning could significantly improve performance, since most of the rows would never be consulted. For example, if most of the queries to the `flights` database were for the past three years, then partitioning would reduce the search space of most queries on the full data set to the roughly 20 million flights in the last three years instead of the 169 million rows in the last 20 years. But here again, if most of the queries to the `flights` table were about carriers across years, then this type of partitioning would not help at all. It is the job of the database designer to tailor the database structure to the pattern of queries coming from the users. As a data scientist, this may mean that you have to tailor the database structure to the queries that you are running.
16\.2 Changing SQL data
-----------------------
In Chapter [15](ch-sql.html#ch:sql), we described how to query an SQL database using the `SELECT` command. Thus far in this chapter, we have discussed how to set up an SQL database, and how to optimize it for speed. None of these operations actually change data in an existing database. In this section, we will briefly touch upon the `UPDATE` and `INSERT` commands, which allow you to do exactly that.
### 16\.2\.1 Changing data
The `UPDATE` command allows you to reset values in a table across all rows that match certain criteria. For example, in Chapter [15](ch-sql.html#ch:sql) we discussed the possibility that airports could change names over time. The airport in [*Washington, D.C.*](https://en.wikipedia.org/w/index.php?search=Washington,%20D.C.) with code `DCA` is now called [*Ronald Reagan Washington National*](https://en.wikipedia.org/w/index.php?search=Ronald%20Reagan%20Washington%20National).
```
SELECT faa, name FROM airports WHERE faa = 'DCA';
```
| faa | name |
| --- | --- |
| DCA | Ronald Reagan Washington Natl |
However, the “[Ronald Reagan](https://en.wikipedia.org/w/index.php?search=Ronald%20Reagan)” prefix was added in 1998\. If—for whatever reason—we wanted to go back to the old name, we could use an `UPDATE` command to change that information in the `airports` table.
```
UPDATE airports
SET name = 'Washington National'
WHERE faa = 'DCA';
```
An `UPDATE` operation can be very useful when you have to apply wholesale changes over a large number of rows. However, extreme caution is necessary, since an imprecise `UPDATE` query can wipe out large quantities of data, and there is no “undo” operation!
Exercise extraordinary caution when performing `UPDATE`s.
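One way to reduce the risk is to wrap the `UPDATE` in a transaction, so that the change can be inspected and rolled back before it becomes permanent. This is a minimal sketch, and it assumes the table uses a transactional storage engine such as InnoDB (MyISAM tables, like the ones we create in Section [16\.3](ch-sql2.html#sec:toy-db), do not support transactions).
```
START TRANSACTION;
UPDATE airports
SET name = 'Washington National'
WHERE faa = 'DCA';
-- inspect the affected row before making the change permanent
SELECT faa, name FROM airports WHERE faa = 'DCA';
ROLLBACK; -- or COMMIT; once you are satisfied
```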
### 16\.2\.2 Adding data
New data can be appended to an existing table with the `INSERT` commands. There are actually three things that can happen, depending on what you want to do when you have a primary key conflict. This occurs when one of the new rows that you are trying to insert has the same primary key value as one of the existing rows in the table.
* `INSERT`: Try to insert the new rows. If there is a primary key conflict, quit and throw an error.
* `INSERT IGNORE`: Try to insert the new rows. If there is a primary key conflict, skip inserting the conflicting rows and leave the existing rows untouched. Continue inserting data that does not conflict.
* `REPLACE`: Try to insert the new rows. If there is a primary key conflict, overwrite the existing rows with the new ones. Continue inserting data that does not conflict.
Recall that in Chapter [15](ch-sql.html#ch:sql) we found that the airports in [*Puerto Rico*](https://en.wikipedia.org/w/index.php?search=Puerto%20Rico) were not present in the `airports` table. If we wanted to add these manually, we could use `INSERT`.
```
INSERT INTO airports (faa, name)
VALUES ('SJU', 'Luis Munoz Marin International Airport');
```
Since `faa` is the primary key on this table, we can insert this row without contributing values for all of the other fields. In this case, the new row corresponding to `SJU` would have the `faa` and `name` fields as noted above, and the default values for all of the other fields. If we were to run this operation a second time, we would get an error, because of the primary key collision on `SJU`. We could avoid the error by choosing to `INSERT IGNORE` or `REPLACE` instead of `INSERT`.
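To make the three behaviors concrete, here is a hedged sketch that inserts the same row under each of them; only the keyword changes.
```
-- Fails with a duplicate-key error if SJU already exists
INSERT INTO airports (faa, name)
VALUES ('SJU', 'Luis Munoz Marin International Airport');
-- Silently skips the row if SJU already exists
INSERT IGNORE INTO airports (faa, name)
VALUES ('SJU', 'Luis Munoz Marin International Airport');
-- Deletes the existing SJU row and inserts this one, so unspecified
-- fields revert to their default values
REPLACE INTO airports (faa, name)
VALUES ('SJU', 'Luis Munoz Marin International Airport');
```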
### 16\.2\.3 Importing data from a file
In practice, we rarely add new data manually in this manner. Instead, new data are most often added using the `LOAD DATA` command. This allows a file containing new data—usually a CSV—to be inserted in bulk. This is very common when, for example, your data come to you daily in a CSV file and you want to keep your database up to date. The primary key collision concepts described above also apply to the `LOAD DATA` syntax, and are important to understand for proper database maintenance. We illustrate the use of `LOAD DATA` in Section [16\.3](ch-sql2.html#sec:toy-db).
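For example, `LOAD DATA` accepts the same `IGNORE` and `REPLACE` keywords to control what happens when an incoming row collides with an existing primary key. A minimal sketch (the file name here is hypothetical):
```
-- Keep existing rows and skip any colliding rows from the file
LOAD DATA LOCAL INFILE './new_airports.csv' IGNORE INTO TABLE airports
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' IGNORE 1 LINES;
-- Or overwrite existing rows with the incoming data
LOAD DATA LOCAL INFILE './new_airports.csv' REPLACE INTO TABLE airports
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' IGNORE 1 LINES;
```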
16\.3 Extended example: Building a database
-------------------------------------------
The [*extract\-transform\-load*](https://en.wikipedia.org/w/index.php?search=extract-transform-load) (ETL) paradigm is common among data professionals. The idea is that many data sources need to be extracted from some external source, transformed into a different format, and finally loaded into a database system. Often, this is an iterative process that needs to be done every day, or even every hour. In such cases, developing the infrastructure to automate these steps can result in dramatically increased productivity.
In this example, we will illustrate how to set up a MySQL database for the **babynames** data using the command line and SQL, but not **R**. As noted previously, while the **dplyr** package has made **R** a viable interface for querying and populating SQL databases, it is occasionally necessary to get “under the hood” with SQL. The files that correspond to this example can be found on the book website at [http://mdsr\-book.github.io/](http://mdsr-book.github.io/).
### 16\.3\.1 Extract
In this case, our data already lives in an **R** package, but in most cases, your data will live on a website, or be available in a different format. Our goal is to take that data from wherever it is and download it. For the **babynames** data, there isn’t much to do, since we already have the data in an **R** package. We will simply load it.
```
library(babynames)
```
### 16\.3\.2 Transform
Since SQL tables conform to a row\-and\-column paradigm, our goal during the transform phase is to create CSV files (see Chapter [6](ch-dataII.html#ch:dataII)) for each of the tables. In this example, we will create tables for the `babynames` and `births` tables. You can try to add the `applicants` and `lifetables` tables on your own. We will simply write these data to CSV files using the `write_csv()` command. Since the `babynames` table is very long (nearly 1\.9 million rows), we will just use the more recent data.
```
babynames %>%
filter(year > 1975) %>%
write_csv("babynames.csv")
births %>%
write_csv("births.csv")
list.files(".", pattern = ".csv")
```
```
[1] "babynames.csv" "births.csv"
```
This raises an important question: what should we call these objects? The **babynames** package includes a data frame called `babynames` with one row per sex per year per name. Having both the database and a table with the same name may be confusing. To clarify which is which, we will call the database `babynamedb` and the table `babynames`.
Spending time thinking about the naming of databases, tables, and fields before you create them can help avoid confusion later on.
### 16\.3\.3 Load into MySQL database
Next, we need to write a script that will define the table structure for these two tables in a MySQL database (instructions for creation of a database in
SQLite can be found in Section [F.4\.4](ch-db-setup.html#sec:sqlitebaby)).
This script will have four parts:
1. a `USE` statement that ensures we are in the right schema/database
2. a series of `DROP TABLE` statements that drop any old tables with the same names as the ones we are going to create
3. a series of `CREATE TABLE` statements that specify the table structures
4. a series of `LOAD DATA` statements that read the data from the CSVs into the appropriate tables
The first part is easy:
```
USE babynamedb;
```
This assumes that we have a local database called `babynamedb`—we will create this later.
The second part is easy in this case, since we only have two tables. These ensure that we can run this script as many times as we want.
```
DROP TABLE IF EXISTS babynames;
DROP TABLE IF EXISTS births;
```
Be careful with the `DROP TABLE` statement. It destroys data.
The third step is the trickiest part—we need to define the columns precisely. The use of `str()`, `summary()`, and `glimpse()` are particularly useful for matching up **R** data types with MySQL data types. Please see [the MySQL documentation](http://dev.mysql.com/doc/refman/5.7/en/data-types.html) for more information about what data types are supported.
```
glimpse(babynames)
```
```
Rows: 1,924,665
Columns: 5
$ year <dbl> 1880, 1880, 1880, 1880, 1880, 1880, 1880, 1880, 1880, 1880, 1…
$ sex <chr> "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "…
$ name <chr> "Mary", "Anna", "Emma", "Elizabeth", "Minnie", "Margaret", "I…
$ n <int> 7065, 2604, 2003, 1939, 1746, 1578, 1472, 1414, 1320, 1288, 1…
$ prop <dbl> 0.07238, 0.02668, 0.02052, 0.01987, 0.01789, 0.01617, 0.01508…
```
In this case, we know that the `year` variable will only contain four\-digit integers, so we can specify that this column take up only that much room in SQL. Similarly, the `sex` variable is just a single character, so we can restrict the width of that column as well. These savings probably won’t matter much in this example, but for large tables they can make a noticeable difference.
```
CREATE TABLE `babynames` (
`year` smallint(4) NOT NULL DEFAULT 0,
`sex` char(1) NOT NULL DEFAULT 'F',
`name` varchar(255) NOT NULL DEFAULT '',
`n` mediumint(7) NOT NULL DEFAULT 0,
`prop` decimal(21,20) NOT NULL DEFAULT 0,
PRIMARY KEY (`year`, `sex`, `name`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
```
In this table, each row contains the information about one name for one sex in one year. Thus, each row contains a unique combination of those three variables, and we can therefore define a primary key across those three fields. Note the use of backquotes (to denote tables and variables) and the
use of regular quotes (for default values).
```
glimpse(births)
```
```
Rows: 109
Columns: 2
$ year <int> 1909, 1910, 1911, 1912, 1913, 1914, 1915, 1916, 1917, 1918,…
$ births <int> 2718000, 2777000, 2809000, 2840000, 2869000, 2966000, 29650…
```
```
CREATE TABLE `births` (
`year` smallint(4) NOT NULL DEFAULT 0,
`births` mediumint(8) NOT NULL DEFAULT 0,
PRIMARY KEY (`year`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
```
Finally, we have to tell MySQL where to find the CSV files and where to put the data it finds in them. This is accomplished using the [`LOAD DATA` command](http://dev.mysql.com/doc/refman/5.7/en/load-data.html). You may also need to add a `LINES TERMINATED BY \r\n` clause, but we have omitted that for clarity. Please be aware that lines terminate using different characters in different operating systems, so Windows, Mac, and Linux users may have to tweak these commands to suit their needs. The `SHOW WARNINGS` commands are not necessary, but they will help with debugging.
```
LOAD DATA LOCAL INFILE './babynames.csv' INTO TABLE `babynames`
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' IGNORE 1 LINES;
SHOW WARNINGS;
LOAD DATA LOCAL INFILE './births.csv' INTO TABLE `births`
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' IGNORE 1 LINES;
SHOW WARNINGS;
```
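If your CSV files use Windows\-style line endings, the omitted clause would go between the `FIELDS` specification and `IGNORE 1 LINES`, as in this sketch:
```
LOAD DATA LOCAL INFILE './babynames.csv' INTO TABLE `babynames`
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n' IGNORE 1 LINES;
```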
Putting this all together, we have the following script:
```
USE babynamedb;
DROP TABLE IF EXISTS babynames;
DROP TABLE IF EXISTS births;
CREATE TABLE `babynames` (
`year` smallint(4) NOT NULL DEFAULT 0,
`sex` char(1) NOT NULL DEFAULT 'F',
`name` varchar(255) NOT NULL DEFAULT '',
`n` mediumint(7) NOT NULL DEFAULT 0,
`prop` decimal(21,20) NOT NULL DEFAULT 0,
PRIMARY KEY (`year`, `sex`, `name`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
CREATE TABLE `births` (
`year` smallint(4) NOT NULL DEFAULT 0,
`births` mediumint(8) NOT NULL DEFAULT 0,
PRIMARY KEY (`year`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
LOAD DATA LOCAL INFILE './babynames.csv' INTO TABLE `babynames`
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' IGNORE 1 LINES;
LOAD DATA LOCAL INFILE './births.csv' INTO TABLE `births`
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' IGNORE 1 LINES;
SELECT year
, COUNT(DISTINCT name) AS numNames
, SUM(n) AS numBirths
FROM babynames
GROUP BY year
ORDER BY numBirths DESC
LIMIT 0,10;
```
Note that we have added a `SELECT` query just to verify that our table is populated. To load this into MySQL, we must first make sure that the `babynamedb` database exists, and if not, we must create it.
First, we check to see if `babynamedb` exists. We can do this from the command line using shell commands:
```
mysql -e "SHOW DATABASES;"
```
If it doesn’t exist, then we must create it:
```
mysql -e "CREATE DATABASE babynamedb;"
```
Finally, we run our script. The `--show-warnings` and `-v` flags are optional, but will help with debugging.
```
mysql --local-infile --show-warnings -v babynamedb
< babynamedb.mysql
```
In practice, if your SQL script is not perfect, you will see errors or warnings the first time you try this.
But by iterating this process, you will eventually refine your script such that it works as desired.
If you get an 1148 error, make sure that you are using the `--local-infile` flag.
```
ERROR 1148 (42000): The used command is not allowed with this MySQL version
```
If you get a 29 error, make sure that the file exists in this location and that the `mysql` user has permission to read and execute it.
```
ERROR 29 (HY000): File './babynames.csv' not found (Errcode: 13)
```
Once the MySQL database has been created, the following commands can be used to access it from **R** using **dplyr**:
```
db <- dbConnect(RMySQL::MySQL(), dbname = "babynamedb")
babynames <- tbl(db, "babynames")
babynames %>%
filter(name == "Benjamin")
```
16\.4 Scalability
-----------------
With the exception of SQLite, RDBMSs scale very well on a single computer to databases that take up dozens of gigabytes. For a dedicated server, even terabytes are workable on a single machine. Beyond this, many companies employ distributed solutions called [*clusters*](https://en.wikipedia.org/w/index.php?search=clusters). A cluster is simply more than one machine (i.e., a node) linked together running the same RDBMS. One machine is designated as the head node, and this machine controls all of the other nodes. The actual data are distributed across the various nodes, and the head node manages queries—parceling them to the appropriate cluster nodes.
A full discussion of clusters and other distributed architectures (including replication) is beyond the scope of this book. In Chapter [21](ch-big.html#ch:big), we discuss alternatives to SQL that may provide higher\-end solutions for bigger data.
16\.5 Further resources
-----------------------
The *SQL in a Nutshell* book (Kline et al. 2008\) is a useful reference for all things SQL.
16\.6 Exercises
---------------
**Problem 1 (Easy)**: Alice is searching for cancelled flights in the `flights` table, and her query is running very slowly. She decides to build an index on `cancelled` in the hopes of speeding things up. Discuss the relative merits of her plan. What are the trade\-offs? Will her query be any faster?
**Problem 2 (Medium)**: The `Master` table of the `Lahman` database contains biographical information about baseball players. The primary key is the `playerID` variable. There are also variables for `retroID` and `bbrefID`, which correspond to the player’s identifier in other baseball databases. Discuss the ramifications of placing a primary, unique, or foreign key on `retroID`.
**Problem 3 (Medium)**: Bob wants to analyze the on\-time performance of United Airlines flights across the decade of the 1990s. Discuss how the partitioning scheme of the `flights` table based on `year` will affect the performance of Bob’s queries, relative to an unpartitioned table.
**Problem 4 (Hard)**: Use the `macleish` package to download the weather data at the MacLeish Field Station. Write your own table schema from scratch and import these data into the database server of your choice.
**Problem 5 (Hard)**: Write a full table schema for the two tables in the `fueleconomy` package and import them into the database server of your choice.
**Problem 6 (Hard)**: Write a full table schema for the `mtcars` data set and import it into the database server of your choice.
**Problem 7 (Hard)**: Write a full table schema for the five tables in the `nasaweather` package and import them into the database server of your choice.
**Problem 8 (Hard)**: Write a full table schema for two of the ten tables in the `usdanutrients` package and import them into the database server of your choice.
```
# remotes::install_github("hadley/usdanutrients")
library(usdanutrients)
# data(package="usdanutrients")
```
16\.7 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/sql\-II.html\#sqlII\-online\-exercises](https://mdsr-book.github.io/mdsr2e/sql-II.html#sqlII-online-exercises)
**Problem 1 (Easy)**: The `flights` table in the `airlines` database contains the following indexes:
```
SHOW INDEXES FROM flights;
```
| Table | Non\_unique | Key\_name | Seq\_in\_index | Column\_name | Collation | Cardinality | Sub\_part | Packed | Null | Index\_type | Comment | Index\_comment |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| flights | 1 | Year | 1 | year | A | 7 | NA | NA | YES | BTREE | | |
| flights | 1 | Date | 1 | year | A | 7 | NA | NA | YES | BTREE | | |
| flights | 1 | Date | 2 | month | A | 89 | NA | NA | YES | BTREE | | |
| flights | 1 | Date | 3 | day | A | 2712 | NA | NA | YES | BTREE | | |
| flights | 1 | Origin | 1 | origin | A | 2267 | NA | NA | | BTREE | | |
| flights | 1 | Dest | 1 | dest | A | 2267 | NA | NA | | BTREE | | |
| flights | 1 | Carrier | 1 | carrier | A | 134 | NA | NA | | BTREE | | |
| flights | 1 | tailNum | 1 | tailnum | A | 37862 | NA | NA | YES | BTREE | | |
Consider the following queries:
```
SELECT * FROM flights WHERE cancelled = 1;
SELECT * FROM flights WHERE carrier = "DL";
```
Which query will execute faster? Justify your answer.
**Problem 2 (Hard)**: Use the `fec12` package to download and unzip the federal election data for 2012 that were used in Chapter 2\. Write your own table schema from scratch and import these data into the database server of your choice.
Chapter 17 Working with geospatial data
=======================================
When data contain geographic coordinates, they can be considered a type of [*spatial data*](https://en.wikipedia.org/w/index.php?search=spatial%20data).
Like the “text as data” that we explore in Chapter [19](ch-text.html#ch:text), spatial data are fundamentally different from the numerical data with which we most often work.
While spatial coordinates are often encoded as numbers, these numbers have special meaning, and our ability to understand them will suffer if we do not recognize their spatial nature.
The field of [*spatial statistics*](https://en.wikipedia.org/w/index.php?search=spatial%20statistics) concerns building and interpreting models that include spatial coordinates. For example, consider a model for airport traffic using the **airlines** data. These data contain the geographic coordinates of each airport, so they are spatially\-aware. But simply including the coordinates for latitude and longitude as covariates in a multiple regression model does not take advantage of the special meaning that these coordinates encode.
In such a model we might be led towards the meaningless conclusion that airports at higher latitudes are associated with greater airplane traffic—simply due to the limited nature of the model and our careless and inappropriate use of these spatial data.
A full treatment of spatial statistics is beyond the scope of this book.
While we won’t be building spatial models in this chapter, we will learn how to manage and visualize geospatial data in **R**. We will learn about how to work with [*shapefiles*](https://en.wikipedia.org/w/index.php?search=shapefiles), which are a *de facto* open specification data structure for encoding spatial information. We will learn about [*projections*](https://en.wikipedia.org/w/index.php?search=projections) (from three\-dimensional space into two\-dimensional space), colors (again), and how to create informative, and not misleading, spatially\-aware visualizations. Our goal—as always—is to provide the reader with the technical ability and intellectual know\-how to derive meaning from geospatial data.
17\.1 Motivation: What’s so great about geospatial data?
--------------------------------------------------------
The most famous early analysis of geospatial data was done by physician [John Snow](https://en.wikipedia.org/w/index.php?search=John%20Snow) in 1854\. In [a certain London neighborhood](https://en.wikipedia.org/wiki/1854_Broad_Street_cholera_outbreak), an outbreak of [*cholera*](https://en.wikipedia.org/w/index.php?search=cholera) killed 127 people in three days, resulting in a mass exodus of the local residents. At the time it was thought that cholera was an airborne disease caused by breathing foul air. Snow was critical of this theory, and set about discovering the true transmission mechanism.
Consider how you might use data to approach this problem. At the hospital, they might have a list of all of the patients that died of cholera. Those data might look like what is presented in Table [17\.1](ch-spatial.html#tab:cholera-data).
Table 17\.1: Hypothetical data from 1854 cholera outbreak.
| Date | Last\_Name | First\_Name | Address | Age | Cause\_death |
| --- | --- | --- | --- | --- | --- |
| Aug 31, 1854 | Jones | Thomas | 26 Broad St. | 37 | cholera |
| Aug 31, 1854 | Jones | Mary | 26 Broad St. | 11 | cholera |
| Oct 1, 1854 | Warwick | Martin | 14 Broad St. | 23 | cholera |
Snow’s genius was in focusing his analysis on the `Address` column. In a literal sense, the `Address` variable is a character vector—it stores text. This text has no obvious medical significance with respect to cholera. But we as human beings recognize that these strings of text encode *geographic locations*—they are [*geospatial*](https://en.wikipedia.org/w/index.php?search=geospatial) data. Snow’s insight into this outbreak involved simply plotting these data in a geographically relevant way (see Figure [17\.2](ch-spatial.html#fig:cholera)).
The `CholeraDeaths` data are included in the **mdsr** package. When you plot the address of each person who died from cholera, you get something similar to what is shown in Figure [17\.1](ch-spatial.html#fig:snow-simple).
```
library(tidyverse)
library(mdsr)
library(sf)
plot(CholeraDeaths["Count"])
```
Figure 17\.1: Context\-free plot of 1854 cholera deaths.
While you might see certain patterns in these data, there is no [*context*](https://en.wikipedia.org/w/index.php?search=context) provided. The [map that Snow actually drew](https://en.wikipedia.org/wiki/1854_Broad_Street_cholera_outbreak#/media/File:Snow-cholera-map-1.jpg) is presented in Figure [17\.2](ch-spatial.html#fig:cholera). The underlying map of the London streets provides helpful context that makes the information in Figure [17\.1](ch-spatial.html#fig:snow-simple) intelligible.
Figure 17\.2: John Snow’s original map of the 1854 Broad Street cholera outbreak. Source: Wikipedia.
However, Snow’s insight was driven by another set of data—the locations of the street\-side water pumps. It may be difficult to see in the reproduction, but in addition to the lines indicating cholera deaths, there are labeled circles indicating the water pumps. A quick study of the map reveals that nearly all of the cholera cases are clustered around a single pump on the center of Broad Street. Snow was able to convince local officials that this pump was the probable cause of the epidemic.
While the story presented here is factual, it may be more legend than spatial data analysts would like to believe. Much of the causality is dubious: Snow himself believed that the outbreak petered out more or less on its own, and he did not create his famous map until afterwards. Nevertheless, his map was influential in the realization among doctors that cholera is a waterborne—rather than airborne—disease.
Our idealized conception of Snow’s use of spatial analysis typifies a successful episode in data science. First, the key insight was made by combining three sources of data: the cholera deaths, the locations of the water pumps, and the London street map. Second, while we now have the capability to create a spatial model directly from the data that might have led to the same conclusion, constructing such a model is considerably more difficult than simply plotting the data in the proper context. Moreover, the plot itself—properly contextualized—is probably more convincing to most people than a statistical model anyway. Human beings have a very strong intuitive ability to see spatial patterns in data, but computers have no such sense. Third, the problem was only resolved when the data\-based evidence was combined with a plausible model that explained the physical phenomenon. That is, Snow *was a doctor* and his knowledge of disease transmission was sufficient to convince his colleagues that cholera was not transmitted via the air.[31](#fn31)
17\.2 Spatial data structures
-----------------------------
Spatial data are often stored in special data structures (i.e., not just `data.frame`s). The most commonly used format for spatial data is called a [*shapefile*](https://en.wikipedia.org/w/index.php?search=shapefile).
Another common format is [*KML*](https://en.wikipedia.org/w/index.php?search=KML).
There are many other formats, and while mastering the details of any of these formats is not realistic in this treatment, there are some important basic notions that one must have in order to work with spatial data.
Shapefiles evolved as the native file format of the ArcView program developed by the [*Environmental Systems Research Institute*](https://en.wikipedia.org/w/index.php?search=Environmental%20Systems%20Research%20Institute) ([*Esri*](https://en.wikipedia.org/w/index.php?search=Esri)) and have since become an open specification. They can be downloaded from many different government websites and other locations that publish spatial data.
Spatial data consists not of rows and columns, but of geometric objects like points, lines, and polygons. Shapefiles contain vector\-based instructions for drawing the boundaries of countries, counties, and towns, etc. As such, shapefiles are richer—and more complicated—data containers than simple data frames. Working with shapefiles in **R** can be challenging, but the major benefit is that shapefiles allow you to provide your data with a geographic context. The results can be stunning.
First, the term “shapefile” is somewhat of a [*misnomer*](https://en.wikipedia.org/w/index.php?search=misnomer), as there are several files that you must have in order to read spatial data.
These files have extensions like `.shp`, `.shx`, and `.dbf`, and they are typically stored in a common directory.
There are *many* packages for **R** that specialize in working with spatial data, but we will focus on the most recent: **sf**. This package provides a **tidyverse**\-friendly set of class definitions and functions for spatial objects in **R**. These will have the class `sf`. (Under the hood, **sf** wraps functionality previously provided by the **rgdal** and **rgeos** packages.[32](#fn32))
To get a sense of how these work, we will make a recreation of Snow’s cholera map. First, download and unzip this file: (<http://rtwilson.com/downloads/SnowGIS_SHP.zip>). After loading the **sf** package, we explore the directory that contains our shapefiles.
```
library(sf)
# The correct path on your computer may be different;
# here `root` is assumed to point to the directory into which you unzipped SnowGIS_SHP.zip
dsn <- fs::path(root, "snow", "SnowGIS_SHP")
list.files(dsn)
```
```
[1] "Cholera_Deaths.dbf" "Cholera_Deaths.prj"
[3] "Cholera_Deaths.sbn" "Cholera_Deaths.sbx"
[5] "Cholera_Deaths.shp" "Cholera_Deaths.shx"
[7] "OSMap_Grayscale.tfw" "OSMap_Grayscale.tif"
[9] "OSMap_Grayscale.tif.aux.xml" "OSMap_Grayscale.tif.ovr"
[11] "OSMap.tfw" "OSMap.tif"
[13] "Pumps.dbf" "Pumps.prj"
[15] "Pumps.sbx" "Pumps.shp"
[17] "Pumps.shx" "README.txt"
[19] "SnowMap.tfw" "SnowMap.tif"
[21] "SnowMap.tif.aux.xml" "SnowMap.tif.ovr"
```
Note that there are six files with the name `Cholera_Deaths` and another five with the name `Pumps`. These correspond to two different sets of shapefiles called [*layers*](https://en.wikipedia.org/w/index.php?search=layers), as revealed by the `st_layers()` function.
```
st_layers(dsn)
```
```
Driver: ESRI Shapefile
Available layers:
layer_name geometry_type features fields
1 Pumps Point 8 1
2 Cholera_Deaths Point 250 2
```
We’ll begin by loading the `Cholera_Deaths` layer into **R** using the `st_read()` function. Note that `Cholera_Deaths` is a `data.frame` in addition to being an `sf` object. It contains 250 [*simple features*](https://en.wikipedia.org/w/index.php?search=simple%20features) – these are the rows of the data frame, each corresponding to a different spatial object. In this case, the geometry type is `POINT` for all 250 rows. We will return to discussion of the mysterious `projected CRS` in Section [17\.3\.2](ch-spatial.html#sec:projections), but for now simply note that a specific geographic projection is encoded in these files.
```
CholeraDeaths <- st_read(dsn, layer = "Cholera_Deaths")
```
```
Reading layer `Cholera_Deaths' from data source
`/home/bbaumer/Dropbox/git/mdsr-book/mdsr2e/data/shp/snow/SnowGIS_SHP'
using driver `ESRI Shapefile'
Simple feature collection with 250 features and 2 fields
Geometry type: POINT
Dimension: XY
Bounding box: xmin: 529000 ymin: 181000 xmax: 530000 ymax: 181000
Projected CRS: OSGB 1936 / British National Grid
```
```
class(CholeraDeaths)
```
```
[1] "sf" "data.frame"
```
```
CholeraDeaths
```
```
Simple feature collection with 250 features and 2 fields
Geometry type: POINT
Dimension: XY
Bounding box: xmin: 529000 ymin: 181000 xmax: 530000 ymax: 181000
Projected CRS: OSGB 1936 / British National Grid
First 10 features:
Id Count geometry
1 0 3 POINT (529309 181031)
2 0 2 POINT (529312 181025)
3 0 1 POINT (529314 181020)
4 0 1 POINT (529317 181014)
5 0 4 POINT (529321 181008)
6 0 2 POINT (529337 181006)
7 0 2 POINT (529290 181024)
8 0 2 POINT (529301 181021)
9 0 3 POINT (529285 181020)
10 0 2 POINT (529288 181032)
```
There are data associated with each of these points.
Every `sf` object is also a `data.frame` that stores values that correspond to each observation.
In this case, for each of the points, we have an associated `Id` number and a `Count` of the number of deaths at that location. To plot these data, we can simply use the `plot()` generic function as we did in Figure [17\.1](ch-spatial.html#fig:snow-simple). However, in the next section, we will illustrate how `sf` objects can be integrated into a **ggplot2** workflow.
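For reference, the base\-graphics call behind a plot like Figure 17\.1 is essentially a one\-liner:

```
# A quick base-graphics view of the cholera deaths, shaded by the Count column
plot(CholeraDeaths["Count"])
```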
17\.3 Making maps
-----------------
In addition to providing geospatial processing capabilities, **sf** also provides spatial plotting extensions that work seamlessly with **ggplot2**. The `geom_sf()` function extends the grammar of graphics embedded in **ggplot2** that we explored in Chapter [3](ch-vizII.html#ch:vizII) to provide native support for plotting spatial objects. Thus, we are only a few steps away from having some powerful mapping functionality.
### 17\.3\.1 Static maps
The `geom_sf()` function allows you to plot geospatial objects in any **ggplot2** object. Since the \(x\) and \(y\) coordinates are implied by the geometry of the **sf** object, you don’t have to explicitly bind the \(x\) aesthetic (see Chapter [3](ch-vizII.html#ch:vizII)) to longitude and the \(y\) aesthetic to latitude. Your map looks like this:
```
ggplot(CholeraDeaths) +
geom_sf()
```
Figure 17\.3: A simple **ggplot2** of the cholera deaths, with little context provided.
Figure [17\.3](ch-spatial.html#fig:cholera-ggplot) is an improvement over what you would get from `plot()`. It is mostly clear what the coordinates along the axes are telling us (the units are in fact degrees), but we still don’t have any context for what we are seeing. What we really want is to overlay these points on the London street map—and this is exactly what **ggspatial** lets us do.
The `annotation_map_tile()` function adds a layer of map tiles pulled from [Open Street Map](https://www.openstreetmap.org/). We can control the `zoom` level, as well as the `type`.
Here, we also map the number of deaths at each location to the size of the dot.
```
library(ggspatial)
ggplot(CholeraDeaths) +
annotation_map_tile(type = "osm", zoomin = 0) +
geom_sf(aes(size = Count), alpha = 0.7)
```
Figure 17\.4: Erroneous reproduction of John Snow’s original map of the 1854 cholera outbreak. The dots representing the deaths from cholera are off by hundreds of meters.
We note that [*John Snow* is now the name of a pub](https://londonist.com/pubs/pubs/pubs/john-snow) on the corner of Broadwick (formerly Broad) Street and Lexington Street.
But look carefully at Figure [17\.4](ch-spatial.html#fig:snow-wrong) and Figure [17\.2](ch-spatial.html#fig:cholera). You will not see the points in the right places. The center of the cluster is not on Broadwick Street, and some of the points are in the middle of the street (where there are no residences). Why?
The coordinates in the `CholeraDeaths` object have unfamiliar values, as we can see by accessing the [*bounding box*](https://en.wikipedia.org/w/index.php?search=bounding%20box) of the object.
```
st_bbox(CholeraDeaths)
```
```
xmin ymin xmax ymax
529160 180858 529656 181306
```
Both `CholeraDeaths` and the map tiles retrieved by the **ggspatial** package have geospatial coordinates, but those coordinates are not in the same units.
While it is true that `annotation_map_tile()` performs some on\-the\-fly coordinate translation, there remains a discrepancy between our two geospatial data sources.
To understand how to get these two spatial data sources to work together properly, we have to understand projections.
### 17\.3\.2 Projections
The Earth happens to be an oblate spheroid—a three\-dimensional flattened sphere. Yet we would like to create two\-dimensional representations of the Earth that fit on pages or computer screens. The process of converting locations in a three\-dimensional [*geographic coordinate system*](https://en.wikipedia.org/w/index.php?search=geographic%20coordinate%20system) to a two\-dimensional representation is called [*projection*](https://en.wikipedia.org/w/index.php?search=projection).
Once people figured out that the world was not flat, the question of how to project it followed.
Since people have been making nautical maps for centuries, it would be nice if the study of map projection had resulted in a simple, accurate, universally\-accepted projection system.
Unfortunately, that is not the case. It is simply not possible to faithfully preserve all properties present in a three\-dimensional space in a two\-dimensional space. Thus there is no one best projection system—each has its own advantages and disadvantages.
As noted previously, because the Earth is not a perfect sphere, the mathematics behind many of these projections is non\-trivial.
Two properties that a projection system might preserve—though not simultaneously—are *shape/angle* and *area*. That is, a projection system may be constructed in such a way that it faithfully represents the relative sizes of land masses in two dimensions. The Mercator projection shown at left in Figure [17\.5](ch-spatial.html#fig:mercator) is a famous example of a projection system that does *not* preserve area. Its popularity is a result of its *angle*\-preserving nature, which makes it useful for navigation. Unfortunately, it greatly distorts the size of features near the poles, where land masses become infinitely large.
```
library(mapproj)
library(maps)
map("world", projection = "mercator", wrap = TRUE)
map("world", projection = "cylequalarea", param = 45, wrap = TRUE)
```
Figure 17\.5: The world according to the Mercator (left) and Gall–Peters (right) projections.
The Gall–Peters projection shown at right in Figure [17\.5](ch-spatial.html#fig:mercator) does preserve area. Note the difference between the two projections when comparing the size of Greenland to Africa. In reality (as shown in the Gall–Peters projection) Africa is 14 times larger than Greenland. However, because Greenland is much closer to the North Pole, its area is greatly distorted in the Mercator projection, making it appear to be larger than Africa.
This particular example—while illustrative—became famous because of the socio\-political [controversy](https://en.wikipedia.org/wiki/Gall-Peters_projection#Controversy) in which these projections became embroiled. Beginning in the 1960s, a German filmmaker named [Arno Peters](https://en.wikipedia.org/w/index.php?search=Arno%20Peters) alleged that the commonly\-used Mercator projection was an instrument of [*cartographic imperialism*](https://en.wikipedia.org/w/index.php?search=cartographic%20imperialism), in that it falsely focused attention on Northern and Southern countries at the expense of those in Africa and South America closer to the equator. Peters had a point—the Mercator projection has many shortcomings—but unfortunately his claims about the virtues of the Gall–Peters projection (particularly its originality) were mostly false. Peters either ignored or was not aware that cartographers had long campaigned against the Mercator.
Nevertheless, you should be aware that the “default” projection can be very misleading. As a data scientist, your choice of how to project your data can have a direct influence on what viewers will take away from your data maps. Simply ignoring the implications of projections is not an ethically tenable position! While we can’t offer a comprehensive list of map projections here, two common general\-purpose map projections are the [*Lambert conformal conic*](https://en.wikipedia.org/w/index.php?search=Lambert%20conformal%20conic) projection and the [*Albers equal\-area conic*](https://en.wikipedia.org/w/index.php?search=Albers%20equal-area%20conic) projection (see Figure [17\.6](ch-spatial.html#fig:lambert)). In the former, angles are preserved, while in the latter neither scale nor shape are preserved, but gross distortions of both are minimized.
```
map(
"state", projection = "lambert",
parameters = c(lat0 = 20, lat1 = 50), wrap = TRUE
)
map(
"state", projection = "albers",
parameters = c(lat0 = 20, lat1 = 50), wrap = TRUE
)
```
Figure 17\.6: The contiguous United States according to the Lambert conformal conic (left) and Albers equal area (right) projections. We have specified that the scales are true on the 20th and 50th parallels.
Always think about how your data are projected when making a map.
A [*coordinate reference system*](https://en.wikipedia.org/w/index.php?search=coordinate%20reference%20system) (CRS) is needed to keep track of geographic locations. Every spatially\-aware object in **R** can have a projection. Three formats that are common for storing information about the projection of a geospatial object are [*EPSG*](https://en.wikipedia.org/w/index.php?search=EPSG), [*PROJ.4*](https://en.wikipedia.org/w/index.php?search=PROJ.4), and [*WKT*](https://en.wikipedia.org/w/index.php?search=WKT). An EPSG code is simply an integer, a PROJ.4 string is a cryptic line of text, and WKT is a longer, structured text description. All three can be retrieved (or set) using the `st_crs()` command.
```
st_crs(CholeraDeaths)
```
```
Coordinate Reference System:
User input: OSGB 1936 / British National Grid
wkt:
PROJCRS["OSGB 1936 / British National Grid",
BASEGEOGCRS["OSGB 1936",
DATUM["OSGB 1936",
ELLIPSOID["Airy 1830",6377563.396,299.3249646,
LENGTHUNIT["metre",1]]],
PRIMEM["Greenwich",0,
ANGLEUNIT["degree",0.0174532925199433]],
ID["EPSG",4277]],
CONVERSION["British National Grid",
METHOD["Transverse Mercator",
ID["EPSG",9807]],
PARAMETER["Latitude of natural origin",49,
ANGLEUNIT["degree",0.0174532925199433],
ID["EPSG",8801]],
PARAMETER["Longitude of natural origin",-2,
ANGLEUNIT["degree",0.0174532925199433],
ID["EPSG",8802]],
PARAMETER["Scale factor at natural origin",0.9996012717,
SCALEUNIT["unity",1],
ID["EPSG",8805]],
PARAMETER["False easting",400000,
LENGTHUNIT["metre",1],
ID["EPSG",8806]],
PARAMETER["False northing",-100000,
LENGTHUNIT["metre",1],
ID["EPSG",8807]]],
CS[Cartesian,2],
AXIS["(E)",east,
ORDER[1],
LENGTHUNIT["metre",1]],
AXIS["(N)",north,
ORDER[2],
LENGTHUNIT["metre",1]],
USAGE[
SCOPE["unknown"],
AREA["UK - Britain and UKCS 49°46'N to 61°01'N, 7°33'W to 3°33'E"],
BBOX[49.75,-9.2,61.14,2.88]],
ID["EPSG",27700]]
```
It should be clear by now that the science of map projection is complicated, and it is likely unclear how to decipher this representation of the projection, which is in a format called [*Well\-Known Text*](https://en.wikipedia.org/w/index.php?search=Well-Known%20Text). What we can say is that `METHOD["Transverse Mercator"]` indicates that these data are encoded using a [*Transverse Mercator*](https://en.wikipedia.org/w/index.php?search=Transverse%20Mercator) projection. The [*Airy ellipsoid*](https://en.wikipedia.org/w/index.php?search=Airy%20ellipsoid) is being used, and the units are meters.
The equivalent EPSG system is 27700\.
The [*datum*](https://en.wikipedia.org/w/index.php?search=datum)—or model of the Earth—is OSGB 1936, which is also known as the [*British National Grid*](https://en.wikipedia.org/w/index.php?search=British%20National%20Grid).
The rest of the terms in the string are parameters that specify properties of that projection. The unfamiliar coordinates that we saw earlier for the `CholeraDeaths` data set were relative to this CRS.
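If you only need the numeric EPSG identifier rather than the full WKT description, you can pull it out of the CRS object directly (a quick sketch; the same accessor appears again below):

```
# For the CholeraDeaths layer this should return 27700
st_crs(CholeraDeaths)$epsg
```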
There are many CRSs, but a few are most common. A set of EPSG ([*European Petroleum Survey Group*](https://en.wikipedia.org/w/index.php?search=European%20Petroleum%20Survey%20Group)) codes provides a shorthand for the full descriptions (like the one shown above). The most commonly\-used are:
* [EPSG:4326](https://epsg.io/4326) \- Also known as [WGS84](https://en.wikipedia.org/wiki/World_Geodetic_System#WGS84), this is the standard for GPS systems and Google Earth.
* [EPSG:3857](https://epsg.io/3857) \- A Mercator projection used in map tiles[33](#fn33) by Google Maps, Open Street Map, etc.
* [EPSG:27700](https://epsg.io/27700) \- Also known as OSGB 1936, or the British National Grid: United Kingdom Ordnance Survey. It is commonly used in Britain.
The `st_crs()` function will translate from the shorthand EPSG code to the full\-text PROJ.4 strings and WKT.
```
st_crs(4326)$epsg
```
```
[1] 4326
```
```
st_crs(3857)$Wkt
```
```
[1] "PROJCS[\"WGS 84 / Pseudo-Mercator\",GEOGCS[\"WGS 84\",DATUM[\"WGS_1984\",SPHEROID[\"WGS 84\",6378137,298.257223563,AUTHORITY[\"EPSG\",\"7030\"]],AUTHORITY[\"EPSG\",\"6326\"]],PRIMEM[\"Greenwich\",0,AUTHORITY[\"EPSG\",\"8901\"]],UNIT[\"degree\",0.0174532925199433,AUTHORITY[\"EPSG\",\"9122\"]],AUTHORITY[\"EPSG\",\"4326\"]],PROJECTION[\"Mercator_1SP\"],PARAMETER[\"central_meridian\",0],PARAMETER[\"scale_factor\",1],PARAMETER[\"false_easting\",0],PARAMETER[\"false_northing\",0],UNIT[\"metre\",1,AUTHORITY[\"EPSG\",\"9001\"]],AXIS[\"Easting\",EAST],AXIS[\"Northing\",NORTH],EXTENSION[\"PROJ4\",\"+proj=merc +a=6378137 +b=6378137 +lat_ts=0 +lon_0=0 +x_0=0 +y_0=0 +k=1 +units=m +nadgrids=@null +wktext +no_defs\"],AUTHORITY[\"EPSG\",\"3857\"]]"
```
```
st_crs(27700)$proj4string
```
```
[1] "+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 +x_0=400000 +y_0=-100000 +ellps=airy +units=m +no_defs"
```
The `CholeraDeaths` points did not show up on our earlier map because we did not project them into the same coordinate system as the map tiles. Since we can’t project the map tiles, we had better project the points in the `CholeraDeaths` data. As noted above, Google Maps tiles (and Open Street Map tiles) are projected in the `epsg:3857` system. However, they are confusingly returned with coordinates in the `epsg:4326` system. Thus, we use the `st_transform()` function to project our `CholeraDeaths` data to `epsg:4326`.
```
cholera_4326 <- CholeraDeaths %>%
st_transform(4326)
```
Note that the *bounding box* of our transformed object is now expressed in the familiar units of latitude and longitude.
```
st_bbox(cholera_4326)
```
```
xmin ymin xmax ymax
-0.140 51.512 -0.133 51.516
```
Unfortunately, the code below *still* produces a map with the points in the wrong places.
```
ggplot(cholera_4326) +
annotation_map_tile(type = "osm", zoomin = 0) +
geom_sf(aes(size = Count), alpha = 0.7)
```
A careful reading of the help file for `spTransform-methods()` (the underlying machinery) gives some clues to our mistake.
```
help("spTransform-methods", package = "rgdal")
```
> Not providing the appropriate `+datum` and `+towgs84` tags may lead to coordinates being out by hundreds of meters. Unfortunately, there is no easy way to provide this information: The user has to know the correct metadata for the data being used, even if this can be hard to discover.
That seems like our problem!
The `+datum` and `+towgs84` arguments were missing from our PROJ.4 string.
```
st_crs(CholeraDeaths)$proj4string
```
```
[1] "+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 +x_0=400000 +y_0=-100000 +ellps=airy +units=m +no_defs"
```
The `CholeraDeaths` object has all of the same specifications as [`epsg:27700`](https://epsg.io/27700) but without the missing `+datum` and `+towgs84` tags. Furthermore, the documentation for the original data source suggests using `epsg:27700`. Thus, we first assert that the `CholeraDeaths` data is in `epsg:27700`.
Then, projecting to `epsg:4326` works as intended.
```
cholera_latlong <- CholeraDeaths %>%
st_set_crs(27700) %>%
st_transform(4326)
snow <- ggplot(cholera_latlong) +
annotation_map_tile(type = "osm", zoomin = 0) +
geom_sf(aes(size = Count))
```
All that remains is to add the locations of the pumps.
```
pumps <- st_read(dsn, layer = "Pumps")
```
```
Reading layer `Pumps' from data source
`/home/bbaumer/Dropbox/git/mdsr-book/mdsr2e/data/shp/snow/SnowGIS_SHP'
using driver `ESRI Shapefile'
Simple feature collection with 8 features and 1 field
Geometry type: POINT
Dimension: XY
Bounding box: xmin: 529000 ymin: 181000 xmax: 530000 ymax: 181000
Projected CRS: OSGB 1936 / British National Grid
```
```
pumps_latlong <- pumps %>%
st_set_crs(27700) %>%
st_transform(4326)
snow +
geom_sf(data = pumps_latlong, size = 3, color = "red")
```
Figure 17\.7: Recreation of John Snow’s original map of the 1854 cholera outbreak. The size of each black dot is proportional to the number of people who died from cholera at that location. The red dots indicate the location of public water pumps. The strong clustering of deaths around the water pump on Broad(wick) Street suggests that perhaps the cholera was spread through water obtained at that pump.
In Figure [17\.7](ch-spatial.html#fig:snow-final), we finally see the clarity that judicious uses of spatial data in the proper context can provide. It is not necessary to fit a statistical model to these data to see that nearly all of the cholera deaths occurred in people closest to the Broad Street water pump, which was later found to be drawing fecal bacteria from a nearby cesspit.
### 17\.3\.3 Dynamic maps with `leaflet`
Leaflet is a powerful open\-source JavaScript library for building interactive maps in HTML. The corresponding **R** package [**leaflet**](https://rstudio.github.io/leaflet/) brings this functionality to **R** using the **htmlwidgets** platform introduced in Chapter [14](ch-vizIII.html#ch:vizIII).
Although the commands are different, the architecture is very similar to **ggplot2**.
However, instead of putting data\-based layers on top of a static map, **leaflet** allows you to put data\-based layers on top of an interactive map.
Because **leaflet** renders as HTML, you won’t be able to take advantage of our plots in the printed book (since they are displayed as screen shots).
We encourage you to run this code on your own and explore interactively.
A **leaflet** map widget is created with the `leaflet()` command. We will subsequently add layers to this widget. The first layer that we will add is a [*tile*](https://en.wikipedia.org/w/index.php?search=tile) layer containing all of the static map information, which by default comes from OpenStreetMap. The second layer we will add here is a marker, which designates a point location. Note how the `addMarkers()` function can take a `data` argument, just like a `geom_*()` layer in **ggplot2** would.
```
white_house <- tibble(
address = "The White House, Washington, DC"
) %>%
tidygeocoder::geocode(address, method = "osm")
library(leaflet)
white_house_map <- leaflet() %>%
addTiles() %>%
addMarkers(data = white_house)
white_house_map
```
Figure 17\.8: A **leaflet** plot of the White House.
When you render this in **RStudio**, or in an **R** Markdown document with HTML output, or in a Web browser using Shiny, you will be able to scroll and zoom on the fly. In Figure [17\.8](ch-spatial.html#fig:leaflet-white-house) we display our version.
We can also add a pop\-up to provide more information about a particular location, as shown in Figure [17\.9](ch-spatial.html#fig:leaflet-white-house-popup).
```
white_house <- white_house %>%
mutate(
title = "The White House",
street_address = "1600 Pennsylvania Ave"
)
white_house_map %>%
addPopups(
data = white_house,
popup = ~paste0("<b>", title, "</b></br>", street_address)
)
```
Figure 17\.9: A **leaflet** plot of the White House with a popup.
Although **leaflet** and **ggplot2** are not syntactically equivalent, they are conceptually similar.
Because the map tiles provide geographic context, the dynamic, zoomable, scrollable maps created by **leaflet** can be more informative than the static maps created by **ggplot2**.
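As a bridge back to the cholera example, here is a minimal sketch of the same point data rendered with **leaflet**; the circle\-radius scaling is our own choice and not part of the original analysis.

```
library(leaflet)
# Interactive version of the cholera map: one circle per location,
# sized (loosely) by the number of deaths recorded there
leaflet(cholera_latlong) %>%
  addTiles() %>%
  addCircleMarkers(radius = ~2 * Count, stroke = FALSE, fillOpacity = 0.7)
```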
17\.4 Extended example: Congressional districts
-----------------------------------------------
In the 2012 presidential election, the [*Republican*](https://en.wikipedia.org/w/index.php?search=Republican) challenger [Mitt Romney](https://en.wikipedia.org/w/index.php?search=Mitt%20Romney) [narrowly defeated](https://en.wikipedia.org/wiki/United_States_presidential_election_in_North_Carolina,_2012) President [Barack Obama](https://en.wikipedia.org/w/index.php?search=Barack%20Obama) in the state of [*North Carolina*](https://en.wikipedia.org/w/index.php?search=North%20Carolina), winning 50\.4% of the popular votes, and thereby earning all 15 electoral votes. Obama had won North Carolina in 2008—becoming the first [*Democrat*](https://en.wikipedia.org/w/index.php?search=Democrat) to do so since 1976\. As a swing state, North Carolina has voting patterns that are particularly interesting, and—as we will see—contentious.
The roughly 50/50 split in the popular vote suggests that there are about the same number of Democratic and Republican voters in the state.
In the fall of 2020,
10 of North Carolina’s 13 congressional representatives were Republican (with one seat vacant at the time).
How can this be?
In this case, geospatial data can help us understand.
### 17\.4\.1 Election results
Our first step is to download the results of the 2012 congressional elections from the Federal Election
Commission. These data are available through the **fec12** package.[34](#fn34)
```
library(fec12)
```
Note that we have slightly more than 435 elections, since these data include U.S. territories like Puerto Rico and the Virgin Islands.
```
results_house %>%
group_by(state, district_id) %>%
summarize(N = n()) %>%
nrow()
```
```
[1] 445
```
According to the [*United States Constitution*](https://en.wikipedia.org/w/index.php?search=United%20States%20Constitution), congressional districts are apportioned according to population from the 2010 U.S. Census. In practice, we see that this is not quite the case. These are the 10 candidates who earned the most votes in the general election.
```
results_house %>%
left_join(candidates, by = "cand_id") %>%
select(state, district_id, cand_name, party, general_votes) %>%
arrange(desc(general_votes))
```
```
# A tibble: 2,343 × 5
state district_id cand_name party general_votes
<chr> <chr> <chr> <chr> <dbl>
1 PR 00 PIERLUISI, PEDRO R NPP 905066
2 PR 00 ALOMAR, RAFAEL COX PPD 881181
3 PA 02 FATTAH, CHAKA MR. D 318176
4 WA 07 MCDERMOTT, JAMES D 298368
5 MI 14 PETERS, GARY D 270450
6 MO 01 CLAY, WILLIAM LACY JR D 267927
7 WI 02 POCAN, MARK D 265422
8 OR 03 BLUMENAUER, EARL D 264979
9 MA 08 LYNCH, STEPHEN F D 263999
10 MN 05 ELLISON, KEITH MAURICE DFL 262102
# … with 2,333 more rows
```
Note that the representatives from Puerto Rico earned nearly three times as many votes as any other Congressional representative.
We are interested in the results from North Carolina.
We begin by creating a data frame specific to that state, with the votes aggregated by congressional district.
As there are 13 districts, the `nc_results` data frame will have exactly 13 rows.
```
district_elections <- results_house %>%
mutate(district = parse_number(district_id)) %>%
group_by(state, district) %>%
summarize(
N = n(),
total_votes = sum(general_votes, na.rm = TRUE),
d_votes = sum(ifelse(party == "D", general_votes, 0), na.rm = TRUE),
r_votes = sum(ifelse(party == "R", general_votes, 0), na.rm = TRUE)
) %>%
mutate(
other_votes = total_votes - d_votes - r_votes,
r_prop = r_votes / total_votes,
winner = ifelse(r_votes > d_votes, "Republican", "Democrat")
)
nc_results <- district_elections %>%
filter(state == "NC")
nc_results %>%
select(-state)
```
```
Adding missing grouping variables: `state`
```
```
# A tibble: 13 × 9
# Groups: state [1]
state district N total_votes d_votes r_votes other_votes r_prop
<chr> <dbl> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
1 NC 1 4 338066 254644 77288 6134 0.229
2 NC 2 8 311397 128973 174066 8358 0.559
3 NC 3 3 309885 114314 195571 0 0.631
4 NC 4 4 348485 259534 88951 0 0.255
5 NC 5 3 349197 148252 200945 0 0.575
6 NC 6 4 364583 142467 222116 0 0.609
7 NC 7 4 336736 168695 168041 0 0.499
8 NC 8 8 301824 137139 160695 3990 0.532
9 NC 9 13 375690 171503 194537 9650 0.518
10 NC 10 6 334849 144023 190826 0 0.570
11 NC 11 11 331426 141107 190319 0 0.574
12 NC 12 3 310908 247591 63317 0 0.204
13 NC 13 5 370610 160115 210495 0 0.568
# … with 1 more variable: winner <chr>
```
We see that the distribution of the number of votes cast across congressional districts in North Carolina is very narrow—all of the districts had between 301,824 and 375,690 votes cast.
```
nc_results %>%
skim(total_votes) %>%
select(-na)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var state n mean sd p0 p25 p50 p75 p100
<chr> <chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 total_votes NC 13 337204. 24175. 301824 311397 336736 349197 375690
```
However, as the close presidential election suggests, the votes of North Carolinians were roughly evenly divided between Democratic and Republican congressional candidates. In fact, state Democrats earned a narrow majority—50\.6%—of the votes. Yet the Republicans won 9 of the 13 races.[35](#fn35)
```
nc_results %>%
summarize(
N = n(),
state_votes = sum(total_votes),
state_d = sum(d_votes),
state_r = sum(r_votes)
) %>%
mutate(
d_prop = state_d / state_votes,
r_prop = state_r / state_votes
)
```
```
# A tibble: 1 × 7
state N state_votes state_d state_r d_prop r_prop
<chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
1 NC 13 4383656 2218357 2137167 0.506 0.488
```
One clue is to look more closely at the distribution of the proportion of Republican votes in each district.
```
nc_results %>%
select(district, r_prop, winner) %>%
arrange(desc(r_prop))
```
```
# A tibble: 13 × 4
# Groups: state [1]
state district r_prop winner
<chr> <dbl> <dbl> <chr>
1 NC 3 0.631 Republican
2 NC 6 0.609 Republican
3 NC 5 0.575 Republican
4 NC 11 0.574 Republican
5 NC 10 0.570 Republican
6 NC 13 0.568 Republican
7 NC 2 0.559 Republican
8 NC 8 0.532 Republican
9 NC 9 0.518 Republican
10 NC 7 0.499 Democrat
11 NC 4 0.255 Democrat
12 NC 1 0.229 Democrat
13 NC 12 0.204 Democrat
```
In the nine districts that Republicans won, their share of the vote ranged from a narrow (51\.8%) to a comfortable (63\.1%) majority. With the exception of the essentially even 7th district, the three districts that Democrats won were routs, with the Democratic candidate winning between 75% and 80% of the vote. Thus, although Democrats won more votes across the state, most of their votes were clustered within three overwhelmingly Democratic districts, allowing Republicans to prevail with moderate majorities across the remaining nine districts.
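As a quick check on those margins (a sketch; `d_prop` is a column we compute here rather than one already present in `nc_results`):

```
# Democratic share of the vote in the four districts that Democrats won
nc_results %>%
  filter(winner == "Democrat") %>%
  mutate(d_prop = d_votes / total_votes) %>%
  select(district, d_prop) %>%
  arrange(desc(d_prop))
```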
Conventional wisdom states that Democratic voters tend to live in cities, so perhaps they were simply clustered in three cities, while Republican voters were spread out across the state in more rural areas.
Let’s look more closely at the districts.
### 17\.4\.2 Congressional districts
To do this, we first download the congressional district shapefiles for the 113th Congress.
```
src <- "http://cdmaps.polisci.ucla.edu/shp/districts113.zip"
dsn_districts <- usethis::use_zip(src, destdir = fs::path("data_large"))
```
Next, we read these shapefiles into **R** as an `sf` object.
```
library(sf)
st_layers(dsn_districts)
```
```
Driver: ESRI Shapefile
Available layers:
layer_name geometry_type features fields
1 districts113 Polygon 436 15
```
```
districts <- st_read(dsn_districts, layer = "districts113") %>%
mutate(DISTRICT = parse_number(as.character(DISTRICT)))
```
```
Reading layer `districts113' from data source
`/home/bbaumer/Dropbox/git/mdsr-book/mdsr2e/data_large/districtShapes'
using driver `ESRI Shapefile'
Simple feature collection with 436 features and 15 fields (with 1 geometry empty)
Geometry type: MULTIPOLYGON
Dimension: XY
Bounding box: xmin: -179 ymin: 18.9 xmax: 180 ymax: 71.4
Geodetic CRS: NAD83
```
```
glimpse(districts)
```
```
Rows: 436
Columns: 16
$ STATENAME <chr> "Louisiana", "Maine", "Maine", "Maryland", "Maryland", …
$ ID <chr> "022113114006", "023113114001", "023113114002", "024113…
$ DISTRICT <dbl> 6, 1, 2, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8…
$ STARTCONG <chr> "113", "113", "113", "113", "113", "113", "113", "113",…
$ ENDCONG <chr> "114", "114", "114", "114", "114", "114", "114", "114",…
$ DISTRICTSI <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ COUNTY <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ PAGE <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ LAW <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ NOTE <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ BESTDEC <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ FINALNOTE <chr> "{\"From US Census website\"}", "{\"From US Census webs…
$ RNOTE <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ LASTCHANGE <chr> "2016-05-29 16:44:10.857626", "2016-05-29 16:44:10.8576…
$ FROMCOUNTY <chr> "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", …
$ geometry <MULTIPOLYGON [°]> MULTIPOLYGON (((-91.8 30.9,..., MULTIPOLYG…
```
We are investigating North Carolina, so we will create a smaller object with only those shapes using the `filter()` function. Note that since every **sf** object is *also* a `data.frame`, we can use all of our usual **dplyr** tools on our geospatial objects.
```
nc_shp <- districts %>%
filter(STATENAME == "North Carolina")
nc_shp %>%
st_geometry() %>%
plot(col = gray.colors(nrow(nc_shp)))
```
Figure 17\.10: A basic map of the North Carolina congressional districts.
It is hard to see exactly what is going on here, but it appears as though there are some traditionally shaped districts, as well as some very strange and narrow districts. Unfortunately the map in Figure [17\.10](ch-spatial.html#fig:nc-basic) is devoid of context, so it is not very informative.
We need the `nc_results` data to provide that context.
### 17\.4\.3 Putting it all together
How to merge these two together? The simplest way is to use the `inner_join()` function from **dplyr** (see Chapter [5](ch-join.html#ch:join)).
Since both `nc_shp` and `nc_results` are `data.frame`s, this will append the election results data to the geospatial data.
Here, we merge the `nc_shp` polygons with the `nc_results` election data frame using the district as the key.
Note that there are 13 polygons and 13 rows.
```
nc_merged <- nc_shp %>%
st_transform(4326) %>%
inner_join(nc_results, by = c("DISTRICT" = "district"))
glimpse(nc_merged)
```
```
Rows: 13
Columns: 24
$ STATENAME <chr> "North Carolina", "North Carolina", "North Carolina", …
$ ID <chr> "037113114002", "037113114003", "037113114004", "03711…
$ DISTRICT <dbl> 2, 3, 4, 1, 5, 6, 7, 8, 9, 10, 11, 12, 13
$ STARTCONG <chr> "113", "113", "113", "113", "113", "113", "113", "113"…
$ ENDCONG <chr> "114", "114", "114", "114", "114", "114", "114", "114"…
$ DISTRICTSI <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ COUNTY <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ PAGE <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ LAW <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ NOTE <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ BESTDEC <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ FINALNOTE <chr> "{\"From US Census website\"}", "{\"From US Census web…
$ RNOTE <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ LASTCHANGE <chr> "2016-05-29 16:44:10.857626", "2016-05-29 16:44:10.857…
$ FROMCOUNTY <chr> "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "F",…
$ state <chr> "NC", "NC", "NC", "NC", "NC", "NC", "NC", "NC", "NC", …
$ N <int> 8, 3, 4, 4, 3, 4, 4, 8, 13, 6, 11, 3, 5
$ total_votes <dbl> 311397, 309885, 348485, 338066, 349197, 364583, 336736…
$ d_votes <dbl> 128973, 114314, 259534, 254644, 148252, 142467, 168695…
$ r_votes <dbl> 174066, 195571, 88951, 77288, 200945, 222116, 168041, …
$ other_votes <dbl> 8358, 0, 0, 6134, 0, 0, 0, 3990, 9650, 0, 0, 0, 0
$ r_prop <dbl> 0.559, 0.631, 0.255, 0.229, 0.575, 0.609, 0.499, 0.532…
$ winner <chr> "Republican", "Republican", "Democrat", "Democrat", "R…
$ geometry <MULTIPOLYGON [°]> MULTIPOLYGON (((-80.1 35.8,..., MULTIPOLYGON (((-78.3 …
```
### 17\.4\.4 Using `ggplot2`
We are now ready to plot our map of [North Carolina’s congressional districts](https://upload.wikimedia.org/wikipedia/commons/thumb/3/3d/North_Carolina_Congressional_Districts%2C_113th_Congress.tif/lossless-page1-1280px-North_Carolina_Congressional_Districts%2C_113th_Congress.tif.png). We start by using a simple red–blue color scheme for the districts.
```
nc <- ggplot(data = nc_merged, aes(fill = winner)) +
annotation_map_tile(zoom = 6, type = "osm") +
geom_sf(alpha = 0.5) +
scale_fill_manual("Winner", values = c("blue", "red")) +
geom_sf_label(aes(label = DISTRICT), fill = "white") +
theme_void()
nc
```
Figure 17\.11: Bichromatic choropleth map of the results of the 2012 congressional elections in North Carolina.
Figure [17\.11](ch-spatial.html#fig:nc-bicolor) shows that it was the Democratic districts that tended to be irregularly shaped. Districts 12 and 4 have narrow, tortured shapes—both were heavily Democratic. This plot tells us who won, but it doesn’t convey the subtleties we observed about the *margins* of victory. In the next plot, we use a continuous color scale to indicate the proportion of votes in each district. The `RdBu` diverging color palette comes from **RColorBrewer** (see Chapter [2](ch-vizI.html#ch:vizI)).
```
nc +
aes(fill = r_prop) +
scale_fill_distiller(
"Proportion\nRepublican",
palette = "RdBu",
limits = c(0.2, 0.8)
)
```
Figure 17\.12: Full color choropleth map of the results of the 2012 congressional elections in North Carolina. The clustering of Democratic voters is evident from the deeper blue in Democratic districts, versus the pale red in the more numerous Republican districts.
The `limits` argument to `scale_fill_distiller()` is important. This forces *red* to be the color associated with 80% Republican votes and *blue* to be associated with 80% Democratic votes. Without this argument, red would be associated with the maximum value in that data (about 63%) and blue with the minimum (about 20%). This would result in the neutral color of white not being at exactly 50%. When choosing color scales, it is critically important to make choices that reflect the data.
Choose colors and scales carefully when making maps.
In Figure [17\.12](ch-spatial.html#fig:nc-map), we can see that the three Democratic districts are “bluer” than the nine Republican districts are “red.” This reflects the clustering that we observed earlier. North Carolina has become one of the more egregious examples of [*gerrymandering*](https://en.wikipedia.org/w/index.php?search=gerrymandering), the phenomenon whereby legislators of one party use their re\-districting power for political gain. This is evident in Figure [17\.12](ch-spatial.html#fig:nc-map), where Democratic votes are concentrated in three curiously\-drawn congressional districts. This enables Republican lawmakers to have 69% (9/13\) of the voting power in Congress despite earning only 48\.8% of the votes.
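Both of those figures can be computed directly from `nc_results` (a quick sketch):

```
# Republican seat share versus statewide vote share in North Carolina
nc_results %>%
  summarize(
    seats_won = sum(winner == "Republican"),
    seat_share = mean(winner == "Republican"),
    vote_share = sum(r_votes) / sum(total_votes)
  )
```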
Since the 1st edition of this book, the North Carolina gerrymandering case went all the way to the [*United States Supreme Court*](https://en.wikipedia.org/w/index.php?search=United%20States%20Supreme%20Court).
In a landmark 2019 decision, the Justices ruled 5–4 in [Rucho v. Common Cause](https://www.supremecourt.gov/search.aspx?FileName=/docket/docketfiles/html/public/18-422.html) that while partisan gerrymanders such as those in North Carolina may be problematic for democracy, they are not reviewable by the judicial system.
### 17\.4\.5 Using `leaflet`
Was it true that the Democratic districts were woven together to contain many of the biggest cities in the state?
A similar map made in **leaflet** allows us to zoom in and pan out, making it easier to survey the districts.
First, we will define a color palette over the values \\(\[0,1]\\) that ranges from red to blue.
```
library(leaflet)
pal <- colorNumeric(palette = "RdBu", domain = c(0, 1))
```
To make our plot in **leaflet**, we have to add the tiles, and then the polygons defined by the `sf` object `nc_merged`.
Since we want red to be associated with the proportion of Republican votes, we will map `1 - r_prop` to color.
Note that we also add popups with the actual proportions, so that if you click on the map, it will show the district number and the proportion of Republican votes.
The resulting **leaflet** map is shown in Figure [17\.13](ch-spatial.html#fig:leaflet-nc).
```
leaflet_nc <- leaflet(nc_merged) %>%
addTiles() %>%
addPolygons(
weight = 1, fillOpacity = 0.7,
color = ~pal(1 - r_prop),
popup = ~paste("District", DISTRICT, "</br>", round(r_prop, 4))
) %>%
setView(lng = -80, lat = 35, zoom = 7)
```
Figure 17\.13: A **leaflet** plot of the North Carolina congressional districts.
Indeed, the curiously\-drawn districts in blue encompass all seven of the largest cities in the state: [*Charlotte*](https://en.wikipedia.org/w/index.php?search=Charlotte), [*Raleigh*](https://en.wikipedia.org/w/index.php?search=Raleigh), [*Greensboro*](https://en.wikipedia.org/w/index.php?search=Greensboro), [*Durham*](https://en.wikipedia.org/w/index.php?search=Durham), [*Winston\-Salem*](https://en.wikipedia.org/w/index.php?search=Winston-Salem), [*Fayetteville*](https://en.wikipedia.org/w/index.php?search=Fayetteville), and [*Cary*](https://en.wikipedia.org/w/index.php?search=Cary).
17\.5 Effective maps: How (not) to lie
--------------------------------------
The map shown in Figure [17\.12](ch-spatial.html#fig:nc-map) is an example of a [*choropleth*](https://en.wikipedia.org/w/index.php?search=choropleth) map.
This is a very common type of map where coloring and/or shading is used to differentiate a region of the map based on the value of a variable.
These maps are popular, and can be very persuasive, but you should be aware of some challenges when making and interpreting choropleth maps and other data maps.
Three common map types include:
* **Choropleth**: color or shade regions based on the value of a variable
* **Proportional symbol**: associate a symbol with each location, but scale its size to reflect the value of a variable
* **Dot density**: place dots for each data point, and view their accumulation
We note that in a proportional symbol map the symbol placed on the map is usually two\-dimensional.
Its size—*in area*—should be scaled in proportion to the quantity being mapped.
Be aware that often the size of the symbol is defined by its *radius*.
If the *radius* is in direct proportion to the quantity being mapped, then the area will be disproportionately large.
Always scale the size of proportional symbols in terms of their area.
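For example, revisiting the cholera map from Section 17\.3, adding `scale_size_area()` is one way to keep the *area* of each dot proportional to `Count` (a sketch; the `max_size` value is our own choice):

```
# Dot areas (not radii) proportional to the number of deaths at each location
ggplot(cholera_latlong) +
  annotation_map_tile(type = "osm", zoomin = 0) +
  geom_sf(aes(size = Count), alpha = 0.7) +
  scale_size_area(max_size = 8)
```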
As noted in Chapter [2](ch-vizI.html#ch:vizI), the choice of scale is also important and often done poorly.
The relationship between various quantities can be altered by scale.
In Chapter [2](ch-vizI.html#ch:vizI), we showed how a logarithmic scale can improve the readability of a scatterplot.
In Figure [17\.12](ch-spatial.html#fig:nc-map), we illustrated the importance of properly setting the scale of a proportion so that 0\.5 was exactly in the middle.
Try making Figure [17\.12](ch-spatial.html#fig:nc-map) without doing this and see if the results are as easily interpretable.
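A sketch of that comparison: without `limits`, the palette stretches over the observed range of `r_prop` (roughly 0\.20 to 0\.63), so the neutral white no longer corresponds to 50%.

```
# Same map as Figure 17.12, but letting the fill scale span only the data range
nc +
  aes(fill = r_prop) +
  scale_fill_distiller("Proportion\nRepublican", palette = "RdBu")
```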
Decisions about colors are also crucial to making an effective map.
In Chapter [2](ch-vizI.html#ch:vizI), we mentioned the color palettes available through **RColorBrewer**.
When making maps, categorical variables should be displayed using a *qualitative* palette, while quantitative variables should be displayed using a *sequential* or *diverging* palette.
In Figure [17\.12](ch-spatial.html#fig:nc-map) we employed a diverging palette, because Republicans and Democrats are on two opposite ends of the scale, with the neutral white color representing 0\.5\.
It’s important to decide how to deal with missing values.
Leaving them in a default color (e.g., white) risks having them mistaken for observed values.
Finally, the concept of *normalization* is fundamental.
Plotting raw data values on maps can easily distort the truth.
This is particularly true in the case of data maps, because area is an implied variable.
Thus, on choropleth maps, we almost always want to show some sort of density or ratio rather than raw values (i.e., counts).
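As a generic sketch of that normalization step (the data frame `regions` and its `cases` and `population` columns are hypothetical), the fix is usually a single `mutate()` before plotting:

```
# Hypothetical example: map a rate rather than a raw count
regions %>%                              # `regions` is an imaginary sf object
  mutate(case_rate = cases / population) %>%
  ggplot(aes(fill = case_rate)) +
  geom_sf()
```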
17\.6 Projecting polygons
-------------------------
It is worth briefly illustrating the hazards of mapping unprojected data.
Consider the congressional district map for the entire country.
To plot this, we follow the same steps as before, but omit the step of restricting to North Carolina.
There is one additional step here for creating a mapping between state names and their abbreviations.
Thankfully, these data are built into **R**.
```
districts_full <- districts %>%
left_join(
tibble(state.abb, state.name),
by = c("STATENAME" = "state.name")
) %>%
left_join(
district_elections,
by = c("state.abb" = "state", "DISTRICT" = "district")
)
```
We can make the map by adding white polygons for the generic map data and then adding colored polygons for each congressional district. Some clipping will make this easier to see.
```
box <- st_bbox(districts_full)
world <- map_data("world") %>%
st_as_sf(coords = c("long", "lat")) %>%
group_by(group) %>%
summarize(region = first(region), do_union = FALSE) %>%
st_cast("POLYGON") %>%
st_set_crs(4269)
```
We display the Mercator projection of this base map in Figure [17\.14](ch-spatial.html#fig:us-unprojected).
Note how massive Alaska appears to be in relation to the other states.
Alaska is big, but it is not that big!
This is a distortion of reality due to the projection.
```
map_4269 <- ggplot(data = districts_full) +
geom_sf(data = world, size = 0.1) +
geom_sf(aes(fill = r_prop), size = 0.1) +
scale_fill_distiller(palette = "RdBu", limits = c(0, 1)) +
theme_void() +
labs(fill = "Proportion\nRepublican") +
xlim(-180, -50) + ylim(box[c("ymin", "ymax")])
map_4269
```
Figure 17\.14: U.S. congressional election results, 2012 (Mercator projection).
We can use the Albers equal area projection to make a more representative picture, as shown in Figure [17\.15](ch-spatial.html#fig:us-albers).
Note how Alaska is still the biggest state (and district) by area, but it is now much closer in size to Texas.
```
districts_aea <- districts_full %>%
st_transform(5070)
box <- st_bbox(districts_aea)
map_4269 %+% districts_aea +
xlim(box[c("xmin", "xmax")]) + ylim(box[c("ymin", "ymax")])
```
Figure 17\.15: U.S. congressional election results, 2012 (Albers equal area projection).
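To confirm the visual impression numerically, you can compute areas from the unprojected geometries themselves, since `st_area()` works on the ellipsoid and therefore does not depend on the projection used for display (a sketch, assuming the geometries are valid):

```
# Compare the total area of the Alaska and Texas congressional districts
districts_full %>%
  filter(STATENAME %in% c("Alaska", "Texas")) %>%
  mutate(area = st_area(geometry)) %>%
  st_drop_geometry() %>%
  group_by(STATENAME) %>%
  summarize(total_area = sum(area))
```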
17\.7 Playing well with others
------------------------------
There are many technologies outside of **R** that allow you to work with spatial data. [*ArcGIS*](https://en.wikipedia.org/w/index.php?search=ArcGIS) is a proprietary [*Geographic Information System*](https://en.wikipedia.org/w/index.php?search=Geographic%20Information%20System) (GIS) software that is considered by many to be the industry state\-of\-the\-art. [*QGIS*](https://en.wikipedia.org/w/index.php?search=QGIS) is its open\-source competitor. Both have graphical user interfaces.
[*Keyhole Markup Language*](https://en.wikipedia.org/w/index.php?search=Keyhole%20Markup%20Language) (KML) is an [*XML*](https://en.wikipedia.org/w/index.php?search=XML) file format for storing geographic data. KML files can be read by [*Google Earth*](https://en.wikipedia.org/w/index.php?search=Google%20Earth) and other GIS applications. An `sf` object in **R** can be written to KML using the `st_write()` function. These files can then be read by ArcGIS, Google Maps, or Google Earth. Here, we illustrate how to create a KML file for the North Carolina congressional districts data frame that we defined earlier. A screenshot of the resulting output in Google Earth is shown in Figure [17\.16](ch-spatial.html#fig:googleearth).
```
nc_merged %>%
st_transform(4326) %>%
st_write("/tmp/nc_congress113.kml", driver = "kml")
```
Figure 17\.16: Screenshot of the North Carolina congressional districts as rendered in Google Earth, after exporting to KML. Compare with Figure [17\.12](ch-spatial.html#fig:nc-map).
17\.8 Further resources
-----------------------
Some excellent resources for spatial methods include R. S. Bivand, Pebesma, and Gómez\-Rubio (2013\) and Cressie (1993\).
A [helpful pocket guide to CRS systems in **R**](https://www.nceas.ucsb.edu/~frazier/RSpatialGuides/OverviewCoordinateReferenceSystems.pdf) contains information about projections, ellipsoids, and datums (reference points).
Pebesma (2021\) discusses the mechanics of how to work with spatial data in **R** in addition to introducing spatial modeling.
The **tigris** package provides access to shapefiles and demographic data from the United States Census Bureau (Walker 2020\).
The **sf** package has superseded spatial packages **sp**, **rgdal**, and **rgeos** which were used in the first edition of this book.
A guide for [migrating](https://github.com/r-spatial/sf/wiki/migrating) from **sp** to **sf** is maintained by the `r-spatial` group.
The fascinating story of [John Snow](https://en.wikipedia.org/w/index.php?search=John%20Snow) and his pursuit of the causes of cholera can be found in Vinten\-Johansen et al. (2003\).
Quantitative measures of gerrymandering have been a subject of interest to political scientists for some time (Niemi et al. 1990; Engstrom and Wildgen 1977; Hodge, Marshall, and Patterson 2010; Mackenzie 2009\).
17\.9 Exercises
---------------
**Problem 1 (Easy)**: Use the `geocode` function from the `tidygeocoder` package to find the latitude and longitude of the Emily Dickinson Museum in Amherst, Massachusetts.
**Problem 2 (Medium)**: The `pdxTrees` package contains a dataset of over 20,000 trees in the parks of Portland, Oregon.
1. Using `pdxTrees_parks` data, create an informative leaflet map for a tree enthusiast curious about the diversity and types of trees in the Portland area.
2. Not all trees were created equal. Create an interactive map that highlights trees in terms of their overall contribution to sustainability and value to the Portland community using variables such as `carbon_storage_value` and `total_annual_benefits`, etc.
3. Create an interactive map that helps identify any problematic trees that city officials should take note of.
**Problem 3 (Hard)**: Researchers at UCLA maintain historical congressional district shapefiles (see <http://cdmaps.polisci.ucla.edu>).
Use these data to discuss the history of gerrymandering in the United States.
Is the problem better or worse today?
**Problem 4 (Hard)**: Use the `tidycensus` package to conduct a spatial analysis of the Census data it contains for your home state. Can you illustrate how the demography of your state varies spatially?
**Problem 5 (Hard)**: Use the `tigris` package to make the congressional election district map for your home state. Do you see evidence of gerrymandering? Why or why not?
17\.10 Supplementary exercises
------------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/geospatial\-I.html\#geospatialI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/geospatial-I.html#geospatialI-online-exercises)
In this case, for each of the points, we have an associated `Id` number and a `Count` of the number of deaths at that location. To plot these data, we can simply use the `plot()` generic function as we did in Figure [17\.1](ch-spatial.html#fig:snow-simple). However, in the next section, we will illustrate how `sf` objects can be integrated into a **ggplot2** workflow.
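For reference, the `plot()` call mentioned above can be sketched in one line (an illustrative guess, not necessarily the exact code behind Figure [17\.1](ch-spatial.html#fig:snow-simple)):

```
# Sketch: plot the 250 points, coloring each point by its Count value
plot(CholeraDeaths["Count"])
```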
17\.3 Making maps
-----------------
In addition to providing geospatial processing capabilities, **sf** also provides spatial plotting extensions that work seamlessly with **ggplot2**. The `geom_sf()` function extends the grammar of graphics embedded in **ggplot2** that we explored in Chapter [3](ch-vizII.html#ch:vizII) to provide native support for plotting spatial objects. Thus, we are only a few steps away from having some powerful mapping functionality.
### 17\.3\.1 Static maps
The `geom_sf()` function allows you to plot geospatial objects in any **ggplot2** object. Since the \\(x\\) and \\(y\\) coordinates are implied by the geometry of the **sf** object, you don’t have to explicitly bind the \\(x\\) aesthetic (see Chapter [3](ch-vizII.html#ch:vizII)) to the longitudinal coordinate and the \\(y\\) aesthetic to the latitude. Your map looks like this:
```
ggplot(CholeraDeaths) +
geom_sf()
```
Figure 17\.3: A simple **ggplot2** of the cholera deaths, with little context provided.
Figure [17\.3](ch-spatial.html#fig:cholera-ggplot) is an improvement over what you would get from `plot()`. It is mostly clear what the coordinates along the axes are telling us (the units are in fact degrees), but we still don’t have any context for what we are seeing. What we really want is to overlay these points on the London street map—and this is exactly what **ggspatial** lets us do.
The `annotation_map_tile()` function adds a layer of map tiles pulled from [Open Street Map](https://www.openstreetmap.org/). We can control the `zoom` level, as well as the `type`.
Here, we also map the number of deaths at each location to the size of the dot.
```
library(ggspatial)
ggplot(CholeraDeaths) +
annotation_map_tile(type = "osm", zoomin = 0) +
geom_sf(aes(size = Count), alpha = 0.7)
```
Figure 17\.4: Erroneous reproduction of John Snow’s original map of the 1854 cholera outbreak. The dots representing the deaths from cholera are off by hundreds of meters.
We note that [*John Snow* is now the name of a pub](https://londonist.com/pubs/pubs/pubs/john-snow) on the corner of Broadwick (formerly Broad) Street and Lexington Street.
But look carefully at Figure [17\.4](ch-spatial.html#fig:snow-wrong) and Figure [17\.2](ch-spatial.html#fig:cholera). You will not see the points in the right places. The center of the cluster is not on Broadwick Street, and some of the points are in the middle of the street (where there are no residences). Why?
The coordinates in the `CholeraDeaths` object have unfamiliar values, as we can see by accessing the [*bounding box*](https://en.wikipedia.org/w/index.php?search=bounding%20box) of the object.
```
st_bbox(CholeraDeaths)
```
```
xmin ymin xmax ymax
529160 180858 529656 181306
```
Both `CholeraDeaths` and the map tiles retrieved by the **ggspatial** package have geospatial coordinates, but those coordinates are not in the same units.
While it is true that `annotation_map_tile()` performs some on\-the\-fly coordinate translation, there remains a discrepancy between our two geospatial data sources.
To understand how to get these two spatial data sources to work together properly, we have to understand projections.
### 17\.3\.2 Projections
The Earth happens to be an oblate spheroid—a three\-dimensional flattened sphere. Yet we would like to create two\-dimensional representations of the Earth that fit on pages or computer screens. The process of converting locations in a three\-dimensional [*geographic coordinate system*](https://en.wikipedia.org/w/index.php?search=geographic%20coordinate%20system) to a two\-dimensional representation is called [*projection*](https://en.wikipedia.org/w/index.php?search=projection).
Once people figured out that the world was not flat, the question of how to project it followed.
Since people have been making nautical maps for centuries, it would be nice if the study of map projection had resulted in a simple, accurate, universally\-accepted projection system.
Unfortunately, that is not the case. It is simply not possible to faithfully preserve all properties present in a three\-dimensional space in a two\-dimensional space. Thus there is no one best projection system—each has its own advantages and disadvantages.
As noted previously, because the Earth is not a perfect sphere, the mathematics behind many of these projections is non\-trivial.
Two properties that a projection system might preserve—though not simultaneously—are *shape/angle* and *area*. That is, a projection system may be constructed in such a way that it faithfully represents the relative sizes of land masses in two dimensions. The Mercator projection shown at left in Figure [17\.5](ch-spatial.html#fig:mercator) is a famous example of a projection system that does *not* preserve area. Its popularity is a result of its *angle*\-preserving nature, which makes it useful for navigation. Unfortunately, it greatly distorts the size of features near the poles, where land masses become infinitely large.
```
library(mapproj)
library(maps)
map("world", projection = "mercator", wrap = TRUE)
map("world", projection = "cylequalarea", param = 45, wrap = TRUE)
```
Figure 17\.5: The world according to the Mercator (left) and Gall–Peters (right) projections.
The Gall–Peters projection shown at right in Figure [17\.5](ch-spatial.html#fig:mercator) does preserve area. Note the difference between the two projections when comparing the size of Greenland to Africa. In reality (as shown in the Gall–Peters projection) Africa is 14 times larger than Greenland. However, because Greenland is much closer to the North Pole, its area is greatly distorted in the Mercator projection, making it appear to be larger than Africa.
This particular example—while illustrative—became famous because of the socio\-political [controversy](https://en.wikipedia.org/wiki/Gall-Peters_projection#Controversy) in which these projections became embroiled. Beginning in the 1960s, a German filmmaker named [Arno Peters](https://en.wikipedia.org/w/index.php?search=Arno%20Peters) alleged that the commonly\-used Mercator projection was an instrument of [*cartographic imperialism*](https://en.wikipedia.org/w/index.php?search=cartographic%20imperialism), in that it falsely focused attention on Northern and Southern countries at the expense of those in Africa and South America closer to the equator. Peters had a point—the Mercator projection has many shortcomings—but unfortunately his claims about the virtues of the Gall–Peters projection (particularly its originality) were mostly false. Peters either ignored or was not aware that cartographers had long campaigned against the Mercator.
Nevertheless, you should be aware that the “default” projection can be very misleading. As a data scientist, your choice of how to project your data can have a direct influence on what viewers will take away from your data maps. Simply ignoring the implications of projections is not an ethically tenable position! While we can’t offer a comprehensive list of map projections here, two common general\-purpose map projections are the [*Lambert conformal conic*](https://en.wikipedia.org/w/index.php?search=Lambert%20conformal%20conic) projection and the [*Albers equal\-area conic*](https://en.wikipedia.org/w/index.php?search=Albers%20equal-area%20conic) projection (see Figure [17\.6](ch-spatial.html#fig:lambert)). In the former, angles are preserved, while in the latter neither scale nor shape are preserved, but gross distortions of both are minimized.
```
map(
"state", projection = "lambert",
parameters = c(lat0 = 20, lat1 = 50), wrap = TRUE
)
map(
"state", projection = "albers",
parameters = c(lat0 = 20, lat1 = 50), wrap = TRUE
)
```
Figure 17\.6: The contiguous United States according to the Lambert conformal conic (left) and Albers equal area (right) projections. We have specified that the scales are true on the 20th and 50th parallels.
Always think about how your data are projected when making a map.
A [*coordinate reference system*](https://en.wikipedia.org/w/index.php?search=coordinate%20reference%20system) (CRS) is needed to keep track of geographic locations. Every spatially\-aware object in **R** can have a projection. Three formats that are common for storing information about the projection of a geospatial object are [*EPSG*](https://en.wikipedia.org/w/index.php?search=EPSG), [*PROJ.4*](https://en.wikipedia.org/w/index.php?search=PROJ.4), and [*WKT*](https://en.wikipedia.org/w/index.php?search=WKT). The former is simply an integer, while PROJ.4 is a cryptic string of text. The latter can be retrieved (or set) using the `st_crs()` command.
```
st_crs(CholeraDeaths)
```
```
Coordinate Reference System:
User input: OSGB 1936 / British National Grid
wkt:
PROJCRS["OSGB 1936 / British National Grid",
BASEGEOGCRS["OSGB 1936",
DATUM["OSGB 1936",
ELLIPSOID["Airy 1830",6377563.396,299.3249646,
LENGTHUNIT["metre",1]]],
PRIMEM["Greenwich",0,
ANGLEUNIT["degree",0.0174532925199433]],
ID["EPSG",4277]],
CONVERSION["British National Grid",
METHOD["Transverse Mercator",
ID["EPSG",9807]],
PARAMETER["Latitude of natural origin",49,
ANGLEUNIT["degree",0.0174532925199433],
ID["EPSG",8801]],
PARAMETER["Longitude of natural origin",-2,
ANGLEUNIT["degree",0.0174532925199433],
ID["EPSG",8802]],
PARAMETER["Scale factor at natural origin",0.9996012717,
SCALEUNIT["unity",1],
ID["EPSG",8805]],
PARAMETER["False easting",400000,
LENGTHUNIT["metre",1],
ID["EPSG",8806]],
PARAMETER["False northing",-100000,
LENGTHUNIT["metre",1],
ID["EPSG",8807]]],
CS[Cartesian,2],
AXIS["(E)",east,
ORDER[1],
LENGTHUNIT["metre",1]],
AXIS["(N)",north,
ORDER[2],
LENGTHUNIT["metre",1]],
USAGE[
SCOPE["unknown"],
AREA["UK - Britain and UKCS 49°46'N to 61°01'N, 7°33'W to 3°33'E"],
BBOX[49.75,-9.2,61.14,2.88]],
ID["EPSG",27700]]
```
It should be clear by now that the science of map projection is complicated, and it is likely unclear how to decipher this representation of the projection, which is in a format called [*Well\-Known Text*](https://en.wikipedia.org/w/index.php?search=Well-Known%20Text). What we can say is that `METHOD["Transverse Mercator"]` indicates that these data are encoded using a [*Transverse Mercator*](https://en.wikipedia.org/w/index.php?search=Transverse%20Mercator) projection. The [*Airy ellipsoid*](https://en.wikipedia.org/w/index.php?search=Airy%20ellipsoid) is being used, and the units are meters.
The equivalent EPSG system is 27700\.
The [*datum*](https://en.wikipedia.org/w/index.php?search=datum)—or model of the Earth—is OSGB 1936, which is also known as the [*British National Grid*](https://en.wikipedia.org/w/index.php?search=British%20National%20Grid).
The rest of the terms in the string are parameters that specify properties of that projection. The unfamiliar coordinates that we saw earlier for the `CholeraDeaths` data set were relative to this CRS.
There are many CRSs, but a few are most common. A set of EPSG ([*European Petroleum Survey Group*](https://en.wikipedia.org/w/index.php?search=European%20Petroleum%20Survey%20Group)) codes provides a shorthand for the full descriptions (like the one shown above). The most commonly\-used are:
* [EPSG:4326](https://epsg.io/4326) \- Also known as [WGS84](https://en.wikipedia.org/wiki/World_Geodetic_System#WGS84), this is the standard for GPS systems and Google Earth.
* [EPSG:3857](https://epsg.io/3857) \- A Mercator projection used in map tiles[33](#fn33) by Google Maps, Open Street Map, etc.
* [EPSG:27700](https://epsg.io/27700) \- Also known as OSGB 1936, or the British National Grid: United Kingdom Ordnance Survey. It is commonly used in Britain.
The `st_crs()` function will translate from the shorthand EPSG code to the full\-text PROJ.4 strings and WKT.
```
st_crs(4326)$epsg
```
```
[1] 4326
```
```
st_crs(3857)$Wkt
```
```
[1] "PROJCS[\"WGS 84 / Pseudo-Mercator\",GEOGCS[\"WGS 84\",DATUM[\"WGS_1984\",SPHEROID[\"WGS 84\",6378137,298.257223563,AUTHORITY[\"EPSG\",\"7030\"]],AUTHORITY[\"EPSG\",\"6326\"]],PRIMEM[\"Greenwich\",0,AUTHORITY[\"EPSG\",\"8901\"]],UNIT[\"degree\",0.0174532925199433,AUTHORITY[\"EPSG\",\"9122\"]],AUTHORITY[\"EPSG\",\"4326\"]],PROJECTION[\"Mercator_1SP\"],PARAMETER[\"central_meridian\",0],PARAMETER[\"scale_factor\",1],PARAMETER[\"false_easting\",0],PARAMETER[\"false_northing\",0],UNIT[\"metre\",1,AUTHORITY[\"EPSG\",\"9001\"]],AXIS[\"Easting\",EAST],AXIS[\"Northing\",NORTH],EXTENSION[\"PROJ4\",\"+proj=merc +a=6378137 +b=6378137 +lat_ts=0 +lon_0=0 +x_0=0 +y_0=0 +k=1 +units=m +nadgrids=@null +wktext +no_defs\"],AUTHORITY[\"EPSG\",\"3857\"]]"
```
```
st_crs(27700)$proj4string
```
```
[1] "+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 +x_0=400000 +y_0=-100000 +ellps=airy +units=m +no_defs"
```
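To see how these shorthand codes work in practice, here is a minimal sketch (using one corner of the `CholeraDeaths` bounding box printed earlier, with its CRS asserted explicitly) that converts British National Grid coordinates into longitude and latitude:

```
# Sketch: one corner of the bounding box shown above, asserted to be in
# British National Grid coordinates (EPSG:27700)...
corner <- st_sfc(st_point(c(529160, 180858)), crs = 27700)
# ...transformed into longitude/latitude degrees (EPSG:4326)
st_transform(corner, 4326)
```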
The `CholeraDeaths` points did not show up on our earlier map because we did not project them into the same coordinate system as the map tiles. Since we can’t project the map tiles, we had better project the points in the `CholeraDeaths` data. As noted above, Google Maps tiles (and Open Street Map tiles) are projected in the `epsg:3857` system. However, they are confusingly returned with coordinates in the `epsg:4326` system. Thus, we use the `st_transform()` function to project our `CholeraDeaths` data to `epsg:4326`.
```
cholera_4326 <- CholeraDeaths %>%
st_transform(4326)
```
Note that the *bounding box* of our transformed object is now expressed in the familiar units of latitude and longitude.
```
st_bbox(cholera_4326)
```
```
xmin ymin xmax ymax
-0.140 51.512 -0.133 51.516
```
Unfortunately, the code below *still* produces a map with the points in the wrong places.
```
ggplot(cholera_4326) +
annotation_map_tile(type = "osm", zoomin = 0) +
geom_sf(aes(size = Count), alpha = 0.7)
```
A careful reading of the help file for `spTransform-methods()` (the underlying machinery) gives some clues to our mistake.
```
help("spTransform-methods", package = "rgdal")
```
> Not providing the appropriate `+datum` and `+towgs84` tags may lead to coordinates being out by hundreds of meters. Unfortunately, there is no easy way to provide this information: The user has to know the correct metadata for the data being used, even if this can be hard to discover.
That seems like our problem!
The `+datum` and `+towgs84` arguments were missing from our PROJ.4 string.
```
st_crs(CholeraDeaths)$proj4string
```
```
[1] "+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 +x_0=400000 +y_0=-100000 +ellps=airy +units=m +no_defs"
```
The `CholeraDeaths` object has all of the same specifications as [`epsg:27700`](https://epsg.io/27700) but without the missing `+datum` and `+towgs84` tags. Furthermore, the documentation for the original data source suggests using `epsg:27700`. Thus, we first assert that the `CholeraDeaths` data is in `epsg:27700`.
Then, projecting to `epsg:4326` works as intended.
```
cholera_latlong <- CholeraDeaths %>%
st_set_crs(27700) %>%
st_transform(4326)
snow <- ggplot(cholera_latlong) +
annotation_map_tile(type = "osm", zoomin = 0) +
geom_sf(aes(size = Count))
```
All that remains is to add the locations of the pumps.
```
pumps <- st_read(dsn, layer = "Pumps")
```
```
Reading layer `Pumps' from data source
`/home/bbaumer/Dropbox/git/mdsr-book/mdsr2e/data/shp/snow/SnowGIS_SHP'
using driver `ESRI Shapefile'
Simple feature collection with 8 features and 1 field
Geometry type: POINT
Dimension: XY
Bounding box: xmin: 529000 ymin: 181000 xmax: 530000 ymax: 181000
Projected CRS: OSGB 1936 / British National Grid
```
```
pumps_latlong <- pumps %>%
st_set_crs(27700) %>%
st_transform(4326)
snow +
geom_sf(data = pumps_latlong, size = 3, color = "red")
```
Figure 17\.7: Recreation of John Snow’s original map of the 1854 cholera outbreak. The size of each black dot is proportional to the number of people who died from cholera at that location. The red dots indicate the location of public water pumps. The strong clustering of deaths around the water pump on Broad(wick) Street suggests that perhaps the cholera was spread through water obtained at that pump.
In Figure [17\.7](ch-spatial.html#fig:snow-final), we finally see the clarity that judicious uses of spatial data in the proper context can provide. It is not necessary to fit a statistical model to these data to see that nearly all of the cholera deaths occurred in people closest to the Broad Street water pump, which was later found to be drawing fecal bacteria from a nearby cesspit.
### 17\.3\.3 Dynamic maps with `leaflet`
Leaflet is a powerful open\-source JavaScript library for building interactive maps in HTML. The corresponding **R** package [**leaflet**](https://rstudio.github.io/leaflet/) brings this functionality to **R** using the **htmlwidgets** platform introduced in Chapter [14](ch-vizIII.html#ch:vizIII).
Although the commands are different, the architecture is very similar to **ggplot2**.
However, instead of putting data\-based layers on top of a static map, **leaflet** allows you to put data\-based layers on top of an interactive map.
Because **leaflet** renders as HTML, you won’t be able to take advantage of our plots in the printed book (since they are displayed as screen shots).
We encourage you to run this code on your own and explore interactively.
A **leaflet** map widget is created with the `leaflet()` command. We will subsequently add layers to this widget. The first layer that we will add is a [*tile*](https://en.wikipedia.org/w/index.php?search=tile) layer containing all of the static map information, which by default comes from OpenStreetMap. The second layer we will add here is a marker, which designates a point location. Note how the `addMarkers()` function can take a `data` argument, just like a `geom_*()` layer in **ggplot2** would.
```
white_house <- tibble(
address = "The White House, Washington, DC"
) %>%
tidygeocoder::geocode(address, method = "osm")
library(leaflet)
white_house_map <- leaflet() %>%
addTiles() %>%
addMarkers(data = white_house)
white_house_map
```
Figure 17\.8: A **leaflet** plot of the White House.
When you render this in **RStudio**, or in an **R** Markdown document with HTML output, or in a Web browser using Shiny, you will be able to scroll and zoom on the fly. In Figure [17\.8](ch-spatial.html#fig:leaflet-white-house) we display our version.
We can also add a pop\-up to provide more information about a particular location, as shown in Figure [17\.9](ch-spatial.html#fig:leaflet-white-house-popup).
```
white_house <- white_house %>%
mutate(
title = "The White House",
street_address = "1600 Pennsylvania Ave"
)
white_house_map %>%
addPopups(
data = white_house,
popup = ~paste0("<b>", title, "</b></br>", street_address)
)
```
Figure 17\.9: A **leaflet** plot of the White House with a popup.
Although **leaflet** and **ggplot2** are not syntactically equivalent, they are conceptually similar.
Because the map tiles provide geographic context, the dynamic, zoomable, scrollable maps created by **leaflet** can be more informative than the static maps created by **ggplot2**.
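If you want to share an interactive map outside of **RStudio** or an **R** Markdown document, one option is to save the widget as a self\-contained HTML file (a sketch, assuming the `white_house_map` widget created above):

```
library(htmlwidgets)
# Sketch: write the interactive map to a standalone HTML file
# (the file name is arbitrary) that can be opened in any Web browser
saveWidget(white_house_map, file = "white_house_map.html", selfcontained = TRUE)
```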
17\.4 Extended example: Congressional districts
-----------------------------------------------
In the 2012 presidential election, the [*Republican*](https://en.wikipedia.org/w/index.php?search=Republican) challenger [Mitt Romney](https://en.wikipedia.org/w/index.php?search=Mitt%20Romney) [narrowly defeated](https://en.wikipedia.org/wiki/United_States_presidential_election_in_North_Carolina,_2012) President [Barack Obama](https://en.wikipedia.org/w/index.php?search=Barack%20Obama) in the state of [*North Carolina*](https://en.wikipedia.org/w/index.php?search=North%20Carolina), winning 50\.4% of the popular votes, and thereby earning all 15 electoral votes. Obama had won North Carolina in 2008—becoming the first [*Democrat*](https://en.wikipedia.org/w/index.php?search=Democrat) to do so since 1976\. As a swing state, North Carolina has voting patterns that are particularly interesting, and—as we will see—contentious.
The roughly 50/50 split in the popular vote suggests that there are about the same number of Democratic and Republican voters in the state.
In the fall of 2020, 10 of North Carolina’s 13 congressional representatives were Republican (with one seat vacant).
How can this be?
In this case, geospatial data can help us understand.
### 17\.4\.1 Election results
Our first step is to download the results of the 2012 congressional elections from the Federal Election
Commission. These data are available through the **fec12** package.[34](#fn34)
```
library(fec12)
```
Note that we have slightly more than 435 elections, since these data include U.S. territories like Puerto Rico and the Virgin Islands.
```
results_house %>%
group_by(state, district_id) %>%
summarize(N = n()) %>%
nrow()
```
```
[1] 445
```
According to the [*United States Constitution*](https://en.wikipedia.org/w/index.php?search=United%20States%20Constitution), congressional districts are apportioned according to population from the 2010 U.S. Census. In practice, we see that this is not quite the case. These are the 10 candidates who earned the most votes in the general election.
```
results_house %>%
left_join(candidates, by = "cand_id") %>%
select(state, district_id, cand_name, party, general_votes) %>%
arrange(desc(general_votes))
```
```
# A tibble: 2,343 × 5
state district_id cand_name party general_votes
<chr> <chr> <chr> <chr> <dbl>
1 PR 00 PIERLUISI, PEDRO R NPP 905066
2 PR 00 ALOMAR, RAFAEL COX PPD 881181
3 PA 02 FATTAH, CHAKA MR. D 318176
4 WA 07 MCDERMOTT, JAMES D 298368
5 MI 14 PETERS, GARY D 270450
6 MO 01 CLAY, WILLIAM LACY JR D 267927
7 WI 02 POCAN, MARK D 265422
8 OR 03 BLUMENAUER, EARL D 264979
9 MA 08 LYNCH, STEPHEN F D 263999
10 MN 05 ELLISON, KEITH MAURICE DFL 262102
# … with 2,333 more rows
```
Note that the representatives from Puerto Rico earned nearly three times as many votes as any other Congressional representative.
We are interested in the results from North Carolina.
We begin by creating a data frame specific to that state, with the votes aggregated by congressional district.
As there are 13 districts, the `nc_results` data frame will have exactly 13 rows.
```
district_elections <- results_house %>%
mutate(district = parse_number(district_id)) %>%
group_by(state, district) %>%
summarize(
N = n(),
total_votes = sum(general_votes, na.rm = TRUE),
d_votes = sum(ifelse(party == "D", general_votes, 0), na.rm = TRUE),
r_votes = sum(ifelse(party == "R", general_votes, 0), na.rm = TRUE)
) %>%
mutate(
other_votes = total_votes - d_votes - r_votes,
r_prop = r_votes / total_votes,
winner = ifelse(r_votes > d_votes, "Republican", "Democrat")
)
nc_results <- district_elections %>%
filter(state == "NC")
nc_results %>%
select(-state)
```
```
Adding missing grouping variables: `state`
```
```
# A tibble: 13 × 9
# Groups: state [1]
state district N total_votes d_votes r_votes other_votes r_prop
<chr> <dbl> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
1 NC 1 4 338066 254644 77288 6134 0.229
2 NC 2 8 311397 128973 174066 8358 0.559
3 NC 3 3 309885 114314 195571 0 0.631
4 NC 4 4 348485 259534 88951 0 0.255
5 NC 5 3 349197 148252 200945 0 0.575
6 NC 6 4 364583 142467 222116 0 0.609
7 NC 7 4 336736 168695 168041 0 0.499
8 NC 8 8 301824 137139 160695 3990 0.532
9 NC 9 13 375690 171503 194537 9650 0.518
10 NC 10 6 334849 144023 190826 0 0.570
11 NC 11 11 331426 141107 190319 0 0.574
12 NC 12 3 310908 247591 63317 0 0.204
13 NC 13 5 370610 160115 210495 0 0.568
# … with 1 more variable: winner <chr>
```
We see that the distribution of the number of votes cast across congressional districts in North Carolina is very narrow—all of the districts had between 301,824 and 375,690 votes cast.
```
nc_results %>%
skim(total_votes) %>%
select(-na)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var state n mean sd p0 p25 p50 p75 p100
<chr> <chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 total_votes NC 13 337204. 24175. 301824 311397 336736 349197 375690
```
However, as the close presidential election suggests, the votes of North Carolinians were roughly evenly divided between Democratic and Republican congressional candidates. In fact, state Democrats earned a narrow majority—50\.6%—of the votes. Yet the Republicans won 9 of the 13 races.[35](#fn35)
```
nc_results %>%
summarize(
N = n(),
state_votes = sum(total_votes),
state_d = sum(d_votes),
state_r = sum(r_votes)
) %>%
mutate(
d_prop = state_d / state_votes,
r_prop = state_r / state_votes
)
```
```
# A tibble: 1 × 7
state N state_votes state_d state_r d_prop r_prop
<chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
1 NC 13 4383656 2218357 2137167 0.506 0.488
```
One clue is to look more closely at the distribution of the proportion of Republican votes in each district.
```
nc_results %>%
select(district, r_prop, winner) %>%
arrange(desc(r_prop))
```
```
# A tibble: 13 × 4
# Groups: state [1]
state district r_prop winner
<chr> <dbl> <dbl> <chr>
1 NC 3 0.631 Republican
2 NC 6 0.609 Republican
3 NC 5 0.575 Republican
4 NC 11 0.574 Republican
5 NC 10 0.570 Republican
6 NC 13 0.568 Republican
7 NC 2 0.559 Republican
8 NC 8 0.532 Republican
9 NC 9 0.518 Republican
10 NC 7 0.499 Democrat
11 NC 4 0.255 Democrat
12 NC 1 0.229 Democrat
13 NC 12 0.204 Democrat
```
In the nine districts that Republicans won, their share of the vote ranged from a narrow (51\.8%) to a comfortable (63\.1%) majority. With the exception of the essentially even 7th district, the three districts that Democrats won were routs, with the Democratic candidate winning between 75% and 80% of the vote. Thus, although Democrats won more votes across the state, most of their votes were clustered within three overwhelmingly Democratic districts, allowing Republicans to prevail with moderate majorities across the remaining nine districts.
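A quick check of those Democratic margins (a sketch using the columns of `nc_results` defined above; the `d_prop` column is introduced here for illustration) confirms this pattern:

```
# Sketch: Democratic share of the vote in each district, sorted
nc_results %>%
  mutate(d_prop = d_votes / total_votes) %>%
  select(district, d_prop, winner) %>%
  arrange(desc(d_prop))
```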
Conventional wisdom states that Democratic voters tend to live in cities, so perhaps they were simply clustered in three cities, while Republican voters were spread out across the state in more rural areas.
Let’s look more closely at the districts.
### 17\.4\.2 Congressional districts
To do this, we first download the congressional district shapefiles for the 113th Congress.
```
src <- "http://cdmaps.polisci.ucla.edu/shp/districts113.zip"
dsn_districts <- usethis::use_zip(src, destdir = fs::path("data_large"))
```
Next, we read these shapefiles into **R** as an `sf` object.
```
library(sf)
st_layers(dsn_districts)
```
```
Driver: ESRI Shapefile
Available layers:
layer_name geometry_type features fields
1 districts113 Polygon 436 15
```
```
districts <- st_read(dsn_districts, layer = "districts113") %>%
mutate(DISTRICT = parse_number(as.character(DISTRICT)))
```
```
Reading layer `districts113' from data source
`/home/bbaumer/Dropbox/git/mdsr-book/mdsr2e/data_large/districtShapes'
using driver `ESRI Shapefile'
Simple feature collection with 436 features and 15 fields (with 1 geometry empty)
Geometry type: MULTIPOLYGON
Dimension: XY
Bounding box: xmin: -179 ymin: 18.9 xmax: 180 ymax: 71.4
Geodetic CRS: NAD83
```
```
glimpse(districts)
```
```
Rows: 436
Columns: 16
$ STATENAME <chr> "Louisiana", "Maine", "Maine", "Maryland", "Maryland", …
$ ID <chr> "022113114006", "023113114001", "023113114002", "024113…
$ DISTRICT <dbl> 6, 1, 2, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8…
$ STARTCONG <chr> "113", "113", "113", "113", "113", "113", "113", "113",…
$ ENDCONG <chr> "114", "114", "114", "114", "114", "114", "114", "114",…
$ DISTRICTSI <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ COUNTY <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ PAGE <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ LAW <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ NOTE <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ BESTDEC <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ FINALNOTE <chr> "{\"From US Census website\"}", "{\"From US Census webs…
$ RNOTE <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ LASTCHANGE <chr> "2016-05-29 16:44:10.857626", "2016-05-29 16:44:10.8576…
$ FROMCOUNTY <chr> "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", …
$ geometry <MULTIPOLYGON [°]> MULTIPOLYGON (((-91.8 30.9,..., MULTIPOLYG…
```
We are investigating North Carolina, so we will create a smaller object with only those shapes using the `filter()` function. Note that since every **sf** object is *also* a `data.frame`, we can use all of our usual **dplyr** tools on our geospatial objects.
```
nc_shp <- districts %>%
filter(STATENAME == "North Carolina")
nc_shp %>%
st_geometry() %>%
plot(col = gray.colors(nrow(nc_shp)))
```
Figure 17\.10: A basic map of the North Carolina congressional districts.
It is hard to see exactly what is going on here, but it appears as though there are some traditionally shaped districts, as well as some very strange and narrow districts. Unfortunately the map in Figure [17\.10](ch-spatial.html#fig:nc-basic) is devoid of context, so it is not very informative.
We need the `nc_results` data to provide that context.
### 17\.4\.3 Putting it all together
How to merge these two together? The simplest way is to use the `inner_join()` function from **dplyr** (see Chapter [5](ch-join.html#ch:join)).
Since both `nc_shp` and `nc_results` are `data.frame`s, this will append the election results data to the geospatial data.
Here, we merge the `nc_shp` polygons with the `nc_results` election data frame using the district as the key.
Note that there are 13 polygons and 13 rows.
```
nc_merged <- nc_shp %>%
st_transform(4326) %>%
inner_join(nc_results, by = c("DISTRICT" = "district"))
glimpse(nc_merged)
```
```
Rows: 13
Columns: 24
$ STATENAME <chr> "North Carolina", "North Carolina", "North Carolina", …
$ ID <chr> "037113114002", "037113114003", "037113114004", "03711…
$ DISTRICT <dbl> 2, 3, 4, 1, 5, 6, 7, 8, 9, 10, 11, 12, 13
$ STARTCONG <chr> "113", "113", "113", "113", "113", "113", "113", "113"…
$ ENDCONG <chr> "114", "114", "114", "114", "114", "114", "114", "114"…
$ DISTRICTSI <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ COUNTY <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ PAGE <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ LAW <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ NOTE <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ BESTDEC <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ FINALNOTE <chr> "{\"From US Census website\"}", "{\"From US Census web…
$ RNOTE <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ LASTCHANGE <chr> "2016-05-29 16:44:10.857626", "2016-05-29 16:44:10.857…
$ FROMCOUNTY <chr> "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "F",…
$ state <chr> "NC", "NC", "NC", "NC", "NC", "NC", "NC", "NC", "NC", …
$ N <int> 8, 3, 4, 4, 3, 4, 4, 8, 13, 6, 11, 3, 5
$ total_votes <dbl> 311397, 309885, 348485, 338066, 349197, 364583, 336736…
$ d_votes <dbl> 128973, 114314, 259534, 254644, 148252, 142467, 168695…
$ r_votes <dbl> 174066, 195571, 88951, 77288, 200945, 222116, 168041, …
$ other_votes <dbl> 8358, 0, 0, 6134, 0, 0, 0, 3990, 9650, 0, 0, 0, 0
$ r_prop <dbl> 0.559, 0.631, 0.255, 0.229, 0.575, 0.609, 0.499, 0.532…
$ winner <chr> "Republican", "Republican", "Democrat", "Democrat", "R…
$ geometry <MULTIPOLYGON [°]> MULTIPOLYGON (((-80.1 35.8,..., MULTIPOLYGON (((-78.3 …
```
### 17\.4\.4 Using `ggplot2`
We are now ready to plot our map of [North Carolina’s congressional districts](https://upload.wikimedia.org/wikipedia/commons/thumb/3/3d/North_Carolina_Congressional_Districts%2C_113th_Congress.tif/lossless-page1-1280px-North_Carolina_Congressional_Districts%2C_113th_Congress.tif.png). We start by using a simple red–blue color scheme for the districts.
```
nc <- ggplot(data = nc_merged, aes(fill = winner)) +
annotation_map_tile(zoom = 6, type = "osm") +
geom_sf(alpha = 0.5) +
scale_fill_manual("Winner", values = c("blue", "red")) +
geom_sf_label(aes(label = DISTRICT), fill = "white") +
theme_void()
nc
```
Figure 17\.11: Bichromatic choropleth map of the results of the 2012 congressional elections in North Carolina.
Figure [17\.11](ch-spatial.html#fig:nc-bicolor) shows that it was the Democratic districts that tended to be irregularly shaped. Districts 12 and 4 have narrow, tortured shapes—both were heavily Democratic. This plot tells us who won, but it doesn’t convey the subtleties we observed about the *margins* of victory. In the next plot, we use a continuous color scale to indicate the proportion of votes in each district. The `RdBu` diverging color palette comes from **RColorBrewer** (see Chapter [2](ch-vizI.html#ch:vizI)).
```
nc +
aes(fill = r_prop) +
scale_fill_distiller(
"Proportion\nRepublican",
palette = "RdBu",
limits = c(0.2, 0.8)
)
```
Figure 17\.12: Full color choropleth map of the results of the 2012 congressional elections in North Carolina. The clustering of Democratic voters is evident from the deeper blue in Democratic districts, versus the pale red in the more numerous Republican districts.
The `limits` argument to `scale_fill_distiller()` is important. This forces *red* to be the color associated with 80% Republican votes and *blue* to be associated with 80% Democratic votes. Without this argument, red would be associated with the maximum value in the data (about 63%) and blue with the minimum (about 20%). This would result in the neutral color of white not being at exactly 50%. When choosing color scales, it is critically important to make choices that reflect the data.
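To see the difference, here is a sketch of the same map *without* the `limits` argument, so that the palette is stretched over the observed range of `r_prop` rather than centered at 50%:

```
# Sketch: without limits, the neutral white is no longer anchored at 50%
nc +
  aes(fill = r_prop) +
  scale_fill_distiller("Proportion\nRepublican", palette = "RdBu")
```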
Choose colors and scales carefully when making maps.
In Figure [17\.12](ch-spatial.html#fig:nc-map), we can see that the three Democratic districts are “bluer” than the nine Republican districts are “red.” This reflects the clustering that we observed earlier. North Carolina has become one of the more egregious examples of [*gerrymandering*](https://en.wikipedia.org/w/index.php?search=gerrymandering), the phenomenon whereby legislators of one party use their re\-districting power for political gain. This is evident in Figure [17\.12](ch-spatial.html#fig:nc-map), where Democratic votes are concentrated in three curiously\-drawn congressional districts. This enables Republican lawmakers to have 69% (9/13\) of the voting power in Congress despite earning only 48\.8% of the votes.
Since the 1st edition of this book, the North Carolina gerrymandering case went all the way to the [*United States Supreme Court*](https://en.wikipedia.org/w/index.php?search=United%20States%20Supreme%20Court).
In a landmark 2019 decision, the Justices ruled 5–4 in [Rucho v. Common Cause](https://www.supremecourt.gov/search.aspx?FileName=/docket/docketfiles/html/public/18-422.html) that while partisan gerrymanders such as those in North Carolina may be problematic for democracy, they are not reviewable by the judicial system.
### 17\.4\.5 Using `leaflet`
Was it true that the Democratic districts were woven together to contain many of the biggest cities in the state?
A similar map made in **leaflet** allows us to zoom in and pan out, making it easier to survey the districts.
First, we will define a color palette over the values \\(\[0,1]\\) that ranges from red to blue.
```
library(leaflet)
pal <- colorNumeric(palette = "RdBu", domain = c(0, 1))
```
To make our plot in **leaflet**, we have to add the tiles, and then the polygons defined by the `sf` object `nc_merged`.
Since we want red to be associated with the proportion of Republican votes, we will map `1 - r_prop` to color.
Note that we also add popups with the actual proportions, so that if you click on the map, it will show the district number and the proportion of Republican votes.
The resulting **leaflet** map is shown in Figure [17\.13](ch-spatial.html#fig:leaflet-nc).
```
leaflet_nc <- leaflet(nc_merged) %>%
addTiles() %>%
addPolygons(
weight = 1, fillOpacity = 0.7,
color = ~pal(1 - r_prop),
popup = ~paste("District", DISTRICT, "</br>", round(r_prop, 4))
) %>%
setView(lng = -80, lat = 35, zoom = 7)
```
Figure 17\.13: A **leaflet** plot of the North Carolina congressional districts.
Indeed, the curiously\-drawn districts in blue encompass all seven of the largest cities in the state: [*Charlotte*](https://en.wikipedia.org/w/index.php?search=Charlotte), [*Raleigh*](https://en.wikipedia.org/w/index.php?search=Raleigh), [*Greensboro*](https://en.wikipedia.org/w/index.php?search=Greensboro), [*Durham*](https://en.wikipedia.org/w/index.php?search=Durham), [*Winston\-Salem*](https://en.wikipedia.org/w/index.php?search=Winston-Salem), [*Fayetteville*](https://en.wikipedia.org/w/index.php?search=Fayetteville), and [*Cary*](https://en.wikipedia.org/w/index.php?search=Cary).
17\.5 Effective maps: How (not) to lie
--------------------------------------
The map shown in Figure [17\.12](ch-spatial.html#fig:nc-map) is an example of a [*choropleth*](https://en.wikipedia.org/w/index.php?search=choropleth) map.
This is a very common type of map where coloring and/or shading is used to differentiate a region of the map based on the value of a variable.
These maps are popular, and can be very persuasive, but you should be aware of some challenges when making and interpreting choropleth maps and other data maps.
Three common map types include:
* **Choropleth**: color or shade regions based on the value of a variable
* **Proportional symbol**: associate a symbol with each location, but scale its size to reflect the value of a variable
* **Dot density**: place dots for each data point, and view their accumulation
We note that in a proportional symbol map the symbol placed on the map is usually two\-dimensional.
Its size—*in area*—should be scaled in proportion to the quantity being mapped.
Be aware that often the size of the symbol is defined by its *radius*.
If the *radius* is in direct proportion to the quantity being mapped, then the area will be disproportionately large.
Always scale the size of proportional symbols in terms of their area.
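To make this concrete, here is a minimal sketch with made-up populations for two hypothetical cities (the column names are ours): scaling the *radius* directly by the value turns a fourfold difference into a sixteenfold difference in area, while scaling by the square root keeps the areas in the intended proportion.
```
library(dplyr)
library(tibble)

cities <- tibble(
  name = c("A", "B"),
  population = c(100000, 400000)
)
cities %>%
  mutate(
    radius_naive = population / 100000,       # radius proportional to the value
    radius_area = sqrt(population / 100000),  # area proportional to the value
    area_naive = pi * radius_naive^2,         # areas differ by a factor of 16
    area_correct = pi * radius_area^2         # areas differ by a factor of 4, as intended
  )
```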
As noted in Chapter [2](ch-vizI.html#ch:vizI), the choice of scale is also important and often done poorly.
The relationship between various quantities can be altered by scale.
In Chapter [2](ch-vizI.html#ch:vizI), we showed how the use of logarithmic scale can be used to improve the readability of a scatterplot.
In Figure [17\.12](ch-spatial.html#fig:nc-map), we illustrated the importance of properly setting the scale of a proportion so that 0\.5 was exactly in the middle.
Try making Figure [17\.12](ch-spatial.html#fig:nc-map) without doing this and see if the results are as easily interpretable.
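As a quick sketch of that experiment (assuming the `nc` plot object created above is still in scope), we simply omit the `limits` argument and let the palette endpoints track the observed minimum and maximum, so that white no longer corresponds to 50%.
```
nc +
  aes(fill = r_prop) +
  scale_fill_distiller("Proportion\nRepublican", palette = "RdBu")
```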
Decisions about colors are also crucial to making an effective map.
In Chapter [2](ch-vizI.html#ch:vizI), we mentioned the color palettes available through **RColorBrewer**.
When making maps, categorical variables should be displayed using a *qualitative* palette, while quantitative variables should be displayed using a *sequential* or *diverging* palette.
In Figure [17\.12](ch-spatial.html#fig:nc-map) we employed a diverging palette, because Republicans and Democrats are on two opposite ends of the scale, with the neutral white color representing 0\.5\.
It’s important to decide how to deal with missing values.
Leaving them in a default color (e.g., white) risks having them mistaken for observed values; an explicit, clearly distinct color is safer.
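For example, the continuous fill scales in **ggplot2** accept an `na.value` argument; a minimal sketch of giving missing districts an unambiguous gray:
```
library(ggplot2)

# Districts with a missing r_prop would render in a neutral gray rather than
# being mistaken for a white (50%) value.
scale_fill_distiller(
  "Proportion\nRepublican",
  palette = "RdBu",
  limits = c(0, 1),
  na.value = "grey80"
)
```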
Finally, the concept of *normalization* is fundamental.
Plotting raw data values on maps can easily distort the truth.
This is particularly true in the case of data maps, because area is an implied variable.
Thus, on choropleth maps, we almost always want to show some sort of density or ratio rather than raw values (i.e., counts).
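As a minimal sketch using the `nc_merged` object created above, compare the raw vote count with votes per square kilometer; the raw count largely reflects how big and populous each district is, while the density is a quantity that makes sense to shade.
```
library(dplyr)
library(sf)

nc_merged %>%
  mutate(
    area_km2 = as.numeric(units::set_units(st_area(geometry), "km^2")),
    votes_per_km2 = total_votes / area_km2   # a density, suitable for a choropleth
  ) %>%
  select(DISTRICT, total_votes, votes_per_km2)
```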
17\.6 Projecting polygons
-------------------------
It is worth briefly illustrating the hazards of mapping unprojected data.
Consider the congressional district map for the entire country.
To plot this, we follow the same steps as before, but omit the step of restricting to North Carolina.
There is one additional step here for creating a mapping between state names and their abbreviations.
Thankfully, these data are built into **R**.
```
districts_full <- districts %>%
left_join(
tibble(state.abb, state.name),
by = c("STATENAME" = "state.name")
) %>%
left_join(
district_elections,
by = c("state.abb" = "state", "DISTRICT" = "district")
)
```
We can make the map by adding white polygons for the generic map data and then adding colored polygons for each congressional district. Some clipping will make this easier to see.
```
box <- st_bbox(districts_full)
world <- map_data("world") %>%
st_as_sf(coords = c("long", "lat")) %>%
group_by(group) %>%
summarize(region = first(region), do_union = FALSE) %>%
st_cast("POLYGON") %>%
st_set_crs(4269)
```
We display the Mercator projection of this base map in Figure [17\.14](ch-spatial.html#fig:us-unprojected).
Note how massive Alaska appears to be in relation to the other states.
Alaska is big, but it is not that big!
This is a distortion of reality due to the projection.
```
map_4269 <- ggplot(data = districts_full) +
geom_sf(data = world, size = 0.1) +
geom_sf(aes(fill = r_prop), size = 0.1) +
scale_fill_distiller(palette = "RdBu", limits = c(0, 1)) +
theme_void() +
labs(fill = "Proportion\nRepublican") +
xlim(-180, -50) + ylim(box[c("ymin", "ymax")])
map_4269
```
Figure 17\.14: U.S. congressional election results, 2012 (Mercator projection).
We can use the Albers equal area projection to make a more representative picture, as shown in Figure [17\.15](ch-spatial.html#fig:us-albers).
Note how Alaska is still the biggest state (and district) by area, but it is now much closer in size to Texas.
```
districts_aea <- districts_full %>%
st_transform(5070)
box <- st_bbox(districts_aea)
map_4269 %+% districts_aea +
xlim(box[c("xmin", "xmax")]) + ylim(box[c("ymin", "ymax")])
```
Figure 17\.15: U.S. congressional election results, 2012 (Albers equal area projection).
17\.7 Playing well with others
------------------------------
There are many technologies outside of **R** that allow you to work with spatial data. [*ArcGIS*](https://en.wikipedia.org/w/index.php?search=ArcGIS) is a proprietary [*Geographic Information System*](https://en.wikipedia.org/w/index.php?search=Geographic%20Information%20System) (GIS) software package that is considered by many to be the industry state\-of\-the\-art. [*QGIS*](https://en.wikipedia.org/w/index.php?search=QGIS) is its open\-source competitor. Both have graphical user interfaces.
[*Keyhole Markup Language*](https://en.wikipedia.org/w/index.php?search=Keyhole%20Markup%20Language) (KML) is an [*XML*](https://en.wikipedia.org/w/index.php?search=XML) file format for storing geographic data. KML files can be read by [*Google Earth*](https://en.wikipedia.org/w/index.php?search=Google%20Earth) and other GIS applications. An `sf` object in **R** can be written to KML using the `st_write()` function. These files can then be read by ArcGIS, Google Maps, or Google Earth. Here, we illustrate how to create a KML file for the North Carolina congressional districts data frame that we defined earlier. A screenshot of the resulting output in Google Earth is shown in Figure [17\.16](ch-spatial.html#fig:googleearth).
```
nc_merged %>%
st_transform(4326) %>%
st_write("/tmp/nc_congress113.kml", driver = "kml")
```
Figure 17\.16: Screenshot of the North Carolina congressional districts as rendered in Google Earth, after exporting to KML. Compare with Figure [17\.12](ch-spatial.html#fig:nc-map).
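Assuming the file written above, the round trip back into **R** is just another call to `st_read()`; sf (via GDAL) detects the format automatically.
```
nc_kml <- st_read("/tmp/nc_congress113.kml")
```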
17\.8 Further resources
-----------------------
Some excellent resources for spatial methods include R. S. Bivand, Pebesma, and Gómez\-Rubio (2013\) and Cressie (1993\).
A [helpful pocket guide to CRS systems in **R**](https://www.nceas.ucsb.edu/~frazier/RSpatialGuides/OverviewCoordinateReferenceSystems.pdf) contains information about projections, ellipsoids, and datums (reference points).
Pebesma (2021\) discusses the mechanics of how to work with spatial data in **R** in addition to introducing spatial modeling.
The **tigris** package provides access to shapefiles and demographic data from the United States Census Bureau (Walker 2020\).
The **sf** package has superseded the spatial packages **sp**, **rgdal**, and **rgeos**, which were used in the first edition of this book.
A guide for [migrating](https://github.com/r-spatial/sf/wiki/migrating) from **sp** to **sf** is maintained by the `r-spatial` group.
The fascinating story of [John Snow](https://en.wikipedia.org/w/index.php?search=John%20Snow) and his pursuit of the causes of cholera can be found in Vinten\-Johansen et al. (2003\).
Quantitative measures of gerrymandering have been a subject of interest to political scientists for some time (Niemi et al. 1990; Engstrom and Wildgen 1977; Hodge, Marshall, and Patterson 2010; Mackenzie 2009\).
17\.9 Exercises
---------------
**Problem 1 (Easy)**: Use the `geocode` function from the `tidygeocoder` package to find the latitude and longitude of the Emily Dickinson Museum in Amherst, Massachusetts.
**Problem 2 (Medium)**: The `pdxTrees` package contains a dataset of over 20,000 trees in Portland, Oregon, parks.
1. Using the `pdxTrees_parks` data, create an informative leaflet map for a tree enthusiast curious about the diversity and types of trees in the Portland area.
2. Not all trees were created equal. Create an interactive map that highlights trees in terms of their overall contribution to sustainability and value to the Portland community using variables such as `carbon_storage_value` and `total_annual_benefits`, etc.
3. Create an interactive map that helps identify any problematic trees that city officials should take note of.
**Problem 3 (Hard)**: Researchers at UCLA maintain historical congressional district shapefiles (see <http://cdmaps.polisci.ucla.edu>).
Use these data to discuss the history of gerrymandering in the United States.
Is the problem better or worse today?
**Problem 4 (Hard)**: Use the `tidycensus` package to conduct a spatial analysis of the Census data it contains for your home state. Can you illustrate how the demography of your state varies spatially?
**Problem 5 (Hard)**: Use the `tigris` package to make the congressional election district map for your home state. Do you see evidence of gerrymandering? Why or why not?
17\.10 Supplementary exercises
------------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/geospatial\-I.html\#geospatialI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/geospatial-I.html#geospatialI-online-exercises)
| Data Science |
mdsr-book.github.io | https://mdsr-book.github.io/mdsr2e/ch-spatial2.html |
Chapter 18 Geospatial computations
==================================
In Chapter [17](ch-spatial.html#ch:spatial), we learned how to work with geospatial data. We learned about [*shapefiles*](https://en.wikipedia.org/w/index.php?search=shapefiles), [*map projections*](https://en.wikipedia.org/w/index.php?search=map%20projections), and how to plot spatial data using both **ggplot2** and **leaflet**.
In this chapter, we will learn how to perform computations on geospatial data that will enable us to answer questions about how long or how big spatial features are.
We will also learn how to use geometric operations and spatial joins to create new geospatial objects. These capabilities will broaden the spectrum of analytical tasks we can perform, and accordingly expand the range of questions we can answer.
18\.1 Geospatial operations
---------------------------
### 18\.1\.1 Geocoding, routes, and distances
The process of converting a human\-readable address into geographic coordinates is called [*geocoding*](https://en.wikipedia.org/w/index.php?search=geocoding). While there are numerous APIs available online that will do this for you, the functionality provided in **tidygeocoder** by the `geocode()` function uses [Open Street Map](https://www.openstreetmap.org) and does not require registration to use the API. Here, we build a data frame of the places of business of the three authors, geocode the addresses of the schools, convert the resulting data frame into an **sf** object, and set the projection to `epsg:4326` (see Chapter [17](ch-spatial.html#ch:spatial)).
```
library(tidyverse)
library(mdsr)
library(sf)
library(tidygeocoder)
colleges <- tribble(
~school, ~address,
"Smith", "44 College Lane, Northampton, MA 01063",
"Macalester", "1600 Grand Ave, St Paul, MN 55105",
"Amherst", "Amherst College, Amherst, MA 01002"
) %>%
geocode(address, method = "osm") %>%
st_as_sf(coords = c("long", "lat")) %>%
st_set_crs(4326)
colleges
```
```
Simple feature collection with 3 features and 2 fields
Geometry type: POINT
Dimension: XY
Bounding box: xmin: -93.2 ymin: 42.3 xmax: -72.5 ymax: 44.9
Geodetic CRS: WGS 84
# A tibble: 3 × 3
school address geometry
* <chr> <chr> <POINT [°]>
1 Smith 44 College Lane, Northampton, MA 01063 (-72.6 42.3)
2 Macalester 1600 Grand Ave, St Paul, MN 55105 (-93.2 44.9)
3 Amherst Amherst College, Amherst, MA 01002 (-72.5 42.4)
```
[*Geodesic*](https://en.wikipedia.org/w/index.php?search=Geodesic) distances can be computed using the `st_distance()` function in **sf**. Here, we compute the distance between two of the Five Colleges[36](#fn36).
```
colleges %>%
filter(school != "Macalester") %>%
st_distance()
```
```
Units: [m]
[,1] [,2]
[1,] 0 11962
[2,] 11962 0
```
The geodesic distance is closer to “[*as the crow flies*](https://en.wikipedia.org/w/index.php?search=as%20the%20crow%20files),” but we might be more interested in the *driving distance* between two locations along the road.
To compute this, we need to access a service with a database of roads.
Here, we use the [openroute service](https://www.openrouteservice.org), which requires an API key[37](#fn37). The **openrouteservice** package provides this access via the `ors_directions()` function.
Note that the value of the API key is not shown.
You will need your own API key to use this service.
```
library(openrouteservice)
ors_api_key()
```
```
smith_amherst <- colleges %>%
filter(school != "Macalester") %>%
st_coordinates() %>%
as_tibble()
route_driving <- smith_amherst %>%
ors_directions(profile = "driving-car", output = "sf")
```
Note the difference between the geodesic distance computed above and the driving distance computed below.
Of course, the driving distance must be longer than the geodesic distance.
```
route_driving %>%
st_length()
```
```
13541 [m]
```
If you prefer, you can convert meters to miles using the `set_units()` function from the **units** package.
```
route_driving %>%
st_length() %>%
units::set_units("miles")
```
```
8.41 [miles]
```
Given the convenient [Norwottuck Rail Trail](https://www.mass.gov/locations/norwottuck-rail-trail) connecting Northampton and Amherst, we might prefer to bike.
Will that be shorter?
```
route_cycling <- smith_amherst %>%
ors_directions(profile = "cycling-regular", output = "sf")
route_cycling %>%
st_length()
```
```
14050 [m]
```
It turns out that the rail trail path is slightly longer (but far more scenic).
Since the [*Calvin Coolidge Bridge*](https://en.wikipedia.org/w/index.php?search=Calvin%20Coolidge%20Bridge) is the only reasonable way to get from Northampton to Amherst when driving, there is only one shortest route between Smith and Amherst, as shown in Figure [18\.1](ch-spatial2.html#fig:smith-amherst).
We also show the shortest biking route, which follows the Norwottuck Rail Trail.
```
library(leaflet)
leaflet() %>%
addTiles() %>%
addPolylines(data = route_driving, weight = 10) %>%
addPolylines(data = route_cycling, color = "green", weight = 10)
```
Figure 18\.1: The fastest route from Smith College to Amherst College, by both car (blue) and bike (green).
However, shortest paths in a network are not unique (see Chapter [20](ch-netsci.html#ch:netsci)). Ben’s daily commute to [*Citi Field*](https://en.wikipedia.org/w/index.php?search=Citi%20Field) from his apartment in [*Brooklyn*](https://en.wikipedia.org/w/index.php?search=Brooklyn) presented three distinct alternatives:
1. One could take the [*Brooklyn\-Queens Expressway*](https://en.wikipedia.org/w/index.php?search=Brooklyn-Queens%20Expressway) (I\-278 E) to the [*Grand Central Parkway*](https://en.wikipedia.org/w/index.php?search=Grand%20Central%20Parkway) E and pass by [*LaGuardia Airport*](https://en.wikipedia.org/w/index.php?search=LaGuardia%20Airport).
2. One could continue on the [*Long Island Expressway*](https://en.wikipedia.org/w/index.php?search=Long%20Island%20Expressway) (I\-495 E) and then approach Citi Field from the opposite direction on the Grand Central Parkway W.
3. One could avoid highways altogether and take [*Roosevelt Avenue*](https://en.wikipedia.org/w/index.php?search=Roosevelt%20Avenue) all the way through [*Queens*](https://en.wikipedia.org/w/index.php?search=Queens).
The last of these routes is the shortest but often takes longer due to traffic. The first route is the most convenient approach to the Citi Field employee parking lot.
These two routes are overlaid on the map in Figure [18\.2](ch-spatial2.html#fig:citifield).
```
commute <- tribble(
~place, ~address,
"home", "736 Leonard St, Brooklyn, NY",
"lga", "LaGuardia Airport, Queens, NY",
"work", "Citi Field, 41 Seaver Way, Queens, NY 11368",
) %>%
geocode(address, method = "osm") %>%
st_as_sf(coords = c("long", "lat")) %>%
st_set_crs(4326)
route_direct <- commute %>%
filter(place %in% c("home", "work")) %>%
st_coordinates() %>%
as_tibble() %>%
ors_directions(output = "sf", preference = "recommended")
route_gcp <- commute %>%
st_coordinates() %>%
as_tibble() %>%
ors_directions(output = "sf")
leaflet() %>%
addTiles() %>%
addMarkers(data = commute, popup = ~place) %>%
addPolylines(data = route_direct, color = "green", weight = 10) %>%
addPolylines(data = route_gcp, weight = 10)
```
Figure 18\.2: Alternative commuting routes from Ben’s old apartment in Brooklyn to Citi Field. Note that the routes overlap for most of the way from Brooklyn to the I\-278 E onramp on Roosevelt Avenue.
### 18\.1\.2 Geometric operations
Much of the power of working with geospatial data comes from the interactions between various layers of data. The **sf** package provides many features that enable us to compute with geospatial data.
A basic geospatial question is: which parts of one set of geospatial objects lie within another? To illustrate, we use geospatial data from the MacLeish field station in [*Whately, MA*](https://en.wikipedia.org/w/index.php?search=Whately,%20MA). These data are provided by the **macleish** package. Figure [18\.3](ch-spatial2.html#fig:macleish-boundary) illustrates that several streams pass through the MacLeish property.
```
library(sf)
library(macleish)
boundary <- macleish_layers %>%
pluck("boundary")
streams <- macleish_layers %>%
pluck("streams")
boundary_plot <- ggplot(boundary) +
geom_sf() +
scale_x_continuous(breaks = c(-72.677, -72.683))
boundary_plot +
geom_sf(data = streams, color = "blue", size = 1.5)
```
Figure 18\.3: Streams cross through the boundary of the MacLeish property.
The data from MacLeish happens to have a variable called `Shape_Area` that contains the precomputed size of the property.
```
boundary %>%
pull(Shape_Area)
```
```
[1] 1033988
```
Is this accurate? We can easily compute basic geometric properties of spatial objects, such as area and length, using the `st_area()` and `st_length()` functions.
```
st_area(boundary)
```
```
1032353 [m^2]
```
The exact computed area is very close to the reported value. We can also convert the area in square meters to [*acres*](https://en.wikipedia.org/w/index.php?search=acres) by dividing by a known conversion factor.
```
st_area(boundary) / 4046.8564224
```
```
255 [m^2]
```
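Alternatively, the **units** package (which we used above to convert meters to miles) can perform the conversion and keep the units label attached, assuming the underlying units database recognizes the acre (it typically does).
```
st_area(boundary) %>%
  units::set_units("acre")
```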
Similarly, we can compute the length of each segment of the streams and the location of the [*centroid*](https://en.wikipedia.org/w/index.php?search=centroid) of the property.
```
streams %>%
mutate(length = st_length(geometry))
```
```
Simple feature collection with 13 features and 2 fields
Geometry type: LINESTRING
Dimension: XY
Bounding box: xmin: -72.7 ymin: 42.4 xmax: -72.7 ymax: 42.5
Geodetic CRS: WGS 84
First 10 features:
Id geometry length
1 1 LINESTRING (-72.7 42.5, -72... 593.3 [m]
2 1 LINESTRING (-72.7 42.5, -72... 412.3 [m]
3 1 LINESTRING (-72.7 42.5, -72... 137.9 [m]
4 1 LINESTRING (-72.7 42.5, -72... 40.3 [m]
5 1 LINESTRING (-72.7 42.5, -72... 51.0 [m]
6 1 LINESTRING (-72.7 42.5, -72... 592.8 [m]
7 1 LINESTRING (-72.7 42.5, -72... 2152.4 [m]
8 3 LINESTRING (-72.7 42.5, -72... 1651.3 [m]
9 3 LINESTRING (-72.7 42.5, -72... 316.2 [m]
10 3 LINESTRING (-72.7 42.5, -72... 388.3 [m]
```
```
boundary %>%
st_centroid()
```
```
Simple feature collection with 1 feature and 3 fields
Geometry type: POINT
Dimension: XY
Bounding box: xmin: -72.7 ymin: 42.5 xmax: -72.7 ymax: 42.5
Geodetic CRS: WGS 84
OBJECTID Shape_Leng Shape_Area geometry
1 1 5894 1033988 POINT (-72.7 42.5)
```
As promised, we can also combine two geospatial layers. The functions `st_intersects()` and `st_intersection()` take two geospatial objects and return a `logical` indicating whether they intersect, or another **sf** object representing that intersection, respectively.
```
st_intersects(boundary, streams)
```
```
Sparse geometry binary predicate list of length 1, where the
predicate was `intersects'
1: 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, ...
```
```
st_intersection(boundary, streams)
```
```
Simple feature collection with 11 features and 4 fields
Geometry type: GEOMETRY
Dimension: XY
Bounding box: xmin: -72.7 ymin: 42.4 xmax: -72.7 ymax: 42.5
Geodetic CRS: WGS 84
First 10 features:
OBJECTID Shape_Leng Shape_Area Id geometry
1 1 5894 1033988 1 LINESTRING (-72.7 42.5, -72...
1.1 1 5894 1033988 1 LINESTRING (-72.7 42.5, -72...
1.2 1 5894 1033988 1 LINESTRING (-72.7 42.5, -72...
1.3 1 5894 1033988 1 MULTILINESTRING ((-72.7 42....
1.4 1 5894 1033988 1 LINESTRING (-72.7 42.4, -72...
1.5 1 5894 1033988 3 LINESTRING (-72.7 42.5, -72...
1.6 1 5894 1033988 3 LINESTRING (-72.7 42.5, -72...
1.7 1 5894 1033988 3 LINESTRING (-72.7 42.5, -72...
1.8 1 5894 1033988 3 LINESTRING (-72.7 42.5, -72...
1.9 1 5894 1033988 3 LINESTRING (-72.7 42.4, -72...
```
`st_intersects()` is called a [*predicate*](https://en.wikipedia.org/w/index.php?search=predicate) function because it returns a `logical`. It answers the question: “Do these two layers intersect?”
On the other hand, `st_intersection()` performs a set operation. It answers the question: “What is the set that represents the intersection of these two layers?” Similar functions compute familiar set operations like unions, differences, and symmetric differences, while a whole host of additional predicate functions detect containment (`st_contains()`, `st_within()`, etc.), crossings, overlaps, etc.
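As a brief sketch of two of these other verbs applied to the same layers:
```
# st_union() dissolves the separate stream segments into a single geometry,
# while st_within() is a predicate that asks whether each stream segment lies
# entirely inside the boundary polygon.
st_union(streams)
st_within(streams, boundary, sparse = FALSE)
```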
In Figure [18\.4](ch-spatial2.html#fig:macleish-boundary-streams)(a) we use the `st_intersection()` function to show only the parts of the streams that are contained within the MacLeish property.
In Figure [18\.4](ch-spatial2.html#fig:macleish-boundary-streams)(b), we show the corresponding set of stream parts that lie *outside* of the MacLeish property.
```
boundary_plot +
geom_sf(
data = st_intersection(boundary, streams),
color = "blue",
size = 1.5
)
boundary_plot +
geom_sf(
data = st_difference(streams, boundary),
color = "blue",
size = 1.5
)
```
Figure 18\.4: Streams on the MacLeish property.
Different spatial geometries intersect in different ways. Above, we saw that the intersection of steams (which are `LINESTRING`s) and the boundary (which is a `POLYGON`) produced `LINESTRING` geometries. Below, we compute the intersection of the streams with the trails that exist at MacLeish. The trails are also `LINESTRING` geometries, and the intersection of two `LINESTRING` geometries produces a set of `POINT` geometries.
```
trails <- macleish_layers %>%
pluck("trails")
st_intersection(trails, streams)
```
```
Simple feature collection with 10 features and 3 fields
Geometry type: GEOMETRY
Dimension: XY
Bounding box: xmin: -72.7 ymin: 42.4 xmax: -72.7 ymax: 42.5
Geodetic CRS: WGS 84
name color Id geometry
8 entry trail - 3 POINT (-72.7 42.4)
9 Eastern Loop Blue 3 POINT (-72.7 42.5)
13 Snowmobile Trail <NA> 3 MULTIPOINT ((-72.7 42.5), (...
15 Driveway <NA> 3 POINT (-72.7 42.4)
6 Western Loop Red 3 POINT (-72.7 42.4)
1 Porcupine Trail White 3 POINT (-72.7 42.5)
6.1 Western Loop Red 3 POINT (-72.7 42.5)
4 Vernal Pool Loop Yellow 3 POINT (-72.7 42.4)
3 Poplar Hill Road Road 3 POINT (-72.7 42.5)
14 Snowmobile Trail <NA> 3 POINT (-72.7 42.5)
```
Note that one of the features is a `MULTIPOINT`. Occasionally, a trail might intersect a stream in more than one place (resulting in a `MULTIPOINT` geometry). This occurs here, where the Snowmobile Trail intersects one of the stream segments in two different places. To clean this up, we first use the `st_cast()` function to convert everything to `MULTIPOINT`, and then cast everything to `POINT`. (We can’t go straight to `POINT` because we start with a mixture of `POINT`s and `MULTIPOINT`s.)
```
bridges <- st_intersection(trails, streams) %>%
st_cast("MULTIPOINT") %>%
st_cast("POINT")
nrow(bridges)
```
```
[1] 11
```
Note that we now have 11 features instead of 10\. In this case, the intersections of trails and streams have a natural interpretation: these must be bridges of some type! How else could the trail continue through the stream? Figure [18\.5](ch-spatial2.html#fig:macleish-bridges) shows the trails, the streams, and these “bridges” (some of the points are hard to see because they partially overlap).
```
boundary_plot +
geom_sf(data = trails, color = "brown", size = 1.5) +
geom_sf(data = streams, color = "blue", size = 1.5) +
geom_sf(data = bridges, pch = 21, fill = "yellow", size = 3)
```
Figure 18\.5: Bridges on the MacLeish property where trails and streams intersect.
18\.2 Geospatial aggregation
----------------------------
In Section [18\.1\.2](ch-spatial2.html#sec:geometric-operations), we saw how we can split `MULTIPOINT` geometries into `POINT` geometries. This was, in a sense, geospatial disaggregation. Here, we consider the perhaps more natural behavior of spatial aggregation.
Just as we saw previously that the intersection of different geometries can produce different resulting geometries, so too will different geometries aggregate in different ways. For example, `POINT` geometries can be aggregated into `MULTIPOINT` geometries.
The **sf** package implements spatial aggregation using the same `group_by()` and `summarize()` functions that you learned in Chapter [4](ch-dataI.html#ch:dataI). The only difference is that we might have to specify *how* we want the spatial layers to be aggregated. The default aggregation method is `st_union()`, which makes sense for most purposes.
Note that the `trails` layer is broken into segments: the Western Loop is comprised of three different features.
```
trails
```
```
Simple feature collection with 15 features and 2 fields
Geometry type: LINESTRING
Dimension: XY
Bounding box: xmin: -72.7 ymin: 42.4 xmax: -72.7 ymax: 42.5
Geodetic CRS: WGS 84
First 10 features:
name color geometry
1 Porcupine Trail White LINESTRING (-72.7 42.5, -72...
2 Western Loop Red LINESTRING (-72.7 42.5, -72...
3 Poplar Hill Road Road LINESTRING (-72.7 42.5, -72...
4 Vernal Pool Loop Yellow LINESTRING (-72.7 42.4, -72...
5 Eastern Loop Blue LINESTRING (-72.7 42.5, -72...
6 Western Loop Red LINESTRING (-72.7 42.5, -72...
7 Western Loop Red LINESTRING (-72.7 42.4, -72...
8 entry trail - LINESTRING (-72.7 42.4, -72...
9 Eastern Loop Blue LINESTRING (-72.7 42.5, -72...
10 Easy Out Red LINESTRING (-72.7 42.5, -72...
```
Which trail is the longest? We know we can compute the length of the features with `st_length()`, but then we’d have to add up the lengths of each segment. Instead, we can aggregate the segments and do the length computation on the full trails.
```
trails_full <- trails %>%
group_by(name) %>%
summarize(num_segments = n()) %>%
mutate(trail_length = st_length(geometry)) %>%
arrange(desc(trail_length))
trails_full
```
```
Simple feature collection with 9 features and 3 fields
Geometry type: GEOMETRY
Dimension: XY
Bounding box: xmin: -72.7 ymin: 42.4 xmax: -72.7 ymax: 42.5
Geodetic CRS: WGS 84
# A tibble: 9 × 4
name num_segments geometry trail_length
<fct> <int> <GEOMETRY [°]> [m]
1 Snowmobi… 2 MULTILINESTRING ((-72.7 42.5, -72.7 4… 2576.
2 Eastern … 2 MULTILINESTRING ((-72.7 42.5, -72.7 4… 1939.
3 Western … 3 MULTILINESTRING ((-72.7 42.5, -72.7 4… 1350.
4 Poplar H… 2 MULTILINESTRING ((-72.7 42.5, -72.7 4… 1040.
5 Porcupin… 1 LINESTRING (-72.7 42.5, -72.7 42.5, -… 700.
6 Vernal P… 1 LINESTRING (-72.7 42.4, -72.7 42.4, -… 360.
7 entry tr… 1 LINESTRING (-72.7 42.4, -72.7 42.4, -… 208.
8 Driveway 1 LINESTRING (-72.7 42.4, -72.7 42.4, -… 173.
9 Easy Out 2 MULTILINESTRING ((-72.7 42.5, -72.7 4… 136.
```
18\.3 Geospatial joins
----------------------
In Section [17\.4\.3](ch-spatial.html#sec:sf-inner-join), we show how the `inner_join()` function can be used to merge geospatial data with additional data. This works because the geospatial data was stored in an **sf** object, which is also a data frame. In that case, since the second data frame was not spatial, by necessity the key on which the join was performed was a non\-spatial attribute.
A geospatial join is a fundamentally different type of operation, in which *both* data frames are geospatial, and the joining key is a geospatial attribute. This operation is implemented by the `st_join()` function, which behaves similarly to the `inner_join()` function, but with some additional complexities due to the different nature of its task.
To illustrate this, we consider the question of in which type of forest the two campsites at MacLeish lie (see Figure [18\.6](ch-spatial2.html#fig:macleish-campsites)).
```
forests <- macleish_layers %>%
pluck("forests")
camp_sites <- macleish_layers %>%
pluck("camp_sites")
boundary_plot +
geom_sf(data = forests, fill = "green", alpha = 0.1) +
geom_sf(data = camp_sites, size = 4) +
geom_sf_label(
data = camp_sites, aes(label = name),
nudge_y = 0.001
)
```
Figure 18\.6: The MacLeish property has two campsites and many different types of forest.
It is important to note that this question is inherently spatial. There simply is no variable between the `forests` layer and the `camp_sites` layer that would allow you to connect them other than their geospatial location.
Like `inner_join()`, the `st_join()` function takes two data frames as its first two arguments. There is no `st_left_join()` function, but instead `st_join()` takes a `left` argument, that is set to `TRUE` by default. Finally, the `join` argument takes a predicate function that determines the criteria for whether the spatial features match. The default is `st_intersects()`, but here we employ `st_within()`, since we want the `POINT` geometries of the camp sites to lie *within* the `POLYGON` geometries of the forests.
```
st_join(camp_sites, forests, left = FALSE, join = st_within) %>%
select(name, type)
```
```
Simple feature collection with 2 features and 2 fields
Geometry type: POINT
Dimension: XY
Bounding box: xmin: -72.7 ymin: 42.5 xmax: -72.7 ymax: 42.5
Geodetic CRS: WGS 84
# A tibble: 2 × 3
name type geometry
<chr> <fct> <POINT [°]>
1 Group Campsite Old Field White Pine Forest (-72.7 42.5)
2 Remote Campsite Sugar Maple Forest (-72.7 42.5)
```
We see that the Group Campsite is in an [*Eastern White Pine*](https://en.wikipedia.org/w/index.php?search=Eastern%20White%20Pine) forest, while the Remote Campsite is in a [*Sugar Maple*](https://en.wikipedia.org/w/index.php?search=Sugar%20Maple) forest.
18\.4 Extended example: Trail elevations at MacLeish
----------------------------------------------------
Many hiking trails provide trail elevation maps (or [elevation profiles](http://downloads.esri.com/learnarcgis/educators/elevation-profiles.pdf)) that depict changes in elevation along the trail. These maps can help hikers understand the interplay between the uphill and downhill segments of the trail and where they occur along their hike.
More formally, various trail rating systems exist to numerically score the difficulty of a hike. [*Shenandoah National Park*](https://en.wikipedia.org/w/index.php?search=Shenandoah%20National%20Park) uses [this simple trail rating system](https://www.nps.gov/shen/planyourvisit/how-to-determine-hiking-difficulty.htm):
\\\[
rating \= \\sqrt{gain \\cdot 2 \\cdot distance}
\\]
A rating below 50 corresponds to the easiest class of hike.
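As a quick sanity check of the formula with made-up numbers, a hike gaining 500 feet over 3 miles scores just above that cutoff:
```
sqrt(500 * 2 * 3)   # about 54.8, so no longer in the easiest class
```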
In this example, we will construct an elevation profile and compute the trail rating for the longest trail at MacLeish.
The **macleish** package contains elevation contours that are 30 feet apart. These are relatively sparse, but they will suffice for our purposes.
```
elevations <- macleish_layers %>%
pluck("elevation")
```
First, we leverage the spatial aggregation work that we did previously to isolate the longest trail.
```
longest_trail <- trails_full %>%
head(1)
longest_trail
```
```
Simple feature collection with 1 feature and 3 fields
Geometry type: MULTILINESTRING
Dimension: XY
Bounding box: xmin: -72.7 ymin: 42.4 xmax: -72.7 ymax: 42.5
Geodetic CRS: WGS 84
# A tibble: 1 × 4
name num_segments geometry trail_length
<fct> <int> <MULTILINESTRING [°]> [m]
1 Snowmobi… 2 ((-72.7 42.5, -72.7 42.5, -72.7 42.5,… 2576.
```
Next, we compute the geospatial intersection between the trails and the elevation contours, both of which are `LINESTRING` geometries. This results in `POINT`s, but just as we saw above, there are several places where the trail crosses the same contour line more than once. If you think about walking up a hill and then back down the other side, this should make sense (see Figure [18\.7](ch-spatial2.html#fig:macleish-snowmobile-elevation)). These multiple intersections result in `MULTIPOINT` geometries. We unravel these just as we did before: by casting everything to `MULTIPOINT` and then casting everything back to `POINT`.
```
trail_elevations <- longest_trail %>%
st_intersection(elevations) %>%
st_cast("MULTIPOINT") %>%
st_cast("POINT")
```
Figure [18\.7](ch-spatial2.html#fig:macleish-snowmobile-elevation) reveals that the Snowmobile trail starts near the southernmost edge of the property at about 750 feet above sea level, snakes along a ridge at 780 feet, before climbing to the local peak at 870 feet, and finally descending the back side of the hill near the northern border of the property.
```
boundary_plot +
geom_sf(data = elevations, color = "dark gray") +
geom_sf(data = longest_trail, color = "brown", size = 1.5) +
geom_sf(data = trail_elevations, fill = "yellow", pch = 21, size = 3) +
geom_sf_label(
data = trail_elevations,
aes(label = CONTOUR_FT),
hjust = "right",
size = 2.5,
nudge_x = -0.0005
)
```
Figure 18\.7: The Snowmobile Trail at MacLeish, with contour lines depicted.
Finally, we need to put the features in order, so that we can compute the distance from the start of the trail. Luckily, in this case the trail goes directly south to north, so that we can use the latitude coordinate as an ordering variable.
In this case, we use `st_distance()` to compute the geodesic distances between the elevation contours.
This function returns an \\(n \\times n\\) `matrix` with all of the pairwise distances, but since we only want the distances from the southernmost point (which is the first element), we select only the first column of the resulting matrix.
To compute the actual distances (i.e., along the trail) we would have to split the trail into pieces.
We leave this as an exercise.
```
trail_elevations <- trail_elevations %>%
mutate(lat = st_coordinates(geometry)[, 2]) %>%
arrange(lat) %>%
mutate(distance_from_start = as.numeric(st_distance(geometry)[, 1]))
```
Figure [18\.8](ch-spatial2.html#fig:snowmobile-trail-map) shows our elevation profile for the Snowmobile trail.
```
ggplot(trail_elevations, aes(x = distance_from_start)) +
geom_ribbon(aes(ymax = CONTOUR_FT, ymin = 750)) +
scale_y_continuous("Elevation (feet above sea level)") +
scale_x_continuous("Geodesic distance from trail head (meters)") +
labs(
title = "Trail elevation map: Snowmobile trail",
subtitle = "Whately, MA",
caption = "Source: macleish package for R"
)
```
Figure 18\.8: Trail elevation map for the Snowmobile trail at MacLeish.
With a rating under 20, this trail rates as one of the easiest according to the Shenandoah system.
```
trail_elevations %>%
summarize(
gain = max(CONTOUR_FT) - min(CONTOUR_FT),
trail_length = max(units::set_units(trail_length, "miles")),
rating = sqrt(gain * 2 * as.numeric(trail_length))
)
```
```
Simple feature collection with 1 feature and 3 fields
Geometry type: MULTIPOINT
Dimension: XY
Bounding box: xmin: -72.7 ymin: 42.4 xmax: -72.7 ymax: 42.5
Geodetic CRS: WGS 84
# A tibble: 1 × 4
gain trail_length rating geometry
<dbl> [miles] <dbl> <MULTIPOINT [°]>
1 120 1.60 19.6 ((-72.7 42.4), (-72.7 42.5), (-72.7 42.5), (-72…
```
18\.5 Further resources
-----------------------
Lovelace, Nowosad, and Muenchow (2019\) and Engel (2019\) were both helpful in updating this material to take advantage of **sf**.
18\.6 Exercises
---------------
**Problem 1 (Medium)**: The `Violations` data frame in the `mdsr` package contains information on violations noted in Board of Health inspections of New York City restaurants. These data contain spatial information in the form of addresses and zip codes.
1. Use the `geocode` function in `tidygeocoder` to obtain spatial coordinates for these restaurants.
2. Using the spatial coordinates you obtained in the previous exercise, create an informative static map using `ggspatial` that illustrates the nature and extent of restaurant violations in New York City.
3. Using the spatial coordinates you obtained in the previous exercises, create an informative interactive map using `leaflet` that illustrates the nature and extent of restaurant violations in New York City.
**Problem 2 (Medium)**:
1. Use the spatial data in the `macleish` package and `ggspatial` to make an informative static map of the MacLeish Field Station property.
2. Use the spatial data in the `macleish` package and `leaflet` to make an informative interactive map of the MacLeish Field Station property.
**Problem 3 (Hard)**: GIS data in the form of shapefiles is all over the Web. Government agencies are particularly good sources for these. The following code downloads bike trail data in Massachusetts from MassGIS. Use `bike_trails` to answer the following questions:
```
if (!file.exists("./biketrails_arc.zip")) {
part1 <- "http://download.massgis.digital.mass.gov/"
part2 <- "shapefiles/state/biketrails_arc.zip"
url <- paste(part1, part2, sep = "")
local_file <- basename(url)
download.file(url, destfile = local_file)
unzip(local_file, exdir = "./biketrails/")
}
```
```
library(sf)
dsn <- path.expand("./biketrails/biketrails_arc")
st_layers(dsn)
```
```
Driver: ESRI Shapefile
Available layers:
layer_name geometry_type features fields
1 biketrails_arc Line String 272 13
```
```
bike_trails <- read_sf(dsn)
```
1. How many distinct bike trail segments are there?
2. What is the longest individual bike trail segment?
3. How many segments are associated with the Norwottuck Rail Trail?
4. Among all of the named trails (which may have multiple features), which one has the longest total length?
5. The bike trails are in a [Lambert conformal conic projection](https://en.wikipedia.org/wiki/Lambert_conformal_conic_projection). Note that the units of the coordinates are very different from lat/long. In order to get these data onto our leaflet map, we need to re\-project them. Convert the bike trails to EPSG:4326, and create a leaflet map.
6. Color\-code the bike trails based on their length, and add an informative legend to the plot.
**Problem 4 (Hard)**: The MacLeish snowmobile trail map generated in the book is quite rudimentary.
Generate your own map that improves upon the aesthetics and information content.
18\.7 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/geospatial\-II.html\#geospatialII\-online\-exercises](https://mdsr-book.github.io/mdsr2e/geospatial-II.html#geospatialII-online-exercises)
| Data Science |
mdsr-book.github.io | https://mdsr-book.github.io/mdsr2e/ch-text.html |
Chapter 19 Text as data
=======================
So far, we have focused primarily on numerical data, but there is a whole field of research that focuses on unstructured textual data.
Fields such as [*natural language processing*](https://en.wikipedia.org/w/index.php?search=natural%20language%20processing) and [*computational linguistics*](https://en.wikipedia.org/w/index.php?search=computational%20linguistics) work directly with text documents to extract meaning algorithmically. Not surprisingly, a fundamental challenge is that computers are really good at storing text but not very good at understanding it, whereas humans are really good at understanding text but not very good at storing it.
Processing text data requires an additional set of wrangling skills.
In this chapter, we will introduce how text can be ingested, how [*corpora*](https://en.wikipedia.org/w/index.php?search=corpora) (collections of text documents) can be created, sentiments extracted, patterns described, and how [*regular expressions*](https://en.wikipedia.org/w/index.php?search=regular%20expressions) can be used to automate searches that would otherwise be excruciatingly labor\-intensive.
19\.1 Regular expressions using *Macbeth*
-----------------------------------------
As noted previously, working with textual data requires new tools.
In this section, we introduce the powerful grammar of regular expressions.
### 19\.1\.1 Parsing the text of the [*Scottish play*](https://en.wikipedia.org/w/index.php?search=Scottish%20play)
[*Project Gutenberg*](https://en.wikipedia.org/w/index.php?search=Project%20Gutenberg) contains the full text of all of [William Shakespeare](https://en.wikipedia.org/w/index.php?search=William%20Shakespeare)’s plays.
In this example, we will use text mining techniques to explore *The Tragedy of Macbeth*.
The text can be downloaded directly from Project Gutenberg.
Alternatively, the `Macbeth_raw` object is also included in the **mdsr** package.
```
library(tidyverse)
library(mdsr)
macbeth_url <- "http://www.gutenberg.org/cache/epub/1129/pg1129.txt"
Macbeth_raw <- RCurl::getURL(macbeth_url)
```
```
data(Macbeth_raw)
```
Note that `Macbeth_raw` is a *single* string of text (i.e., a character vector of length 1\) that contains the entire play. In order to work with this, we want to split this single string into a vector of strings using the `str_split()` function from the **stringr** package. To do this, we just have to specify the end\-of\-line character(s), which in this case are `\r\n`.
```
# str_split returns a list: we only want the first element
macbeth <- Macbeth_raw %>%
str_split("\r\n") %>%
pluck(1)
length(macbeth)
```
```
[1] 3194
```
Now let’s examine the text. Note that each speaking line begins with two spaces, followed by the speaker’s name in capital letters.
```
macbeth[300:310]
```
```
[1] "meeting a bleeding Sergeant."
[2] ""
[3] " DUNCAN. What bloody man is that? He can report,"
[4] " As seemeth by his plight, of the revolt"
[5] " The newest state."
[6] " MALCOLM. This is the sergeant"
[7] " Who like a good and hardy soldier fought"
[8] " 'Gainst my captivity. Hail, brave friend!"
[9] " Say to the King the knowledge of the broil"
[10] " As thou didst leave it."
[11] " SERGEANT. Doubtful it stood,"
```
The power of text mining comes from quantifying ideas embedded in the text. For example, how many times does the character Macbeth speak in the play? Think about this question for a moment. If you were holding a physical copy of the play, how would you compute this number? Would you flip through the book and mark down each speaking line on a separate piece of paper? Is your algorithm scalable? What if you had to do it for *all* characters in the play, and not just Macbeth? What if you had to do it for *all 37* of Shakespeare’s plays? What if you had to do it for all plays written in English?
Naturally, a computer cannot read the play and figure this out, but we can find all instances of Macbeth’s speaking lines by cleverly counting patterns in the text.
```
macbeth_lines <- macbeth %>%
str_subset(" MACBETH")
length(macbeth_lines)
```
```
[1] 147
```
```
head(macbeth_lines)
```
```
[1] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[2] " MACBETH. So foul and fair a day I have not seen."
[3] " MACBETH. Speak, if you can. What are you?"
[4] " MACBETH. Stay, you imperfect speakers, tell me more."
[5] " MACBETH. Into the air, and what seem'd corporal melted"
[6] " MACBETH. Your children shall be kings."
```
The `str_subset()` function works using a [*needle*](https://en.wikipedia.org/w/index.php?search=needle) in a [*haystack*](https://en.wikipedia.org/w/index.php?search=haystack) paradigm, wherein the first argument is the character vector in which you want to find patterns (i.e., the haystack) and the second argument is the [*regular expression*](https://en.wikipedia.org/w/index.php?search=regular%20expression) (or pattern) you want to find (i.e., the needle).
Alternatively, `str_which()` returns the *indices* of the haystack in which the needles were found.
By changing the needle, we find different results:
```
macbeth %>%
str_subset(" MACDUFF") %>%
length()
```
```
[1] 60
```
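Indices can be as useful as the matching lines themselves. The following is a quick sketch of our own (not from the book) that uses `str_which()` with the same needle to recover the positions of the lines mentioning Macduff; the count agrees with the 60 matches found above.
```
macduff_lines <- macbeth %>%
  str_which(" MACDUFF")
# positions of the matching lines within the macbeth vector
head(macduff_lines)
# there are 60 of them, matching the str_subset() count above
length(macduff_lines)
```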
The `str_detect()` function—which we use in the example in the next section—uses the same syntax but returns a logical vector as long as the haystack. Thus, while the length of the vector returned by `str_subset()` is the number of matches, the length of the vector returned by `str_detect()` is always the same as the length of the haystack vector.[38](#fn38)
```
macbeth %>%
str_subset(" MACBETH") %>%
length()
```
```
[1] 147
```
```
macbeth %>%
str_detect(" MACBETH") %>%
length()
```
```
[1] 3194
```
To extract the piece of each matching line that actually matched, use the `str_extract()` function from the **stringr** package.
```
pattern <- " MACBETH"
macbeth %>%
str_subset(pattern) %>%
str_extract(pattern) %>%
head()
```
```
[1] " MACBETH" " MACBETH" " MACBETH" " MACBETH" " MACBETH" " MACBETH"
```
Above, we use a literal string (e.g., “`MACBETH`”) as our needle to find exact matches in our haystack. This is the simplest type of pattern for which we could have searched, but the needle that `str_extract()` searches for can be any regular expression.
Regular expression syntax is very powerful and as a result, can become very complicated. Still, regular expressions are a grammar, so that learning a few basic concepts will allow you to build more efficient searches.
* **Metacharacters**: `.` is a [*metacharacter*](https://en.wikipedia.org/w/index.php?search=metacharacter) that matches any character. Note that if you want to search for the literal value of a metacharacter (e.g., a period), you have to escape it with a backslash. To use the pattern in **R**, two backslashes are needed. Note the difference in the results below.
```
macbeth %>%
str_subset("MAC.") %>%
head()
```
```
[1] "MACHINE READABLE COPIES MAY BE DISTRIBUTED SO LONG AS SUCH COPIES"
[2] "MACHINE READABLE COPIES OF THIS ETEXT, SO LONG AS SUCH COPIES"
[3] "WITH PERMISSION. ELECTRONIC AND MACHINE READABLE COPIES MAY BE"
[4] "THE TRAGEDY OF MACBETH"
[5] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[6] " LADY MACBETH, his wife"
```
```
macbeth %>%
str_subset("MACBETH\\.") %>%
head()
```
```
[1] " MACBETH. So foul and fair a day I have not seen."
[2] " MACBETH. Speak, if you can. What are you?"
[3] " MACBETH. Stay, you imperfect speakers, tell me more."
[4] " MACBETH. Into the air, and what seem'd corporal melted"
[5] " MACBETH. Your children shall be kings."
[6] " MACBETH. And Thane of Cawdor too. Went it not so?"
```
* **Character sets**: Use brackets to define sets of characters to match. This pattern will match any lines that contain `MAC` followed by any capital letter other than `A`. It will match `MACBETH` but not `MACALESTER`.
```
macbeth %>%
str_subset("MAC[B-Z]") %>%
head()
```
```
[1] "MACHINE READABLE COPIES MAY BE DISTRIBUTED SO LONG AS SUCH COPIES"
[2] "MACHINE READABLE COPIES OF THIS ETEXT, SO LONG AS SUCH COPIES"
[3] "WITH PERMISSION. ELECTRONIC AND MACHINE READABLE COPIES MAY BE"
[4] "THE TRAGEDY OF MACBETH"
[5] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[6] " LADY MACBETH, his wife"
```
* **Alternation**: To search for a few specific alternatives, use the `|` wrapped in parentheses. This pattern will match any lines that contain either `MACB` or `MACD`.
```
macbeth %>%
str_subset("MAC(B|D)") %>%
head()
```
```
[1] "THE TRAGEDY OF MACBETH"
[2] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[3] " LADY MACBETH, his wife"
[4] " MACDUFF, Thane of Fife, a nobleman of Scotland"
[5] " LADY MACDUFF, his wife"
[6] " MACBETH. So foul and fair a day I have not seen."
```
* **Anchors**: Use `^` to anchor a pattern to the beginning of a piece of text, and `$` to anchor it to the end.
```
macbeth %>%
str_subset("^ MAC[B-Z]") %>%
head()
```
```
[1] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[2] " MACDUFF, Thane of Fife, a nobleman of Scotland"
[3] " MACBETH. So foul and fair a day I have not seen."
[4] " MACBETH. Speak, if you can. What are you?"
[5] " MACBETH. Stay, you imperfect speakers, tell me more."
[6] " MACBETH. Into the air, and what seem'd corporal melted"
```
* **Repetitions**: We can also specify the number of times that we want certain patterns to occur: `?` indicates zero or one time, `*` indicates zero or more times, and `+` indicates one or more times. This quantification is applied to the previous element in the pattern—in this case, a space.
```
macbeth %>%
str_subset("^ ?MAC[B-Z]") %>%
head()
```
```
[1] "MACHINE READABLE COPIES MAY BE DISTRIBUTED SO LONG AS SUCH COPIES"
[2] "MACHINE READABLE COPIES OF THIS ETEXT, SO LONG AS SUCH COPIES"
```
```
macbeth %>%
str_subset("^ *MAC[B-Z]") %>%
head()
```
```
[1] "MACHINE READABLE COPIES MAY BE DISTRIBUTED SO LONG AS SUCH COPIES"
[2] "MACHINE READABLE COPIES OF THIS ETEXT, SO LONG AS SUCH COPIES"
[3] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[4] " MACDUFF, Thane of Fife, a nobleman of Scotland"
[5] " MACBETH. So foul and fair a day I have not seen."
[6] " MACBETH. Speak, if you can. What are you?"
```
```
macbeth %>%
str_subset("^ +MAC[B-Z]") %>%
head()
```
```
[1] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[2] " MACDUFF, Thane of Fife, a nobleman of Scotland"
[3] " MACBETH. So foul and fair a day I have not seen."
[4] " MACBETH. Speak, if you can. What are you?"
[5] " MACBETH. Stay, you imperfect speakers, tell me more."
[6] " MACBETH. Into the air, and what seem'd corporal melted"
```
Combining these basic rules can automate incredibly powerful and sophisticated searches, and regular expressions are an increasingly necessary tool in every data scientist’s toolbox.
Regular expressions are a powerful and commonly\-used tool. They are implemented in many programming languages. Developing a working understanding of regular expressions will pay off in text wrangling.
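To illustrate how these pieces fit together, here is a short sketch of our own (not code from the book) that combines an anchor, character sets, repetition, and an escaped period to tally speaking lines by character. It relies on the `macbeth` vector and the **tidyverse** packages loaded above, and the pattern is only approximate: any indented run of capital letters ending in a period would be counted as well.
```
# anchor (^), repetition (+), character sets, and an escaped literal period
# capture the speaker prefix at the start of each speaking line
speaker_counts <- tibble(line = macbeth) %>%
  mutate(speaker = str_extract(line, "^ +[A-Z][A-Z ]+\\.")) %>%
  filter(!is.na(speaker)) %>%
  mutate(speaker = str_squish(str_remove(speaker, "\\.$"))) %>%
  count(speaker, sort = TRUE)
head(speaker_counts)
```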
### 19\.1\.2 Life and death in *Macbeth*
Can we use these techniques to analyze the speaking patterns in Macbeth? Are there things we can learn about the play simply by noting who speaks when? Four of the major characters in *Macbeth* are the titular character, his wife Lady Macbeth, his friend Banquo, and Duncan, the King of Scotland.
We might learn something about the play by knowing when each character speaks as a function of the line number in the play. We can retrieve this information using `str_detect()`.
```
macbeth_chars <- tribble(
~name, ~regexp,
"Macbeth", " MACBETH\\.",
"Lady Macbeth", " LADY MACBETH\\.",
"Banquo", " BANQUO\\.",
"Duncan", " DUNCAN\\.",
) %>%
mutate(speaks = map(regexp, str_detect, string = macbeth))
```
However, for plotting purposes we will want to convert these `logical` vectors into `numeric` vectors, and tidy up the data. Since there is unwanted text at the beginning and the end of the play text, we will also restrict our analysis to the actual contents of the play (which occurs from line 218 to line 3172\).
```
speaker_freq <- macbeth_chars %>%
unnest(cols = speaks) %>%
mutate(
line = rep(1:length(macbeth), 4),
speaks = as.numeric(speaks)
) %>%
filter(line > 218 & line < 3172)
glimpse(speaker_freq)
```
```
Rows: 11,812
Columns: 4
$ name <chr> "Macbeth", "Macbeth", "Macbeth", "Macbeth", "Macbeth", "Mac…
$ regexp <chr> " MACBETH\\.", " MACBETH\\.", " MACBETH\\.", " MACBETH\…
$ speaks <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
$ line <int> 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230,…
```
Before we create the plot, we will gather some helpful contextual information about when each Act begins.
```
acts <- tibble(
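  # note: inside square brackets, | is a literal character, so [I|V]+ matches
  # runs of I, V, or |; the simpler [IV]+ would match the same ACT headings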
line = str_which(macbeth, "^ACT [I|V]+"),
line_text = str_subset(macbeth, "^ACT [I|V]+"),
labels = str_extract(line_text, "^ACT [I|V]+")
)
```
Finally, Figure [19\.1](ch-text.html#fig:macbeth) illustrates how King Duncan of Scotland is killed early in Act II (never to speak again), with Banquo to follow in Act III.
Soon afterwards in Act IV,
Lady Macbeth—overcome by guilt over the role she played in Duncan’s murder—kills herself. The play and Act V conclude with a battle in which Macbeth is killed.
```
ggplot(data = speaker_freq, aes(x = line, y = speaks)) +
geom_smooth(
aes(color = name), method = "loess",
se = FALSE, span = 0.4
) +
geom_vline(
data = acts,
aes(xintercept = line),
color = "darkgray", lty = 3
) +
geom_text(
data = acts,
aes(y = 0.085, label = labels),
hjust = "left", color = "darkgray"
) +
ylim(c(0, NA)) +
xlab("Line Number") +
ylab("Proportion of Speeches") +
scale_color_brewer(palette = "Set2")
```
Figure 19\.1: Speaking parts for four major characters. Duncan is killed early in the play and never speaks again.
19\.2 Extended example: Analyzing textual data from arXiv.org
-------------------------------------------------------------
The [*arXiv*](https://en.wikipedia.org/w/index.php?search=arXiv) (pronounced “archive”) is a fast\-growing electronic repository of preprints of scientific papers from many disciplines.
The **aRxiv** package provides an application programming interface (API) to the files and metadata available on [the arXiv](https://www.arxiv.org).
We will use the 1,089 papers that matched the search term “`data science`” in the repository as of August, 2020 to try to better understand the discipline.
The following code was used to generate this file.
```
library(aRxiv)
DataSciencePapers <- arxiv_search(
query = '"Data Science"',
limit = 20000,
batchsize = 100
)
```
We have also included the resulting data frame `DataSciencePapers` in the **mdsr** package, so to use this selection of papers downloaded from the archive, you can simply load it (this will avoid unduly straining the arXiv server).
```
data(DataSciencePapers)
```
Note that there are two columns in this data set (`submitted` and `updated`) that are clearly storing dates, but they are stored as `character` vectors.
```
glimpse(DataSciencePapers)
```
```
Rows: 1,089
Columns: 15
$ id <chr> "astro-ph/0701361v1", "0901.2805v1", "0901.3118v2…
$ submitted <chr> "2007-01-12 03:28:11", "2009-01-19 10:38:33", "20…
$ updated <chr> "2007-01-12 03:28:11", "2009-01-19 10:38:33", "20…
$ title <chr> "How to Make the Dream Come True: The Astronomers…
$ abstract <chr> " Astronomy is one of the most data-intensive of…
$ authors <chr> "Ray P Norris", "Heinz Andernach", "O. V. Verkhod…
$ affiliations <chr> "", "", "Special Astrophysical Observatory, Nizhn…
$ link_abstract <chr> "http://arxiv.org/abs/astro-ph/0701361v1", "http:…
$ link_pdf <chr> "http://arxiv.org/pdf/astro-ph/0701361v1", "http:…
$ link_doi <chr> "", "http://dx.doi.org/10.2481/dsj.8.41", "http:/…
$ comment <chr> "Submitted to Data Science Journal Presented at C…
$ journal_ref <chr> "", "", "", "", "EPJ Data Science, 1:9, 2012", ""…
$ doi <chr> "", "10.2481/dsj.8.41", "10.2481/dsj.8.34", "", "…
$ primary_category <chr> "astro-ph", "astro-ph.IM", "astro-ph.IM", "astro-…
$ categories <chr> "astro-ph", "astro-ph.IM|astro-ph.CO", "astro-ph.…
```
To make sure that **R** understands those variables as dates, we will once again use the **lubridate** package (see Chapter [6](ch-dataII.html#ch:dataII)).
After this conversion, **R** can deal with these two columns as measurements of time.
```
library(lubridate)
DataSciencePapers <- DataSciencePapers %>%
mutate(
submitted = lubridate::ymd_hms(submitted),
updated = lubridate::ymd_hms(updated)
)
glimpse(DataSciencePapers)
```
```
Rows: 1,089
Columns: 15
$ id <chr> "astro-ph/0701361v1", "0901.2805v1", "0901.3118v2…
$ submitted <dttm> 2007-01-12 03:28:11, 2009-01-19 10:38:33, 2009-0…
$ updated <dttm> 2007-01-12 03:28:11, 2009-01-19 10:38:33, 2009-0…
$ title <chr> "How to Make the Dream Come True: The Astronomers…
$ abstract <chr> " Astronomy is one of the most data-intensive of…
$ authors <chr> "Ray P Norris", "Heinz Andernach", "O. V. Verkhod…
$ affiliations <chr> "", "", "Special Astrophysical Observatory, Nizhn…
$ link_abstract <chr> "http://arxiv.org/abs/astro-ph/0701361v1", "http:…
$ link_pdf <chr> "http://arxiv.org/pdf/astro-ph/0701361v1", "http:…
$ link_doi <chr> "", "http://dx.doi.org/10.2481/dsj.8.41", "http:/…
$ comment <chr> "Submitted to Data Science Journal Presented at C…
$ journal_ref <chr> "", "", "", "", "EPJ Data Science, 1:9, 2012", ""…
$ doi <chr> "", "10.2481/dsj.8.41", "10.2481/dsj.8.34", "", "…
$ primary_category <chr> "astro-ph", "astro-ph.IM", "astro-ph.IM", "astro-…
$ categories <chr> "astro-ph", "astro-ph.IM|astro-ph.CO", "astro-ph.…
```
We begin by examining the distribution of submission years.
How has interest grown in `data science`?
```
mosaic::tally(~ year(submitted), data = DataSciencePapers)
```
```
year(submitted)
2007 2009 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020
1 3 3 7 15 25 52 94 151 187 313 238
```
We see that the first paper was submitted in 2007, but that submissions have increased considerably since then.
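As a quick visual check (a sketch, not part of the original text), we could also plot the counts by year; all of the columns used here appear in the `glimpse()` output above.
```
# A hedged sketch: visualize the growth in submissions by year.
DataSciencePapers %>%
  mutate(year = year(submitted)) %>%
  count(year) %>%
  ggplot(aes(x = year, y = n)) +
  geom_col() +
  labs(x = "Year submitted", y = "Number of papers")
```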
Let’s take a closer look at one of the papers, in this case one that focuses on causal inference.
```
DataSciencePapers %>%
filter(id == "1809.02408v2") %>%
glimpse()
```
```
Rows: 1
Columns: 15
$ id <chr> "1809.02408v2"
$ submitted <dttm> 2018-09-07 11:26:51
$ updated <dttm> 2019-03-05 04:38:35
$ title <chr> "A Primer on Causality in Data Science"
$ abstract <chr> " Many questions in Data Science are fundamental…
$ authors <chr> "Hachem Saddiki|Laura B. Balzer"
$ affiliations <chr> ""
$ link_abstract <chr> "http://arxiv.org/abs/1809.02408v2"
$ link_pdf <chr> "http://arxiv.org/pdf/1809.02408v2"
$ link_doi <chr> ""
$ comment <chr> "26 pages (with references); 4 figures"
$ journal_ref <chr> ""
$ doi <chr> ""
$ primary_category <chr> "stat.AP"
$ categories <chr> "stat.AP|stat.ME|stat.ML"
```
We see that this is a primer on causality in data science that was submitted in 2018 and updated in 2019 with a primary category of `stat.AP`.
What fields are generating the most papers in our dataset?
A quick glance at the `primary_category` variable reveals a cryptic list of fields and sub\-fields starting alphabetically with astronomy.
```
DataSciencePapers %>%
group_by(primary_category) %>%
count() %>%
head()
```
```
# A tibble: 6 × 2
# Groups: primary_category [6]
primary_category n
<chr> <int>
1 astro-ph 1
2 astro-ph.CO 3
3 astro-ph.EP 1
4 astro-ph.GA 7
5 astro-ph.IM 20
6 astro-ph.SR 6
```
It may be more helpful to focus simply on the primary field (the part before the period).
We can use a regular expression to extract only the primary field, which may contain a dash (`-`), but otherwise is all lowercase characters.
Once we have this information extracted, we can `tally()` those primary fields.
```
DataSciencePapers <- DataSciencePapers %>%
mutate(
field = str_extract(primary_category, "^[a-z,-]+"),
)
mosaic::tally(x = ~field, margins = TRUE, data = DataSciencePapers) %>%
sort()
```
```
field
gr-qc hep-ph nucl-th hep-th econ quant-ph cond-mat q-fin
1 1 1 3 5 7 12 15
q-bio eess astro-ph physics math stat cs Total
16 29 38 62 103 150 646 1089
```
It appears that more than half (\\(646/1089 \= 59\\)%) of these papers come from computer science, while roughly one quarter come from mathematics and statistics.
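Those proportions can be verified directly (a sketch, not from the text).
```
# A hedged sketch: compute the share of papers by primary field.
DataSciencePapers %>%
  count(field) %>%
  mutate(prop = n / sum(n)) %>%
  arrange(desc(prop)) %>%
  head(3)
```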
### 19\.2\.1 Corpora
Text mining is often performed not just on one text document, but on a collection of many text documents, called a [*corpus*](https://en.wikipedia.org/w/index.php?search=corpus).
Can we use the arXiv.org papers to learn more about papers in data science?
The **tidytext** package provides a consistent and elegant approach to analyzing text data.
The `unnest_tokens()` function helps prepare data for text analysis.
It uses a [*tokenizer*](https://en.wikipedia.org/w/index.php?search=tokenizer) to split the text into tokens (by default, individual words).
By default the function maps characters to lowercase.
Here we use this function to count word frequencies for each of the papers (other options include N\-grams, lines, or sentences).
```
library(tidytext)
DataSciencePapers %>%
unnest_tokens(word, abstract) %>%
count(id, word, sort = TRUE)
```
```
# A tibble: 120,330 × 3
id word n
<chr> <chr> <int>
1 2003.11213v1 the 31
2 1508.02387v1 the 30
3 1711.10558v1 the 30
4 1805.09320v2 the 30
5 2004.04813v2 the 27
6 2007.08242v1 the 27
7 1711.09726v3 the 26
8 1805.11012v1 the 26
9 1909.10578v1 the 26
10 1404.5971v2 the 25
# … with 120,320 more rows
```
We see that the word `the` is the most common word in many abstracts.
This is not a particularly helpful insight.
It’s a common practice to exclude [*stop words*](https://en.wikipedia.org/w/index.php?search=stop%20words) such as `a`, `the`, and `you`.
The `get_stopwords()` function from the **tidytext** package uses the **stopwords** package to facilitate this task.
Let’s try again.
```
arxiv_words <- DataSciencePapers %>%
unnest_tokens(word, abstract) %>%
anti_join(get_stopwords(), by = "word")
arxiv_words %>%
count(id, word, sort = TRUE)
```
```
# A tibble: 93,559 × 3
id word n
<chr> <chr> <int>
1 2007.03606v1 data 20
2 1708.04664v1 data 19
3 1606.06769v1 traffic 17
4 1705.03451v2 data 17
5 1601.06035v1 models 16
6 1807.09127v2 job 16
7 2003.10534v1 data 16
8 1611.09874v1 ii 15
9 1808.04849v1 data 15
10 1906.03418v1 data 15
# … with 93,549 more rows
```
We now see that the word `data` is, not surprisingly, the most common non\-stop word in many of the abstracts.
It is convenient to save a variable (`abstract_clean`) with the abstract after removing stopwords and mapping all characters to lowercase.
```
arxiv_abstracts <- arxiv_words %>%
group_by(id) %>%
summarize(abstract_clean = paste(word, collapse = " "))
arxiv_papers <- DataSciencePapers %>%
left_join(arxiv_abstracts, by = "id")
```
We can now see the before and after for the first part of the abstract of our previously selected paper.
```
single_paper <- arxiv_papers %>%
filter(id == "1809.02408v2")
single_paper %>%
pull(abstract) %>%
strwrap() %>%
head()
```
```
[1] "Many questions in Data Science are fundamentally causal in that our"
[2] "objective is to learn the effect of some exposure, randomized or"
[3] "not, on an outcome interest. Even studies that are seemingly"
[4] "non-causal, such as those with the goal of prediction or prevalence"
[5] "estimation, have causal elements, including differential censoring"
[6] "or measurement. As a result, we, as Data Scientists, need to"
```
```
single_paper %>%
pull(abstract_clean) %>%
strwrap() %>%
head(4)
```
```
[1] "many questions data science fundamentally causal objective learn"
[2] "effect exposure randomized outcome interest even studies seemingly"
[3] "non causal goal prediction prevalence estimation causal elements"
[4] "including differential censoring measurement result data scientists"
```
### 19\.2\.2 Word clouds
At this stage, we have taken what was a coherent English abstract and reduced it to a collection of individual, non\-trivial English words.
We have transformed something that was easy for humans to read into *data*.
Unfortunately, it is not obvious how we can learn from these data.
One rudimentary approach is to construct a [*word cloud*](https://en.wikipedia.org/w/index.php?search=word%20cloud)—a kind of multivariate histogram for words. The **wordcloud** package can generate these graphical depictions of word frequencies.
```
library(wordcloud)
set.seed(1966)
arxiv_papers %>%
pull(abstract_clean) %>%
wordcloud(
max.words = 40,
scale = c(8, 1),
colors = topo.colors(n = 30),
random.color = TRUE
)
```
Figure 19\.2: A word cloud of terms that appear in the abstracts of arXiv papers on data science.
Although word clouds such as the one shown in Figure [19\.2](ch-text.html#fig:wordcloud1) have limited abilities to convey meaning, they can be useful for quickly visualizing the prevalence of words in large corpora.
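When more precision is needed, a simple bar chart of the most frequent words conveys the same information more exactly (a sketch, not from the text).
```
# A hedged sketch: the 20 most common non-stop words as a bar chart.
arxiv_words %>%
  count(word, sort = TRUE) %>%
  head(20) %>%
  ggplot(aes(x = n, y = reorder(word, n))) +
  geom_col() +
  labs(x = "Number of occurrences", y = NULL)
```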
### 19\.2\.3 Sentiment analysis
Can we start to automate a process to discern some meaning from the text?
The use of [*sentiment analysis*](https://en.wikipedia.org/w/index.php?search=sentiment%20analysis) is a simplistic but straightforward way to begin.
A [*lexicon*](https://en.wikipedia.org/w/index.php?search=lexicon) is a word list with associated sentiments (e.g., positivity, negativity) that have been labeled.
A number of lexicons have been created with such tags.
Here is a sample of sentiment scores for one lexicon.
```
afinn <- get_sentiments("afinn")
afinn %>%
slice_sample(n = 15) %>%
arrange(desc(value))
```
```
# A tibble: 15 × 2
word value
<chr> <dbl>
1 impress 3
2 joyfully 3
3 advantage 2
4 faith 1
5 grant 1
6 laugh 1
7 apologise -1
8 lurk -1
9 ghost -1
10 deriding -2
11 detention -2
12 dirtiest -2
13 embarrassment -2
14 mocks -2
15 mournful -2
```
For the AFINN (Nielsen 2011\) lexicon, each word is associated with an integer value, ranging from \\(\-5\\) to 5\.
We can join this lexicon with our data to calculate a sentiment score.
```
arxiv_words %>%
inner_join(afinn, by = "word") %>%
select(word, id, value)
```
```
# A tibble: 7,393 × 3
word id value
<chr> <chr> <dbl>
1 ambitious astro-ph/0701361v1 2
2 powerful astro-ph/0701361v1 2
3 impotent astro-ph/0701361v1 -2
4 like astro-ph/0701361v1 2
5 agree astro-ph/0701361v1 1
6 better 0901.2805v1 2
7 better 0901.2805v1 2
8 better 0901.2805v1 2
9 improve 0901.2805v1 2
10 support 0901.3118v2 2
# … with 7,383 more rows
```
```
arxiv_sentiments <- arxiv_words %>%
left_join(afinn, by = "word") %>%
group_by(id) %>%
summarize(
num_words = n(),
sentiment = sum(value, na.rm = TRUE),
.groups = "drop"
) %>%
mutate(sentiment_per_word = sentiment / num_words) %>%
arrange(desc(sentiment))
```
Here we used `left_join()` to ensure that if no words in the abstract matched words in the lexicon, we will still have something to sum (in this case a number of NA’s, which sum to 0\).
We can now add this new variable to our dataset of papers.
```
arxiv_papers <- arxiv_papers %>%
left_join(arxiv_sentiments, by = "id")
arxiv_papers %>%
skim(sentiment, sentiment_per_word)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75
1 sentiment 1089 0 4.02 7.00 -26 0 4 8
2 sentiment_per_word 1089 0 0.0360 0.0633 -0.227 0 0.0347 0.0714
p100
1 39
2 0.333
```
The average sentiment score of these papers is 4, but they range from \\(\-26\\) to 39\.
Surely, abstracts with more words might accrue a higher sentiment score.
We can control for abstract length by dividing by the number of words.
The paper with the highest sentiment score per word had a score of 0\.333\.
Let’s take a closer look at the most positive abstract.
```
most_positive <- arxiv_papers %>%
filter(sentiment_per_word == max(sentiment_per_word)) %>%
pull(abstract)
strwrap(most_positive)
```
```
[1] "Data science is creating very exciting trends as well as"
[2] "significant controversy. A critical matter for the healthy"
[3] "development of data science in its early stages is to deeply"
[4] "understand the nature of data and data science, and to discuss the"
[5] "various pitfalls. These important issues motivate the discussions"
[6] "in this article."
```
We see a number of positive words (e.g., “exciting,” “significant,” “important”) included in this upbeat abstract.
We can also explore if there are time trends or differences between different disciplines (see Figure [19\.3](ch-text.html#fig:arxiv-papers)).
```
ggplot(
arxiv_papers,
aes(
x = submitted, y = sentiment_per_word,
color = field == "cs"
)
) +
geom_smooth(se = TRUE) +
scale_color_brewer("Computer Science?", palette = "Set2") +
labs(x = "Date submitted", y = "Sentiment score per word")
```
Figure 19\.3: Average sentiment score per word over time by field.
There’s mild evidence for a downward trend over time.
Computer science papers have slightly higher sentiment, but the difference is modest.
### 19\.2\.4 Bigrams and N\-grams
We can also start to explore more sophisticated patterns within our corpus.
An [*N\-gram*](https://en.wikipedia.org/w/index.php?search=N-gram) is a contiguous sequence of \\(n\\) “words.”
Thus, a \\(1\\)\-gram is a single word (e.g., “text”), while a 2\-gram ([*bigram*](https://en.wikipedia.org/w/index.php?search=bigram)) is a pair of words (e.g. “text mining”).
We can use the same techniques to identify the most common pairs of words.
```
arxiv_bigrams <- arxiv_papers %>%
unnest_tokens(
arxiv_bigram,
abstract_clean,
token = "ngrams",
n = 2
) %>%
select(arxiv_bigram, id)
arxiv_bigrams
```
```
# A tibble: 121,454 × 2
arxiv_bigram id
<chr> <chr>
1 astronomy one astro-ph/0701361v1
2 one data astro-ph/0701361v1
3 data intensive astro-ph/0701361v1
4 intensive sciences astro-ph/0701361v1
5 sciences data astro-ph/0701361v1
6 data technology astro-ph/0701361v1
7 technology accelerating astro-ph/0701361v1
8 accelerating quality astro-ph/0701361v1
9 quality effectiveness astro-ph/0701361v1
10 effectiveness research astro-ph/0701361v1
# … with 121,444 more rows
```
```
arxiv_bigrams %>%
count(arxiv_bigram, sort = TRUE)
```
```
# A tibble: 96,822 × 2
arxiv_bigram n
<chr> <int>
1 data science 953
2 machine learning 403
3 big data 139
4 state art 121
5 data analysis 111
6 deep learning 108
7 neural networks 100
8 real world 97
9 large scale 83
10 data driven 80
# … with 96,812 more rows
```
Not surprisingly, `data science` is the most common bigram.
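The same call generalizes to longer N\-grams. For example, setting `n = 3` in `unnest_tokens()` yields trigrams (a sketch, not from the text).
```
# A hedged sketch: count the most common trigrams in the cleaned abstracts.
arxiv_papers %>%
  unnest_tokens(arxiv_trigram, abstract_clean, token = "ngrams", n = 3) %>%
  count(arxiv_trigram, sort = TRUE) %>%
  head()
```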
### 19\.2\.5 Document term matrices
Another important technique in text mining involves the calculation of a [*term frequency\-inverse document frequency*](https://en.wikipedia.org/w/index.php?search=term%20frequency-inverse%20document%20frequency) ([*tf\-idf*](https://en.wikipedia.org/w/index.php?search=tf-idf)), or [*document term matrix*](https://en.wikipedia.org/w/index.php?search=document%20term%20matrix).
The term frequency of a term \\(t\\) in a document \\(d\\) is denoted \\(tf(t,d)\\) and is simply equal to the number of times that the term \\(t\\) appears in document \\(d\\) divided by the number of words in the document.
On the other hand, the inverse document frequency measures the prevalence of a term across a set of documents \\(D\\).
In particular,
\\\[
idf(t, D) \= \\log \\frac{\|D\|}{\|\\{d \\in D: t \\in d\\}\|} \\,.
\\]
Finally, \\(tf\\\_idf(t,d,D) \= tf(t,d) \\cdot idf(t, D)\\).
The \\(tf\\\_idf\\) is commonly used in search engines, when the relevance of a particular word is needed across a body of documents.
Note that unless they are excluded (as we have done above), commonly\-used words like `the` will appear in every document.
Their inverse document frequency score will therefore be zero, and so their \\(tf\\\_idf\\) will also be zero regardless of the term frequency.
This is a desired result, since words like `the` are never important in full\-text searches.
Rather, documents with high \\(tf\\\_idf\\) scores for a particular term will contain that particular term many times relative to its appearance across many documents.
Such documents are likely to be more relevant to the search term being used.
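To make these definitions concrete, consider a tiny toy corpus (a hypothetical example, not from the text); the `bind_tf_idf()` function from **tidytext** computes all three quantities.
```
# A hedged toy example with three one-sentence "documents".
toy <- tibble(
  doc = c("d1", "d2", "d3"),
  text = c(
    "data science is fun",
    "data wrangling is messy",
    "science needs data"
  )
)
toy %>%
  unnest_tokens(word, text) %>%
  count(doc, word) %>%
  bind_tf_idf(word, doc, n)
# "data" appears in all three documents, so its idf is log(3/3) = 0 and its
# tf_idf is 0; rarer words such as "fun" or "messy" score higher.
```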
The most commonly\-used words in our corpora are listed below.
Not surprisingly “data” and “science” are at the top of the list.
```
arxiv_words %>%
count(word) %>%
arrange(desc(n)) %>%
head()
```
```
# A tibble: 6 × 2
word n
<chr> <int>
1 data 3222
2 science 1122
3 learning 804
4 can 731
5 model 540
6 analysis 488
```
However, the term frequency metric is calculated on a per word, per document basis.
It answers the question of which abstracts use a word most often.
```
tidy_DTM <- arxiv_words %>%
count(id, word) %>%
bind_tf_idf(word, id, n)
tidy_DTM %>%
arrange(desc(tf)) %>%
head()
```
```
# A tibble: 6 × 6
id word n tf idf tf_idf
<chr> <chr> <int> <dbl> <dbl> <dbl>
1 2007.03606v1 data 20 0.169 0.128 0.0217
2 1707.07029v1 concept 1 0.167 3.30 0.551
3 1707.07029v1 data 1 0.167 0.128 0.0214
4 1707.07029v1 implications 1 0.167 3.77 0.629
5 1707.07029v1 reflections 1 0.167 6.30 1.05
6 1707.07029v1 science 1 0.167 0.408 0.0680
```
We see that among all terms in all papers, “data” has the highest term frequency for paper `2007.03606v1` (0\.169\).
Nearly 17% of the non\-stopwords in this paper's abstract were “data.”
However, as we saw above, since “data” is the most common word in the entire corpus, it has the *lowest* inverse document frequency (0\.128\).
The `tf_idf` score for “data” in paper `2007.03606v1` is thus \\(0\.169 \\cdot 0\.128 \= 0\.022\\).
This is not a particularly large value, so a search for “data” would not bring this paper to the top of the list.
```
tidy_DTM %>%
arrange(desc(idf), desc(n)) %>%
head()
```
```
# A tibble: 6 × 6
id word n tf idf tf_idf
<chr> <chr> <int> <dbl> <dbl> <dbl>
1 1507.00333v3 mf 14 0.107 6.99 0.747
2 1611.09874v1 fe 13 0.0549 6.99 0.384
3 1611.09874v1 mg 11 0.0464 6.99 0.325
4 2003.00646v1 wildfire 10 0.0518 6.99 0.362
5 1506.08903v7 ph 9 0.0703 6.99 0.492
6 1710.06905v1 homeless 9 0.0559 6.99 0.391
```
On the other hand, “wildfire” has a high `idf` score since it is included in only one abstract (though it is used 10 times).
```
arxiv_papers %>%
pull(abstract) %>%
str_subset("wildfire") %>%
strwrap() %>%
head()
```
```
[1] "Artificial intelligence has been applied in wildfire science and"
[2] "management since the 1990s, with early applications including"
[3] "neural networks and expert systems. Since then the field has"
[4] "rapidly progressed congruently with the wide adoption of machine"
[5] "learning (ML) in the environmental sciences. Here, we present a"
[6] "scoping review of ML in wildfire science and management. Our"
```
In contrast, “implications” appears in 25 abstracts.
```
tidy_DTM %>%
filter(word == "implications")
```
```
# A tibble: 25 × 6
id word n tf idf tf_idf
<chr> <chr> <int> <dbl> <dbl> <dbl>
1 1310.4461v2 implications 1 0.00840 3.77 0.0317
2 1410.6646v1 implications 1 0.00719 3.77 0.0272
3 1511.07643v1 implications 1 0.00621 3.77 0.0234
4 1601.04890v2 implications 1 0.00680 3.77 0.0257
5 1608.05127v1 implications 1 0.00595 3.77 0.0225
6 1706.03102v1 implications 1 0.00862 3.77 0.0325
7 1707.07029v1 implications 1 0.167 3.77 0.629
8 1711.04712v1 implications 1 0.00901 3.77 0.0340
9 1803.05991v1 implications 1 0.00595 3.77 0.0225
10 1804.10846v6 implications 1 0.00909 3.77 0.0343
# … with 15 more rows
```
The `tf_idf` field can be used to help identify keywords for an article.
For our previously selected paper, “causal,” “exposure,” or “question” would be good choices.
```
tidy_DTM %>%
filter(id == "1809.02408v2") %>%
arrange(desc(tf_idf)) %>%
head()
```
```
# A tibble: 6 × 6
id word n tf idf tf_idf
<chr> <chr> <int> <dbl> <dbl> <dbl>
1 1809.02408v2 causal 10 0.0775 4.10 0.318
2 1809.02408v2 exposure 2 0.0155 5.38 0.0835
3 1809.02408v2 question 3 0.0233 3.23 0.0752
4 1809.02408v2 roadmap 2 0.0155 4.80 0.0744
5 1809.02408v2 parametric 2 0.0155 4.16 0.0645
6 1809.02408v2 effect 2 0.0155 3.95 0.0612
```
A search for “covid” yields several papers that address the pandemic directly.
```
tidy_DTM %>%
filter(word == "covid") %>%
arrange(desc(tf_idf)) %>%
head() %>%
left_join(select(arxiv_papers, id, abstract), by = "id")
```
```
# A tibble: 6 × 7
id word n tf idf tf_idf abstract
<chr> <chr> <int> <dbl> <dbl> <dbl> <chr>
1 2006.00… covid 10 0.0637 4.80 0.305 " Context: The dire consequence…
2 2004.09… covid 5 0.0391 4.80 0.187 " The Covid-19 outbreak, beyond…
3 2003.08… covid 3 0.0246 4.80 0.118 " The relative case fatality ra…
4 2006.01… covid 3 0.0222 4.80 0.107 " This document analyzes the ro…
5 2003.12… covid 3 0.0217 4.80 0.104 " The COVID-19 pandemic demands…
6 2006.05… covid 3 0.0170 4.80 0.0817 " This paper aims at providing …
```
The (document, term) pair with the highest overall `tf_idf` is “reflections” (a rarely\-used word having a high `idf` score), in a paper that includes only six non\-stopwords in its abstract.
Note that “implications” and “society” also garner high `tf_idf` scores for that same paper.
```
tidy_DTM %>%
arrange(desc(tf_idf)) %>%
head() %>%
left_join(select(arxiv_papers, id, abstract), by = "id")
```
```
# A tibble: 6 × 7
id word n tf idf tf_idf abstract
<chr> <chr> <int> <dbl> <dbl> <dbl> <chr>
1 1707.07… reflec… 1 0.167 6.30 1.05 " Reflections on the Concept …
2 2007.12… fintech 8 0.123 6.99 0.861 " Smart FinTech has emerged a…
3 1507.00… mf 14 0.107 6.99 0.747 " Low-rank matrix factorizati…
4 1707.07… implic… 1 0.167 3.77 0.629 " Reflections on the Concept …
5 1707.07… society 1 0.167 3.70 0.616 " Reflections on the Concept …
6 1906.04… utv 8 0.0860 6.99 0.602 " In this work, a novel rank-…
```
The `cast_dtm()` function can be used to create a document term matrix.
```
tm_DTM <- arxiv_words %>%
count(id, word) %>%
cast_dtm(id, word, n, weighting = tm::weightTfIdf)
tm_DTM
```
```
<<DocumentTermMatrix (documents: 1089, terms: 12317)>>
Non-/sparse entries: 93559/13319654
Sparsity : 99%
Maximal term length: 37
Weighting : term frequency - inverse document frequency (normalized) (tf-idf)
```
By default, each entry in that matrix records the [*term frequency*](https://en.wikipedia.org/w/index.php?search=term%20frequency) (i.e., the number of times that each word appeared in each document).
However, in this case we will specify that the entries record the normalized \\(tf\\\_idf\\) as defined above.
Note that the `tm_DTM` matrix is very sparse—99% of the entries are 0\.
This makes sense, since most words do not appear in most documents (abstracts, in our example).
We can now use tools from other packages (e.g., **tm**) to explore associations.
For example, applying the `findFreqTerms()` function to `tm_DTM` finds the words with the highest \\(tf\\\_idf\\) scores.
Note how these results differ from the word cloud in Figure [19\.2](ch-text.html#fig:wordcloud1).
By term frequency, the word `data` is by far the most common, but this gives it a low \\(idf\\) score that brings down its \\(tf\\\_idf\\).
```
tm::findFreqTerms(tm_DTM, lowfreq = 7)
```
```
[1] "analysis" "information" "research" "learning" "time"
[6] "network" "problem" "can" "algorithm" "algorithms"
[11] "based" "methods" "model" "models" "machine"
```
Since `tm_DTM` contains all of the \\(tf\\\_idf\\) scores for each word, we can extract those values and calculate the score of each word across all of the abstracts.
```
tm_DTM %>%
as.matrix() %>%
as_tibble() %>%
map_dbl(sum) %>%
sort(decreasing = TRUE) %>%
head()
```
```
learning model models machine analysis algorithms
10.10 9.30 8.81 8.04 7.84 7.72
```
Moreover, we can identify which terms tend to show up in the same documents as the word `causal` using the `findAssocs()` function.
In this case, we explore the words that have a correlation of at least 0\.35 with the term `causal`.
```
tm::findAssocs(tm_DTM, terms = "causal", corlimit = 0.35)
```
```
$causal
estimand laan petersen stating tmle exposure der
0.57 0.57 0.57 0.57 0.57 0.39 0.38
censoring gave
0.35 0.35
```
19\.3 Ingesting text
--------------------
In Chapter [6](ch-dataII.html#ch:dataII) (see Section [6\.4\.1\.2](ch-dataII.html#sec:htmltab))
we illustrated how the **rvest** package can be used to convert tabular data presented on the Web in HTML format into a proper **R** data table. Here, we present another example of how this process can bring text data into **R**.
### 19\.3\.1 Example: Scraping the songs of the Beatles
In Chapter [14](ch-vizIII.html#ch:vizIII), we explored the popularity of the names for the four members of the [*Beatles*](https://en.wikipedia.org/w/index.php?search=Beatles). During their heyday from 1962–1970, the Beatles were prolific—recording hundreds of songs.
In this example, we explore who sang lead vocals on each song and what words were included in the song titles.
We begin by downloading the contents of [the Wikipedia page that lists the Beatles’ songs](http://en.wikipedia.org/wiki/List_of_songs_recorded_by_the_Beatles).
```
library(rvest)
url <- "http://en.wikipedia.org/wiki/List_of_songs_recorded_by_the_Beatles"
tables <- url %>%
read_html() %>%
html_nodes("table")
Beatles_songs <- tables %>%
purrr::pluck(3) %>%
html_table(fill = TRUE) %>%
janitor::clean_names() %>%
select(song, lead_vocal_s_d)
glimpse(Beatles_songs)
```
```
Rows: 213
Columns: 2
$ song <chr> "\"Across the Universe\"[e]", "\"Act Naturally\"", …
$ lead_vocal_s_d <chr> "John Lennon", "Ringo Starr", "Lennon", "Paul McCar…
```
We need to clean these data a bit.
Note that the `song` variable contains quotation marks.
The `lead_vocal_s_d` variable would benefit from being renamed.
```
Beatles_songs <- Beatles_songs %>%
mutate(song = str_remove_all(song, pattern = '\\"')) %>%
rename(vocals = lead_vocal_s_d)
```
Most of the Beatles’ songs were sung by some combination of [John Lennon](https://en.wikipedia.org/w/index.php?search=John%20Lennon) and [Paul McCartney](https://en.wikipedia.org/w/index.php?search=Paul%20McCartney).
While their productive but occasionally contentious working relationship is well\-documented, we might be interested in determining how many songs each person is credited with singing.
```
Beatles_songs %>%
group_by(vocals) %>%
count() %>%
arrange(desc(n))
```
```
# A tibble: 18 × 2
# Groups: vocals [18]
vocals n
<chr> <int>
1 Lennon 66
2 McCartney 60
3 Harrison 28
4 LennonMcCartney 15
5 Lennon(with McCartney) 12
6 Starr 10
7 McCartney(with Lennon) 9
8 Lennon(with McCartneyand Harrison) 3
9 Instrumental 1
10 John Lennon 1
11 Lennon(with Yoko Ono) 1
12 LennonHarrison 1
13 LennonMcCartneyHarrison 1
14 McCartney(with Lennon,Harrison,and Starr) 1
15 McCartneyLennonHarrison 1
16 Paul McCartney 1
17 Ringo Starr 1
18 Sound Collage 1
```
Lennon and McCartney sang separately and together.
Other band members (notably [Ringo Starr](https://en.wikipedia.org/w/index.php?search=Ringo%20Starr) and [George Harrison](https://en.wikipedia.org/w/index.php?search=George%20Harrison)) also sang, along with many rarer combinations.
Regular expressions can help us parse these data.
We already saw the number of songs sung by each person individually, and it isn't hard to figure out the number of songs to which each person contributed vocals in some form.
```
Beatles_songs %>%
pull(vocals) %>%
str_subset("McCartney") %>%
length()
```
```
[1] 103
```
```
Beatles_songs %>%
pull(vocals) %>%
str_subset("Lennon") %>%
length()
```
```
[1] 111
```
John was credited with singing on more songs than Paul.
How many of these songs were the product of some type of Lennon\-McCartney collaboration?
Given the inconsistency in how the vocals are attributed, it requires some ingenuity to extract these data.
We can search the `vocals` variable for either `McCartney` or `Lennon` (or both), and count these instances.
```
Beatles_songs %>%
pull(vocals) %>%
str_subset("(McCartney|Lennon)") %>%
length()
```
```
[1] 172
```
At this point, we need another regular expression to figure out how many songs they both sang on.
The following will find the pattern consisting of either `McCartney` or `Lennon`, followed by a possibly empty string of characters, followed by another instance of either `McCartney` or `Lennon`.
```
pj_regexp <- "(McCartney|Lennon).*(McCartney|Lennon)"
Beatles_songs %>%
pull(vocals) %>%
str_subset(pj_regexp) %>%
length()
```
```
[1] 42
```
Note also that we can use `str_detect()` in a `filter()` command to retrieve the list of songs upon which Lennon and McCartney both sang.
```
Beatles_songs %>%
filter(str_detect(vocals, pj_regexp)) %>%
select(song, vocals) %>%
head()
```
```
# A tibble: 6 × 2
song vocals
<chr> <chr>
1 All Together Now McCartney(with Lennon)
2 Any Time at All Lennon(with McCartney)
3 Baby's in Black LennonMcCartney
4 Because LennonMcCartneyHarrison
5 Birthday McCartney(with Lennon)
6 Carry That Weight McCartney(with Lennon,Harrison,and Starr)
```
The Beatles have had such a profound influence upon musicians of all stripes that it might be worth investigating the titles of their songs.
What were they singing about?
```
Beatles_songs %>%
unnest_tokens(word, song) %>%
anti_join(get_stopwords(), by = "word") %>%
count(word, sort = TRUE) %>%
arrange(desc(n)) %>%
head()
```
```
# A tibble: 6 × 2
word n
<chr> <int>
1 love 9
2 want 7
3 got 6
4 hey 6
5 long 6
6 baby 4
```
Fittingly, “Love” is the most common word in the title of Beatles songs.
19\.4 Further resources
-----------------------
Silge and Robinson’s [*Tidy Text Mining in R*](https://github.com/dgrtwo/tidy-text-mining) book
has an extensive set of examples of text mining and sentiment analysis (Silge and Robinson 2017, 2016\).
[Emil Hvitfeldt](https://en.wikipedia.org/w/index.php?search=Emil%20Hvitfeldt) and [Julia Silge](https://en.wikipedia.org/w/index.php?search=Julia%20Silge) have [announced](https://www.hvitfeldt.me/blog/smltar-announcement/) a tidy approach to supervised machine learning for text analysis.
Text analytics has a rich history of being used to infer authorship of the Federalist papers (Mosteller and Wallace 1963\) and Beatles songs (Glickman, Brown, and Song 2019\).
Google has collected \\(n\\)\-grams for a huge number of books and provides an [interface](https://books.google.com/ngrams) to these data.
[Wikipedia](http://en.wikipedia.org/wiki/Regular_expression) provides a clear overview of syntax for sophisticated pattern\-matching within strings using regular expressions.
There are many sources to find text data online.
[Project Gutenberg](http://www.gutenberg.org/wiki/Main_Page) is a massive free online library that collects the full\-text of more than 50,000 books whose copyrights have expired. It is great for older, classic books. You won't find anything by [Stephen King](https://en.wikipedia.org/w/index.php?search=Stephen%20King) (but there
is one by [Stephen King\-Hall](https://en.wikipedia.org/w/index.php?search=Stephen%20King-Hall)). Direct access to Project Gutenberg is available in **R** through the **gutenbergr** package.
The **tidytext** and **textdata** packages support other lexicons for sentiment analysis, including “bing,” “nrc,” and “loughran.”
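For instance, the same `get_sentiments()` function used above for AFINN retrieves these other lexicons as well (a sketch, not from the text; some lexicons are distributed via **textdata** and may prompt for a one\-time download).
```
# A hedged sketch: the "bing" lexicon labels words as positive or negative
# rather than assigning a numeric score.
get_sentiments("bing") %>%
  head()
```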
19\.5 Exercises
---------------
**Problem 1 (Easy)**: Use the `Macbeth_raw` data from the `mdsr` package to answer the following questions:
1. Speaking lines in Shakespeare’s plays are identified by a line that starts with two spaces, then a string of capital letters and spaces (the character’s name) followed by a period. Use `grep` to find all of the speaking lines in *Macbeth*. How many are there?
2. Find all the hyphenated words in *Macbeth*.
**Problem 2 (Easy)**:
1. Find all of the adjectives in *Macbeth* that end in *more* or *less* using `Macbeth_raw` in `mdsr`.
2. Find all of the lines containing the stage direction *Exit* or *Exeunt* in *Macbeth*.
**Problem 3 (Easy)**: Given the vector of words below, determine the output of the following regular expressions without running the R code.
```
x <- c(
"popular", "popularity", "popularize", "popularise",
"Popular", "Population", "repopulate", "reproduce",
"happy family", "happier\tfamily", " happy family", "P6dn"
)
x
```
```
[1] "popular" "popularity" "popularize" "popularise"
[5] "Popular" "Population" "repopulate" "reproduce"
[9] "happy family" "happier\tfamily" " happy family" "P6dn"
```
```
str_subset(x, pattern = "pop") #1
str_detect(x, pattern = "^pop") #2
str_detect(x, pattern = "populari[sz]e") #3
str_detect(x, pattern = "pop.*e") #4
str_detect(x, pattern = "p[a-z]*e") #5
str_detect(x, pattern = "^[Pp][a-z]+.*n") #6
str_subset(x, pattern = "^[^Pp]") #7
str_detect(x, pattern = "^[A-Za-p]") #8
str_detect(x, pattern = "[ ]") #9
str_subset(x, pattern = "[\t]") #10
str_detect(x, pattern = "[ \t]") #11
str_subset(x, pattern = "^[ ]") #12
```
**Problem 4 (Easy)**: Use the `babynames` data table from the `babynames` package to find the 10 most popular:
1. Boys’ names ending in a vowel.
2. Names ending with `joe`, `jo`, `Joe`, or `Jo` (e.g., *Billyjoe*).
**Problem 5 (Easy)**: Wikipedia defines a hashtag as “a type of metadata tag used on social networks such as Twitter and other microblogging services, allowing users to apply dynamic, user\-generated tagging which makes it possible for others to easily find messages with a specific theme or content.” A hashtag must begin with a hash character followed by other characters, and is terminated by a space or the end of the message. It is always safe to precede the \# with a space, and to include letters without diacritics (e.g., accents), digits, and underscores. Provide a regular expression that detects whether a string contains a valid hashtag.
```
strings <- c(
"This string has no hashtags",
"#hashtag city!",
"This string has a #hashtag",
"This string has #two #hashtags"
)
```
**Problem 6 (Easy)**: A ZIP (Zone Improvement Plan) code is a code used by the United States Postal Service to route mail. The ZIP\+4 code includes the five digits of the ZIP code, followed by a hyphen and four digits that designate a more specific location. Provide a regular expression that matches strings that consist of a ZIP\+4 code.
**Problem 7 (Medium)**: Create a DTM (document term matrix) for the collection of Emily Dickinson’s poems in the `DickinsonPoems` package. Find the terms with the highest *tf.idf* scores. Choose one of these terms and find any of its strongly correlated terms.
```
# remotes::install_github("Amherst-Statistics/DickinsonPoems")
```
**Problem 8 (Medium)**: A text analytics project is using scanned data to create a corpus.
Many of the lines have been hyphenated in the original text.
```
text_lines <- tibble(
lines = c("This is the first line.",
"This line is hyphen- ",
"ated. It's very diff-",
"icult to use at present.")
)
```
Write a function that can be used to remove the hyphens and concatenate the parts of the words that are split on the line where they first appeared.
**Problem 9 (Medium)**: Find all titles of Emily Dickinson’s poems (not including the Roman numerals) in the first 10 poems of the `DickinsonPoems` package.
(Hint: the titles are all caps.)
**Problem 10 (Medium)**:
Classify Emily Dickinson’s poem *The Lonely House* as either positive or negative using the `AFINN` lexicon. Does this match with your own interpretation of the poem? Use the `DickinsonPoems` package.
```
library(DickinsonPoems)
poem <- get_poem("gutenberg1.txt014")
```
**Problem 11 (Medium)**: Generate a regular expression to return the second word in a vector.
```
x <- c("one two three", "four five six", "SEVEN EIGHT")
```
When applied to vector x, the result should be:
```
[1] "two" "five" "EIGHT"
```
**Problem 12 (Hard)**: The `pdxTrees_parks` dataset from the `pdxTrees` package contains information on thousands of trees in the Portland, Oregon area. Using the `species_factoid` variable, investigate any interesting trends within the facts.
19\.6 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/text.html\#text\-online\-exercises](https://mdsr-book.github.io/mdsr2e/text.html#text-online-exercises)
**Problem 1 (Medium)**:
1. The site <stackexchange.com> displays questions and answers on technical topics. The following code downloads the most recent R questions related to the `dplyr` package.
```
library(httr)
# Find the most recent R questions on stackoverflow
getresult <- GET("http://api.stackexchange.com",
path = "questions",
query = list(site = "stackoverflow.com", tagged = "dplyr")
)
# Ensure returned without error
stop_for_status(getresult)
```
```
questions <- httr::content(getresult) # Grab content
names(questions$items[[1]]) # What does the returned data look like?
```
```
[1] "tags" "owner" "is_answered"
[4] "view_count" "answer_count" "score"
[7] "last_activity_date" "creation_date" "last_edit_date"
[10] "question_id" "content_license" "link"
[13] "title"
```
```
length(questions$items)
```
```
[1] 30
```
```
substr(questions$items[[1]]$title, 1, 68)
```
```
[1] "How to loop distance calculations for multiple instances using dplyr"
```
```
substr(questions$items[[2]]$title, 1, 68)
```
```
[1] "Transform to wide format from long in R"
```
```
substr(questions$items[[3]]$title, 1, 68)
```
```
[1] "filter row when multiple colums can be concerned"
```
How many questions were returned? Without using jargon, describe in words what is being displayed and how it might be used.
2. Repeat the process of downloading the content from <stackexchange.com> related to
the `dplyr` package and summarize the results.
**Problem 2 (Medium)**:
1. Use regular expressions to determine the number of speaking lines in [The Complete Works of William Shakespeare](http://www.gutenberg.org/cache/epub/100/pg100.txt). Here, we care only about how many times a character speaks—not what they say or for how long they speak.
2. Make a bar chart displaying the top 100 characters with the greatest number of lines. **Hint**: you may want to use either the `stringr::str_extract()` or `strsplit()` function here.
3. In this problem, you will do much of the work to recreate Mark Hansen's *Shakespeare Machine*. Start by watching a [video clip](http://vimeo.com/54858820) of the exhibit. Use *The Complete Works of William Shakespeare* and regular expressions to find all of the hyphenated words in the *Shakespeare Machine*. How many are there? Use `%in%` to verify that your list contains the following hyphenated words pictured at 00:46 of the clip.
**Problem 3 (Hard)**: Given the dataframe of Emily Dickinson poems in the `DickinsonPoems` package, perform sentiment analysis and identify any interesting insights about her work overall.
```
library(tidyverse)
# remotes::install_github("Amherst-Statistics/DickinsonPoems")
library(DickinsonPoems)
library(tidytext)
poems_df <- list_poems() %>%
purrr::map(get_poem) %>%
unlist() %>%
enframe(value = "words") %>%
unnest_tokens(word, words)
```
19\.1 Regular expressions using *Macbeth*
-----------------------------------------
As noted previously, working with textual data requires new tools.
In this section, we introduce the powerful grammar of regular expressions.
### 19\.1\.1 Parsing the text of the [*Scottish play*](https://en.wikipedia.org/w/index.php?search=Scottish%20play)
[*Project Gutenberg*](https://en.wikipedia.org/w/index.php?search=Project%20Gutenberg) contains the full\-text for all of [William Shakespeare](https://en.wikipedia.org/w/index.php?search=William%20Shakespeare)’s plays.
In this example, we will use text mining techniques to explore *The Tragedy of Macbeth*.
The text can be downloaded directly from Project Gutenberg.
Alternatively, the `Macbeth_raw` object is also included in the **mdsr** package.
```
library(tidyverse)
library(mdsr)
macbeth_url <- "http://www.gutenberg.org/cache/epub/1129/pg1129.txt"
Macbeth_raw <- RCurl::getURL(macbeth_url)
```
```
data(Macbeth_raw)
```
Note that `Macbeth_raw` is a *single* string of text (i.e., a character vector of length 1\) that contains the entire play. In order to work with this, we want to split this single string into a vector of strings using the `str_split()` function from the **stringr**. To do this, we just have to specify the end\-of\-line character(s), which in this case are: `\r\n`.
```
# str_split returns a list: we only want the first element
macbeth <- Macbeth_raw %>%
str_split("\r\n") %>%
pluck(1)
length(macbeth)
```
```
[1] 3194
```
Now let’s examine the text. Note that each speaking line begins with two spaces, followed by the speaker’s name in capital letters.
```
macbeth[300:310]
```
```
[1] "meeting a bleeding Sergeant."
[2] ""
[3] " DUNCAN. What bloody man is that? He can report,"
[4] " As seemeth by his plight, of the revolt"
[5] " The newest state."
[6] " MALCOLM. This is the sergeant"
[7] " Who like a good and hardy soldier fought"
[8] " 'Gainst my captivity. Hail, brave friend!"
[9] " Say to the King the knowledge of the broil"
[10] " As thou didst leave it."
[11] " SERGEANT. Doubtful it stood,"
```
The power of text mining comes from quantifying ideas embedded in the text. For example, how many times does the character Macbeth speak in the play? Think about this question for a moment. If you were holding a physical copy of the play, how would you compute this number? Would you flip through the book and mark down each speaking line on a separate piece of paper? Is your algorithm scalable? What if you had to do it for *all* characters in the play, and not just Macbeth? What if you had to do it for *all 37* of Shakespeare’s plays? What if you had to do it for all plays written in English?
Naturally, a computer cannot read the play and figure this out, but we can find all instances of Macbeth’s speaking lines by cleverly counting patterns in the text.
```
macbeth_lines <- macbeth %>%
str_subset(" MACBETH")
length(macbeth_lines)
```
```
[1] 147
```
```
head(macbeth_lines)
```
```
[1] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[2] " MACBETH. So foul and fair a day I have not seen."
[3] " MACBETH. Speak, if you can. What are you?"
[4] " MACBETH. Stay, you imperfect speakers, tell me more."
[5] " MACBETH. Into the air, and what seem'd corporal melted"
[6] " MACBETH. Your children shall be kings."
```
The `str_subset()` function works using a [*needle*](https://en.wikipedia.org/w/index.php?search=needle) in a [*haystack*](https://en.wikipedia.org/w/index.php?search=haystack) paradigm, wherein the first argument is the character vector in which you want to find patterns (i.e., the haystack) and the second argument is the [*regular expression*](https://en.wikipedia.org/w/index.php?search=regular%20expression) (or pattern) you want to find (i.e., the needle).
Alternatively, `str_which()` returns the *indices* of the haystack in which the needles were found.
By changing the needle, we find different results:
```
macbeth %>%
str_subset(" MACDUFF") %>%
length()
```
```
[1] 60
```
The `str_detect()` function—which we use in the example in the next section—uses the same syntax but returns a logical vector as long as the haystack. Thus, while the length of the vector returned by `str_subset()` is the number of matches, the length of the vector returned by `str_detect()` is always the same as the length of the haystack vector.[38](#fn38)
```
macbeth %>%
str_subset(" MACBETH") %>%
length()
```
```
[1] 147
```
```
macbeth %>%
str_detect(" MACBETH") %>%
length()
```
```
[1] 3194
```
To extract the piece of each matching line that actually matched, use the `str_extract()` function from the **stringr** package.
```
pattern <- " MACBETH"
macbeth %>%
str_subset(pattern) %>%
str_extract(pattern) %>%
head()
```
```
[1] " MACBETH" " MACBETH" " MACBETH" " MACBETH" " MACBETH" " MACBETH"
```
Above, we use a literal string (e.g., “`MACBETH`”) as our needle to find exact matches in our haystack. This is the simplest type of pattern for which we could have searched, but the needle that `str_extract()` searches for can be any regular expression.
Regular expression syntax is very powerful and as a result, can become very complicated. Still, regular expressions are a grammar, so that learning a few basic concepts will allow you to build more efficient searches.
* **Metacharacters**: `.` is a [*metacharacter*](https://en.wikipedia.org/w/index.php?search=metacharacter) that matches any character. Note that if you want to search for the literal value of a metacharacter (e.g., a period), you have to escape it with a backslash. To use the pattern in **R**, two backslashes are needed. Note the difference in the results below.
```
macbeth %>%
str_subset("MAC.") %>%
head()
```
```
[1] "MACHINE READABLE COPIES MAY BE DISTRIBUTED SO LONG AS SUCH COPIES"
[2] "MACHINE READABLE COPIES OF THIS ETEXT, SO LONG AS SUCH COPIES"
[3] "WITH PERMISSION. ELECTRONIC AND MACHINE READABLE COPIES MAY BE"
[4] "THE TRAGEDY OF MACBETH"
[5] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[6] " LADY MACBETH, his wife"
```
```
macbeth %>%
str_subset("MACBETH\\.") %>%
head()
```
```
[1] " MACBETH. So foul and fair a day I have not seen."
[2] " MACBETH. Speak, if you can. What are you?"
[3] " MACBETH. Stay, you imperfect speakers, tell me more."
[4] " MACBETH. Into the air, and what seem'd corporal melted"
[5] " MACBETH. Your children shall be kings."
[6] " MACBETH. And Thane of Cawdor too. Went it not so?"
```
* **Character sets**: Use brackets to define sets of characters to match. This pattern will match any lines that contain `MAC` followed by any capital letter other than `A`. It will match `MACBETH` but not `MACALESTER`.
```
macbeth %>%
str_subset("MAC[B-Z]") %>%
head()
```
```
[1] "MACHINE READABLE COPIES MAY BE DISTRIBUTED SO LONG AS SUCH COPIES"
[2] "MACHINE READABLE COPIES OF THIS ETEXT, SO LONG AS SUCH COPIES"
[3] "WITH PERMISSION. ELECTRONIC AND MACHINE READABLE COPIES MAY BE"
[4] "THE TRAGEDY OF MACBETH"
[5] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[6] " LADY MACBETH, his wife"
```
* **Alternation**: To search for a few specific alternatives, use the `|` wrapped in parentheses. This pattern will match any lines that contain either `MACB` or `MACD`.
```
macbeth %>%
str_subset("MAC(B|D)") %>%
head()
```
```
[1] "THE TRAGEDY OF MACBETH"
[2] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[3] " LADY MACBETH, his wife"
[4] " MACDUFF, Thane of Fife, a nobleman of Scotland"
[5] " LADY MACDUFF, his wife"
[6] " MACBETH. So foul and fair a day I have not seen."
```
* **Anchors**: Use `^` to anchor a pattern to the beginning of a piece of text, and `$` to anchor it to the end.
```
macbeth %>%
str_subset("^ MAC[B-Z]") %>%
head()
```
```
[1] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[2] " MACDUFF, Thane of Fife, a nobleman of Scotland"
[3] " MACBETH. So foul and fair a day I have not seen."
[4] " MACBETH. Speak, if you can. What are you?"
[5] " MACBETH. Stay, you imperfect speakers, tell me more."
[6] " MACBETH. Into the air, and what seem'd corporal melted"
```
* **Repetitions**: We can also specify the number of times that we want certain patterns to occur: `?` indicates zero or one time, `*` indicates zero or more times, and `+` indicates one or more times. This quantification is applied to the previous element in the pattern—in this case, a space.
```
macbeth %>%
str_subset("^ ?MAC[B-Z]") %>%
head()
```
```
[1] "MACHINE READABLE COPIES MAY BE DISTRIBUTED SO LONG AS SUCH COPIES"
[2] "MACHINE READABLE COPIES OF THIS ETEXT, SO LONG AS SUCH COPIES"
```
```
macbeth %>%
str_subset("^ *MAC[B-Z]") %>%
head()
```
```
[1] "MACHINE READABLE COPIES MAY BE DISTRIBUTED SO LONG AS SUCH COPIES"
[2] "MACHINE READABLE COPIES OF THIS ETEXT, SO LONG AS SUCH COPIES"
[3] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[4] " MACDUFF, Thane of Fife, a nobleman of Scotland"
[5] " MACBETH. So foul and fair a day I have not seen."
[6] " MACBETH. Speak, if you can. What are you?"
```
```
macbeth %>%
str_subset("^ +MAC[B-Z]") %>%
head()
```
```
[1] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[2] " MACDUFF, Thane of Fife, a nobleman of Scotland"
[3] " MACBETH. So foul and fair a day I have not seen."
[4] " MACBETH. Speak, if you can. What are you?"
[5] " MACBETH. Stay, you imperfect speakers, tell me more."
[6] " MACBETH. Into the air, and what seem'd corporal melted"
```
Combining these basic rules can automate incredibly powerful and sophisticated searches and are an increasingly necessary tool in every data scientist’s toolbox.
Regular expressions are a powerful and commonly\-used tool. They are implemented in many programming languages. Developing a working understanding of regular expressions will pay off in text wrangling.
### 19\.1\.2 Life and death in *Macbeth*
Can we use these techniques to analyze the speaking patterns in Macbeth? Are there things we can learn about the play simply by noting who speaks when? Four of the major characters in *Macbeth* are the titular character, his wife Lady Macbeth, his friend Banquo, and Duncan, the King of Scotland.
We might learn something about the play by knowing when each character speaks as a function of the line number in the play. We can retrieve this information using `str_detect()`.
```
macbeth_chars <- tribble(
~name, ~regexp,
"Macbeth", " MACBETH\\.",
"Lady Macbeth", " LADY MACBETH\\.",
"Banquo", " BANQUO\\.",
"Duncan", " DUNCAN\\.",
) %>%
mutate(speaks = map(regexp, str_detect, string = macbeth))
```
However, for plotting purposes we will want to convert these `logical` vectors into `numeric` vectors, and tidy up the data. Since there is unwanted text at the beginning and the end of the play text, we will also restrict our analysis to the actual contents of the play (which occurs from line 218 to line 3172\).
```
speaker_freq <- macbeth_chars %>%
unnest(cols = speaks) %>%
mutate(
line = rep(1:length(macbeth), 4),
speaks = as.numeric(speaks)
) %>%
filter(line > 218 & line < 3172)
glimpse(speaker_freq)
```
```
Rows: 11,812
Columns: 4
$ name <chr> "Macbeth", "Macbeth", "Macbeth", "Macbeth", "Macbeth", "Mac…
$ regexp <chr> " MACBETH\\.", " MACBETH\\.", " MACBETH\\.", " MACBETH\…
$ speaks <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
$ line <int> 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230,…
```
Before we create the plot, we will gather some helpful contextual information about when each Act begins.
```
acts <- tibble(
line = str_which(macbeth, "^ACT [I|V]+"),
line_text = str_subset(macbeth, "^ACT [I|V]+"),
labels = str_extract(line_text, "^ACT [I|V]+")
)
```
Finally, Figure [19\.1](ch-text.html#fig:macbeth) illustrates how King Duncan of Scotland is killed early in Act II (never to speak again), with Banquo to follow in Act III.
Soon afterwards in Act IV,
Lady Macbeth—overcome by guilt over the role she played in Duncan’s murder—kills herself. The play and Act V conclude with a battle in which Macbeth is killed.
```
ggplot(data = speaker_freq, aes(x = line, y = speaks)) +
geom_smooth(
aes(color = name), method = "loess",
se = FALSE, span = 0.4
) +
geom_vline(
data = acts,
aes(xintercept = line),
color = "darkgray", lty = 3
) +
geom_text(
data = acts,
aes(y = 0.085, label = labels),
hjust = "left", color = "darkgray"
) +
ylim(c(0, NA)) +
xlab("Line Number") +
ylab("Proportion of Speeches") +
scale_color_brewer(palette = "Set2")
```
Figure 19\.1: Speaking parts for four major characters. Duncan is killed early in the play and never speaks again.
### 19\.1\.1 Parsing the text of the [*Scottish play*](https://en.wikipedia.org/w/index.php?search=Scottish%20play)
[*Project Gutenberg*](https://en.wikipedia.org/w/index.php?search=Project%20Gutenberg) contains the full\-text for all of [William Shakespeare](https://en.wikipedia.org/w/index.php?search=William%20Shakespeare)’s plays.
In this example, we will use text mining techniques to explore *The Tragedy of Macbeth*.
The text can be downloaded directly from Project Gutenberg.
Alternatively, the `Macbeth_raw` object is also included in the **mdsr** package.
```
library(tidyverse)
library(mdsr)
macbeth_url <- "http://www.gutenberg.org/cache/epub/1129/pg1129.txt"
Macbeth_raw <- RCurl::getURL(macbeth_url)
```
```
data(Macbeth_raw)
```
Note that `Macbeth_raw` is a *single* string of text (i.e., a character vector of length 1\) that contains the entire play. In order to work with this, we want to split this single string into a vector of strings using the `str_split()` function from the **stringr**. To do this, we just have to specify the end\-of\-line character(s), which in this case are: `\r\n`.
```
# str_split returns a list: we only want the first element
macbeth <- Macbeth_raw %>%
str_split("\r\n") %>%
pluck(1)
length(macbeth)
```
```
[1] 3194
```
Now let’s examine the text. Note that each speaking line begins with two spaces, followed by the speaker’s name in capital letters.
```
macbeth[300:310]
```
```
[1] "meeting a bleeding Sergeant."
[2] ""
[3] " DUNCAN. What bloody man is that? He can report,"
[4] " As seemeth by his plight, of the revolt"
[5] " The newest state."
[6] " MALCOLM. This is the sergeant"
[7] " Who like a good and hardy soldier fought"
[8] " 'Gainst my captivity. Hail, brave friend!"
[9] " Say to the King the knowledge of the broil"
[10] " As thou didst leave it."
[11] " SERGEANT. Doubtful it stood,"
```
The power of text mining comes from quantifying ideas embedded in the text. For example, how many times does the character Macbeth speak in the play? Think about this question for a moment. If you were holding a physical copy of the play, how would you compute this number? Would you flip through the book and mark down each speaking line on a separate piece of paper? Is your algorithm scalable? What if you had to do it for *all* characters in the play, and not just Macbeth? What if you had to do it for *all 37* of Shakespeare’s plays? What if you had to do it for all plays written in English?
Naturally, a computer cannot read the play and figure this out, but we can find all instances of Macbeth’s speaking lines by cleverly counting patterns in the text.
```
macbeth_lines <- macbeth %>%
str_subset(" MACBETH")
length(macbeth_lines)
```
```
[1] 147
```
```
head(macbeth_lines)
```
```
[1] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[2] " MACBETH. So foul and fair a day I have not seen."
[3] " MACBETH. Speak, if you can. What are you?"
[4] " MACBETH. Stay, you imperfect speakers, tell me more."
[5] " MACBETH. Into the air, and what seem'd corporal melted"
[6] " MACBETH. Your children shall be kings."
```
The `str_subset()` function works using a [*needle*](https://en.wikipedia.org/w/index.php?search=needle) in a [*haystack*](https://en.wikipedia.org/w/index.php?search=haystack) paradigm, wherein the first argument is the character vector in which you want to find patterns (i.e., the haystack) and the second argument is the [*regular expression*](https://en.wikipedia.org/w/index.php?search=regular%20expression) (or pattern) you want to find (i.e., the needle).
Alternatively, `str_which()` returns the *indices* of the haystack in which the needles were found.
By changing the needle, we find different results:
```
macbeth %>%
str_subset(" MACDUFF") %>%
length()
```
```
[1] 60
```
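As a small sketch (output not shown), `str_which()` with the same needle returns the indices of the matching lines rather than the lines themselves:

```
# Line numbers of the lines mentioning MACDUFF
macbeth %>%
  str_which(" MACDUFF") %>%
  head()
```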
The `str_detect()` function—which we use in the example in the next section—uses the same syntax but returns a logical vector as long as the haystack. Thus, while the length of the vector returned by `str_subset()` is the number of matches, the length of the vector returned by `str_detect()` is always the same as the length of the haystack vector.[38](#fn38)
```
macbeth %>%
str_subset(" MACBETH") %>%
length()
```
```
[1] 147
```
```
macbeth %>%
str_detect(" MACBETH") %>%
length()
```
```
[1] 3194
```
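Because `str_detect()` returns one logical value per line, summing that vector recovers the number of matching lines; a minimal sketch:

```
# TRUE counts as 1, so the sum equals the number of matching lines
macbeth %>%
  str_detect(" MACBETH") %>%
  sum()
```

This should return 147, agreeing with the `str_subset()` count above.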
To extract the piece of each matching line that actually matched, use the `str_extract()` function from the **stringr** package.
```
pattern <- " MACBETH"
macbeth %>%
str_subset(pattern) %>%
str_extract(pattern) %>%
head()
```
```
[1] " MACBETH" " MACBETH" " MACBETH" " MACBETH" " MACBETH" " MACBETH"
```
Above, we use a literal string (e.g., “`MACBETH`”) as our needle to find exact matches in our haystack. This is the simplest type of pattern for which we could have searched, but the needle that `str_extract()` searches for can be any regular expression.
Regular expression syntax is very powerful and, as a result, can become very complicated. Still, regular expressions follow a grammar, so learning a few basic concepts will allow you to build efficient and precise searches.
* **Metacharacters**: `.` is a [*metacharacter*](https://en.wikipedia.org/w/index.php?search=metacharacter) that matches any character. Note that if you want to search for the literal value of a metacharacter (e.g., a period), you have to escape it with a backslash. To use the pattern in **R**, two backslashes are needed. Note the difference in the results below.
```
macbeth %>%
str_subset("MAC.") %>%
head()
```
```
[1] "MACHINE READABLE COPIES MAY BE DISTRIBUTED SO LONG AS SUCH COPIES"
[2] "MACHINE READABLE COPIES OF THIS ETEXT, SO LONG AS SUCH COPIES"
[3] "WITH PERMISSION. ELECTRONIC AND MACHINE READABLE COPIES MAY BE"
[4] "THE TRAGEDY OF MACBETH"
[5] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[6] " LADY MACBETH, his wife"
```
```
macbeth %>%
str_subset("MACBETH\\.") %>%
head()
```
```
[1] " MACBETH. So foul and fair a day I have not seen."
[2] " MACBETH. Speak, if you can. What are you?"
[3] " MACBETH. Stay, you imperfect speakers, tell me more."
[4] " MACBETH. Into the air, and what seem'd corporal melted"
[5] " MACBETH. Your children shall be kings."
[6] " MACBETH. And Thane of Cawdor too. Went it not so?"
```
* **Character sets**: Use brackets to define sets of characters to match. This pattern will match any lines that contain `MAC` followed by any capital letter other than `A`. It will match `MACBETH` but not `MACALESTER`.
```
macbeth %>%
str_subset("MAC[B-Z]") %>%
head()
```
```
[1] "MACHINE READABLE COPIES MAY BE DISTRIBUTED SO LONG AS SUCH COPIES"
[2] "MACHINE READABLE COPIES OF THIS ETEXT, SO LONG AS SUCH COPIES"
[3] "WITH PERMISSION. ELECTRONIC AND MACHINE READABLE COPIES MAY BE"
[4] "THE TRAGEDY OF MACBETH"
[5] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[6] " LADY MACBETH, his wife"
```
* **Alternation**: To search for a few specific alternatives, use the `|` wrapped in parentheses. This pattern will match any lines that contain either `MACB` or `MACD`.
```
macbeth %>%
str_subset("MAC(B|D)") %>%
head()
```
```
[1] "THE TRAGEDY OF MACBETH"
[2] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[3] " LADY MACBETH, his wife"
[4] " MACDUFF, Thane of Fife, a nobleman of Scotland"
[5] " LADY MACDUFF, his wife"
[6] " MACBETH. So foul and fair a day I have not seen."
```
* **Anchors**: Use `^` to anchor a pattern to the beginning of a piece of text, and `$` to anchor it to the end.
```
macbeth %>%
str_subset("^ MAC[B-Z]") %>%
head()
```
```
[1] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[2] " MACDUFF, Thane of Fife, a nobleman of Scotland"
[3] " MACBETH. So foul and fair a day I have not seen."
[4] " MACBETH. Speak, if you can. What are you?"
[5] " MACBETH. Stay, you imperfect speakers, tell me more."
[6] " MACBETH. Into the air, and what seem'd corporal melted"
```
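The `$` anchor works the same way at the end of a line. As a small sketch (output not shown), this pattern finds lines that end with a question mark; the `?` itself must be escaped because it is also a metacharacter.

```
macbeth %>%
  str_subset("\\?$") %>%
  head()
```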
* **Repetitions**: We can also specify the number of times that we want certain patterns to occur: `?` indicates zero or one time, `*` indicates zero or more times, and `+` indicates one or more times. This quantification is applied to the previous element in the pattern—in this case, a space.
```
macbeth %>%
str_subset("^ ?MAC[B-Z]") %>%
head()
```
```
[1] "MACHINE READABLE COPIES MAY BE DISTRIBUTED SO LONG AS SUCH COPIES"
[2] "MACHINE READABLE COPIES OF THIS ETEXT, SO LONG AS SUCH COPIES"
```
```
macbeth %>%
str_subset("^ *MAC[B-Z]") %>%
head()
```
```
[1] "MACHINE READABLE COPIES MAY BE DISTRIBUTED SO LONG AS SUCH COPIES"
[2] "MACHINE READABLE COPIES OF THIS ETEXT, SO LONG AS SUCH COPIES"
[3] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[4] " MACDUFF, Thane of Fife, a nobleman of Scotland"
[5] " MACBETH. So foul and fair a day I have not seen."
[6] " MACBETH. Speak, if you can. What are you?"
```
```
macbeth %>%
str_subset("^ +MAC[B-Z]") %>%
head()
```
```
[1] " MACBETH, Thane of Glamis and Cawdor, a general in the King's"
[2] " MACDUFF, Thane of Fife, a nobleman of Scotland"
[3] " MACBETH. So foul and fair a day I have not seen."
[4] " MACBETH. Speak, if you can. What are you?"
[5] " MACBETH. Stay, you imperfect speakers, tell me more."
[6] " MACBETH. Into the air, and what seem'd corporal melted"
```
Combining these basic rules enables incredibly powerful and sophisticated searches, and regular expressions are an increasingly necessary tool in every data scientist’s toolbox.
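As an illustration of how these rules combine, here is a sketch (output not shown) that pulls out the set of speaker names by matching two leading spaces, a run of capital letters and spaces, and a trailing period:

```
macbeth %>%
  str_extract("^  [A-Z][A-Z ]+\\.") %>% # speaker tag at the start of a line
  na.omit() %>% # drop lines without a speaker tag
  str_remove("\\.$") %>% # strip the trailing period
  str_trim() %>%
  unique() %>%
  head()
```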
Regular expressions are a powerful and commonly\-used tool. They are implemented in many programming languages. Developing a working understanding of regular expressions will pay off in text wrangling.
### 19\.1\.2 Life and death in *Macbeth*
Can we use these techniques to analyze the speaking patterns in Macbeth? Are there things we can learn about the play simply by noting who speaks when? Four of the major characters in *Macbeth* are the titular character, his wife Lady Macbeth, his friend Banquo, and Duncan, the King of Scotland.
We might learn something about the play by knowing when each character speaks as a function of the line number in the play. We can retrieve this information using `str_detect()`.
```
macbeth_chars <- tribble(
~name, ~regexp,
"Macbeth", " MACBETH\\.",
"Lady Macbeth", " LADY MACBETH\\.",
"Banquo", " BANQUO\\.",
"Duncan", " DUNCAN\\.",
) %>%
mutate(speaks = map(regexp, str_detect, string = macbeth))
```
However, for plotting purposes we will want to convert these `logical` vectors into `numeric` vectors, and tidy up the data. Since there is unwanted text at the beginning and the end of the play text, we will also restrict our analysis to the actual contents of the play (which occurs from line 218 to line 3172\).
```
speaker_freq <- macbeth_chars %>%
unnest(cols = speaks) %>%
mutate(
line = rep(1:length(macbeth), 4),
speaks = as.numeric(speaks)
) %>%
filter(line > 218 & line < 3172)
glimpse(speaker_freq)
```
```
Rows: 11,812
Columns: 4
$ name <chr> "Macbeth", "Macbeth", "Macbeth", "Macbeth", "Macbeth", "Mac…
$ regexp <chr> " MACBETH\\.", " MACBETH\\.", " MACBETH\\.", " MACBETH\…
$ speaks <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
$ line <int> 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230,…
```
Before we create the plot, we will gather some helpful contextual information about when each Act begins.
```
acts <- tibble(
line = str_which(macbeth, "^ACT [I|V]+"),
line_text = str_subset(macbeth, "^ACT [I|V]+"),
labels = str_extract(line_text, "^ACT [I|V]+")
)
```
Finally, Figure [19\.1](ch-text.html#fig:macbeth) illustrates how King Duncan of Scotland is killed early in Act II (never to speak again), with Banquo to follow in Act III.
Soon afterwards in Act IV,
Lady Macbeth—overcome by guilt over the role she played in Duncan’s murder—kills herself. The play and Act V conclude with a battle in which Macbeth is killed.
```
ggplot(data = speaker_freq, aes(x = line, y = speaks)) +
geom_smooth(
aes(color = name), method = "loess",
se = FALSE, span = 0.4
) +
geom_vline(
data = acts,
aes(xintercept = line),
color = "darkgray", lty = 3
) +
geom_text(
data = acts,
aes(y = 0.085, label = labels),
hjust = "left", color = "darkgray"
) +
ylim(c(0, NA)) +
xlab("Line Number") +
ylab("Proportion of Speeches") +
scale_color_brewer(palette = "Set2")
```
Figure 19\.1: Speaking parts for four major characters. Duncan is killed early in the play and never speaks again.
19\.2 Extended example: Analyzing textual data from arXiv.org
-------------------------------------------------------------
The [*arXiv*](https://en.wikipedia.org/w/index.php?search=arXiv) (pronounced “archive”) is a fast\-growing electronic repository of preprints of scientific papers from many disciplines.
The **aRxiv** package provides an application programming interface (API) to the files and metadata available on [the arXiv](https://www.arxiv.org).
We will use the 1,089 papers that matched the search term “`data science`” in the repository as of August, 2020 to try to better understand the discipline.
The following code was used to generate this file.
```
library(aRxiv)
DataSciencePapers <- arxiv_search(
query = '"Data Science"',
limit = 20000,
batchsize = 100
)
```
We have also included the resulting data frame `DataSciencePapers` in the **mdsr** package, so to use this selection of papers downloaded from the archive, you can simply load it (this will avoid unduly straining the arXiv server).
```
data(DataSciencePapers)
```
Note that there are two columns in this data set (`submitted` and `updated`) that are clearly storing dates, but they are stored as `character` vectors.
```
glimpse(DataSciencePapers)
```
```
Rows: 1,089
Columns: 15
$ id <chr> "astro-ph/0701361v1", "0901.2805v1", "0901.3118v2…
$ submitted <chr> "2007-01-12 03:28:11", "2009-01-19 10:38:33", "20…
$ updated <chr> "2007-01-12 03:28:11", "2009-01-19 10:38:33", "20…
$ title <chr> "How to Make the Dream Come True: The Astronomers…
$ abstract <chr> " Astronomy is one of the most data-intensive of…
$ authors <chr> "Ray P Norris", "Heinz Andernach", "O. V. Verkhod…
$ affiliations <chr> "", "", "Special Astrophysical Observatory, Nizhn…
$ link_abstract <chr> "http://arxiv.org/abs/astro-ph/0701361v1", "http:…
$ link_pdf <chr> "http://arxiv.org/pdf/astro-ph/0701361v1", "http:…
$ link_doi <chr> "", "http://dx.doi.org/10.2481/dsj.8.41", "http:/…
$ comment <chr> "Submitted to Data Science Journal Presented at C…
$ journal_ref <chr> "", "", "", "", "EPJ Data Science, 1:9, 2012", ""…
$ doi <chr> "", "10.2481/dsj.8.41", "10.2481/dsj.8.34", "", "…
$ primary_category <chr> "astro-ph", "astro-ph.IM", "astro-ph.IM", "astro-…
$ categories <chr> "astro-ph", "astro-ph.IM|astro-ph.CO", "astro-ph.…
```
To make sure that **R** understands those variables as dates, we will once again use the **lubridate** package (see Chapter [6](ch-dataII.html#ch:dataII)).
After this conversion, **R** can deal with these two columns as measurements of time.
```
library(lubridate)
DataSciencePapers <- DataSciencePapers %>%
mutate(
submitted = lubridate::ymd_hms(submitted),
updated = lubridate::ymd_hms(updated)
)
glimpse(DataSciencePapers)
```
```
Rows: 1,089
Columns: 15
$ id <chr> "astro-ph/0701361v1", "0901.2805v1", "0901.3118v2…
$ submitted <dttm> 2007-01-12 03:28:11, 2009-01-19 10:38:33, 2009-0…
$ updated <dttm> 2007-01-12 03:28:11, 2009-01-19 10:38:33, 2009-0…
$ title <chr> "How to Make the Dream Come True: The Astronomers…
$ abstract <chr> " Astronomy is one of the most data-intensive of…
$ authors <chr> "Ray P Norris", "Heinz Andernach", "O. V. Verkhod…
$ affiliations <chr> "", "", "Special Astrophysical Observatory, Nizhn…
$ link_abstract <chr> "http://arxiv.org/abs/astro-ph/0701361v1", "http:…
$ link_pdf <chr> "http://arxiv.org/pdf/astro-ph/0701361v1", "http:…
$ link_doi <chr> "", "http://dx.doi.org/10.2481/dsj.8.41", "http:/…
$ comment <chr> "Submitted to Data Science Journal Presented at C…
$ journal_ref <chr> "", "", "", "", "EPJ Data Science, 1:9, 2012", ""…
$ doi <chr> "", "10.2481/dsj.8.41", "10.2481/dsj.8.34", "", "…
$ primary_category <chr> "astro-ph", "astro-ph.IM", "astro-ph.IM", "astro-…
$ categories <chr> "astro-ph", "astro-ph.IM|astro-ph.CO", "astro-ph.…
```
We begin by examining the distribution of submission years.
How has interest grown in `data science`?
```
mosaic::tally(~ year(submitted), data = DataSciencePapers)
```
```
year(submitted)
2007 2009 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020
1 3 3 7 15 25 52 94 151 187 313 238
```
We see that the first paper was submitted in 2007, but that submissions have increased considerably since then.
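As a quick visual companion to that tally, here is a minimal **ggplot2** sketch (the **tidyverse** and **lubridate** loads above provide everything it needs):

```
DataSciencePapers %>%
  ggplot(aes(x = year(submitted))) +
  geom_bar() +
  labs(x = "Year submitted", y = "Number of papers")
```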
Let’s take a closer look at one of the papers, in this case one that focuses on causal inference.
```
DataSciencePapers %>%
filter(id == "1809.02408v2") %>%
glimpse()
```
```
Rows: 1
Columns: 15
$ id <chr> "1809.02408v2"
$ submitted <dttm> 2018-09-07 11:26:51
$ updated <dttm> 2019-03-05 04:38:35
$ title <chr> "A Primer on Causality in Data Science"
$ abstract <chr> " Many questions in Data Science are fundamental…
$ authors <chr> "Hachem Saddiki|Laura B. Balzer"
$ affiliations <chr> ""
$ link_abstract <chr> "http://arxiv.org/abs/1809.02408v2"
$ link_pdf <chr> "http://arxiv.org/pdf/1809.02408v2"
$ link_doi <chr> ""
$ comment <chr> "26 pages (with references); 4 figures"
$ journal_ref <chr> ""
$ doi <chr> ""
$ primary_category <chr> "stat.AP"
$ categories <chr> "stat.AP|stat.ME|stat.ML"
```
We see that this is a primer on causality in data science that was submitted in 2018 and updated in 2019 with a primary category of `stat.AP`.
What fields are generating the most papers in our dataset?
A quick glance at the `primary_category` variable reveals a cryptic list of fields and sub\-fields starting alphabetically with astronomy.
```
DataSciencePapers %>%
group_by(primary_category) %>%
count() %>%
head()
```
```
# A tibble: 6 × 2
# Groups: primary_category [6]
primary_category n
<chr> <int>
1 astro-ph 1
2 astro-ph.CO 3
3 astro-ph.EP 1
4 astro-ph.GA 7
5 astro-ph.IM 20
6 astro-ph.SR 6
```
It may be more helpful to focus simply on the primary field (the part before the period).
We can use a regular expression to extract only the primary field, which may contain a dash (`-`), but otherwise is all lowercase characters.
Once we have this information extracted, we can `tally()` those primary fields.
```
DataSciencePapers <- DataSciencePapers %>%
mutate(
field = str_extract(primary_category, "^[a-z,-]+"),
)
mosaic::tally(x = ~field, margins = TRUE, data = DataSciencePapers) %>%
sort()
```
```
field
gr-qc hep-ph nucl-th hep-th econ quant-ph cond-mat q-fin
1 1 1 3 5 7 12 15
q-bio eess astro-ph physics math stat cs Total
16 29 38 62 103 150 646 1089
```
It appears that more than half (\\(646/1089 \= 59\\)%) of these papers come from computer science, while roughly one quarter come from mathematics and statistics.
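As a quick check on that arithmetic, a short sketch (using the `field` variable created above) computes each field’s share of the corpus:

```
DataSciencePapers %>%
  count(field, sort = TRUE) %>%
  mutate(proportion = n / sum(n)) %>%
  head(3)
```

The `cs` share should come out to roughly 59%, matching the arithmetic above.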
### 19\.2\.1 Corpora
Text mining is often performed not just on one text document, but on a collection of many text documents, called a [*corpus*](https://en.wikipedia.org/w/index.php?search=corpus).
Can we use the arXiv.org papers to learn more about papers in data science?
The **tidytext** package provides a consistent and elegant approach to analyzing text data.
The `unnest_tokens()` function helps prepare data for text analysis.
It uses a [*tokenizer*](https://en.wikipedia.org/w/index.php?search=tokenizer) to split the text lines.
By default the function maps characters to lowercase.
Here we use this function to count word frequencies for each of the papers (other options include N\-grams, lines, or sentences).
```
library(tidytext)
DataSciencePapers %>%
unnest_tokens(word, abstract) %>%
count(id, word, sort = TRUE)
```
```
# A tibble: 120,330 × 3
id word n
<chr> <chr> <int>
1 2003.11213v1 the 31
2 1508.02387v1 the 30
3 1711.10558v1 the 30
4 1805.09320v2 the 30
5 2004.04813v2 the 27
6 2007.08242v1 the 27
7 1711.09726v3 the 26
8 1805.11012v1 the 26
9 1909.10578v1 the 26
10 1404.5971v2 the 25
# … with 120,320 more rows
```
We see that the word `the` is the most common word in many abstracts.
This is not a particularly helpful insight.
It’s a common practice to exclude [*stop words*](https://en.wikipedia.org/w/index.php?search=stop%20words) such as `a`, `the`, and `you`.
The `get_stopwords()` function from the **tidytext** package uses the **stopwords** package to facilitate this task.
Let’s try again.
```
arxiv_words <- DataSciencePapers %>%
unnest_tokens(word, abstract) %>%
anti_join(get_stopwords(), by = "word")
arxiv_words %>%
count(id, word, sort = TRUE)
```
```
# A tibble: 93,559 × 3
id word n
<chr> <chr> <int>
1 2007.03606v1 data 20
2 1708.04664v1 data 19
3 1606.06769v1 traffic 17
4 1705.03451v2 data 17
5 1601.06035v1 models 16
6 1807.09127v2 job 16
7 2003.10534v1 data 16
8 1611.09874v1 ii 15
9 1808.04849v1 data 15
10 1906.03418v1 data 15
# … with 93,549 more rows
```
We now see that the word `data` is, not surprisingly, the most common non\-stop word in many of the abstracts.
It is convenient to save a variable (`abstract_clean`) with the abstract after removing stopwords and mapping all characters to lowercase.
```
arxiv_abstracts <- arxiv_words %>%
group_by(id) %>%
summarize(abstract_clean = paste(word, collapse = " "))
arxiv_papers <- DataSciencePapers %>%
left_join(arxiv_abstracts, by = "id")
```
We can now see the before and after for the first part of the abstract of our previously selected paper.
```
single_paper <- arxiv_papers %>%
filter(id == "1809.02408v2")
single_paper %>%
pull(abstract) %>%
strwrap() %>%
head()
```
```
[1] "Many questions in Data Science are fundamentally causal in that our"
[2] "objective is to learn the effect of some exposure, randomized or"
[3] "not, on an outcome interest. Even studies that are seemingly"
[4] "non-causal, such as those with the goal of prediction or prevalence"
[5] "estimation, have causal elements, including differential censoring"
[6] "or measurement. As a result, we, as Data Scientists, need to"
```
```
single_paper %>%
pull(abstract_clean) %>%
strwrap() %>%
head(4)
```
```
[1] "many questions data science fundamentally causal objective learn"
[2] "effect exposure randomized outcome interest even studies seemingly"
[3] "non causal goal prediction prevalence estimation causal elements"
[4] "including differential censoring measurement result data scientists"
```
### 19\.2\.2 Word clouds
At this stage, we have taken what was a coherent English abstract and reduced it to a collection of individual, non\-trivial English words.
We have transformed something that was easy for humans to read into *data*.
Unfortunately, it is not obvious how we can learn from these data.
One rudimentary approach is to construct a [*word cloud*](https://en.wikipedia.org/w/index.php?search=word%20cloud)—a kind of multivariate histogram for words. The **wordcloud** package can generate these graphical depictions of word frequencies.
```
library(wordcloud)
set.seed(1966)
arxiv_papers %>%
pull(abstract_clean) %>%
wordcloud(
max.words = 40,
scale = c(8, 1),
colors = topo.colors(n = 30),
random.color = TRUE
)
```
Figure 19\.2: A word cloud of terms that appear in the abstracts of arXiv papers on data science.
Although word clouds such as the one shown in Figure [19\.2](ch-text.html#fig:wordcloud1) have limited abilities to convey meaning, they can be useful for quickly visualizing the prevalence of words in large corpora.
### 19\.2\.3 Sentiment analysis
Can we start to automate a process to discern some meaning from the text?
The use of [*sentiment analysis*](https://en.wikipedia.org/w/index.php?search=sentiment%20analysis) is a simplistic but straightforward way to begin.
A [*lexicon*](https://en.wikipedia.org/w/index.php?search=lexicon) is a word list with associated sentiments (e.g., positivity, negativity) that have been labeled.
A number of such lexicons have been created with such tags.
Here is a sample of sentiment scores for one lexicon.
```
afinn <- get_sentiments("afinn")
afinn %>%
slice_sample(n = 15) %>%
arrange(desc(value))
```
```
# A tibble: 15 × 2
word value
<chr> <dbl>
1 impress 3
2 joyfully 3
3 advantage 2
4 faith 1
5 grant 1
6 laugh 1
7 apologise -1
8 lurk -1
9 ghost -1
10 deriding -2
11 detention -2
12 dirtiest -2
13 embarrassment -2
14 mocks -2
15 mournful -2
```
For the AFINN (Nielsen 2011\) lexicon, each word is associated with an integer value, ranging from \\(\-5\\) to 5\.
We can join this lexicon with our data to calculate a sentiment score.
```
arxiv_words %>%
inner_join(afinn, by = "word") %>%
select(word, id, value)
```
```
# A tibble: 7,393 × 3
word id value
<chr> <chr> <dbl>
1 ambitious astro-ph/0701361v1 2
2 powerful astro-ph/0701361v1 2
3 impotent astro-ph/0701361v1 -2
4 like astro-ph/0701361v1 2
5 agree astro-ph/0701361v1 1
6 better 0901.2805v1 2
7 better 0901.2805v1 2
8 better 0901.2805v1 2
9 improve 0901.2805v1 2
10 support 0901.3118v2 2
# … with 7,383 more rows
```
```
arxiv_sentiments <- arxiv_words %>%
left_join(afinn, by = "word") %>%
group_by(id) %>%
summarize(
num_words = n(),
sentiment = sum(value, na.rm = TRUE),
.groups = "drop"
) %>%
mutate(sentiment_per_word = sentiment / num_words) %>%
arrange(desc(sentiment))
```
Here we used `left_join()` to ensure that if no words in the abstract matched words in the lexicon, we will still have something to sum (in this case a number of NA’s, which sum to 0\).
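A quick check of that behavior: when every value is missing, `sum()` with `na.rm = TRUE` returns 0.

```
sum(c(NA_real_, NA_real_), na.rm = TRUE) # all values missing, so the sum is 0
```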
We can now add this new variable to our dataset of papers.
```
arxiv_papers <- arxiv_papers %>%
left_join(arxiv_sentiments, by = "id")
arxiv_papers %>%
skim(sentiment, sentiment_per_word)
```
```
── Variable type: numeric ──────────────────────────────────────────────────
var n na mean sd p0 p25 p50 p75
1 sentiment 1089 0 4.02 7.00 -26 0 4 8
2 sentiment_per_word 1089 0 0.0360 0.0633 -0.227 0 0.0347 0.0714
p100
1 39
2 0.333
```
The average sentiment score of these papers is 4, but they range from \\(\-26\\) to 39\.
Surely, abstracts with more words might accrue a higher sentiment score.
We can control for abstract length by dividing by the number of words.
The paper with the highest sentiment score per word had a score of 0\.333\.
Let’s take a closer look at the most positive abstract.
```
most_positive <- arxiv_papers %>%
filter(sentiment_per_word == max(sentiment_per_word)) %>%
pull(abstract)
strwrap(most_positive)
```
```
[1] "Data science is creating very exciting trends as well as"
[2] "significant controversy. A critical matter for the healthy"
[3] "development of data science in its early stages is to deeply"
[4] "understand the nature of data and data science, and to discuss the"
[5] "various pitfalls. These important issues motivate the discussions"
[6] "in this article."
```
We see a number of positive words (e.g., “exciting,” “significant,” “important”) included in this upbeat abstract.
We can also explore if there are time trends or differences between different disciplines (see Figure [19\.3](ch-text.html#fig:arxiv-papers)).
```
ggplot(
arxiv_papers,
aes(
x = submitted, y = sentiment_per_word,
color = field == "cs"
)
) +
geom_smooth(se = TRUE) +
scale_color_brewer("Computer Science?", palette = "Set2") +
labs(x = "Date submitted", y = "Sentiment score per word")
```
Figure 19\.3: Average sum sentiment scores over time by field.
There’s mild evidence for a downward trend over time.
Computer science papers have slightly higher sentiment, but the difference is modest.
### 19\.2\.4 Bigrams and N\-grams
We can also start to explore more sophisticated patterns within our corpus.
An [*N\-gram*](https://en.wikipedia.org/w/index.php?search=N-gram) is a contiguous sequence of \\(n\\) “words.”
Thus, a \\(1\\)\-gram is a single word (e.g., “text”), while a 2\-gram ([*bigram*](https://en.wikipedia.org/w/index.php?search=bigram)) is a pair of words (e.g. “text mining”).
We can use the same techniques to identify the most common pairs of words.
```
arxiv_bigrams <- arxiv_papers %>%
unnest_tokens(
arxiv_bigram,
abstract_clean,
token = "ngrams",
n = 2
) %>%
select(arxiv_bigram, id)
arxiv_bigrams
```
```
# A tibble: 121,454 × 2
arxiv_bigram id
<chr> <chr>
1 astronomy one astro-ph/0701361v1
2 one data astro-ph/0701361v1
3 data intensive astro-ph/0701361v1
4 intensive sciences astro-ph/0701361v1
5 sciences data astro-ph/0701361v1
6 data technology astro-ph/0701361v1
7 technology accelerating astro-ph/0701361v1
8 accelerating quality astro-ph/0701361v1
9 quality effectiveness astro-ph/0701361v1
10 effectiveness research astro-ph/0701361v1
# … with 121,444 more rows
```
```
arxiv_bigrams %>%
count(arxiv_bigram, sort = TRUE)
```
```
# A tibble: 96,822 × 2
arxiv_bigram n
<chr> <int>
1 data science 953
2 machine learning 403
3 big data 139
4 state art 121
5 data analysis 111
6 deep learning 108
7 neural networks 100
8 real world 97
9 large scale 83
10 data driven 80
# … with 96,812 more rows
```
Not surprisingly, `data science` is the most common bigram.
### 19\.2\.5 Document term matrices
Another important technique in text mining involves the calculation of a [*term frequency\-inverse document frequency*](https://en.wikipedia.org/w/index.php?search=term%20frequency-inverse%20document%20frequency) ([*tf\-idf*](https://en.wikipedia.org/w/index.php?search=tf-idf)), or [*document term matrix*](https://en.wikipedia.org/w/index.php?search=document%20term%20matrix).
The term frequency of a term \\(t\\) in a document \\(d\\) is denoted \\(tf(t,d)\\) and is simply equal to the number of times that the term \\(t\\) appears in document \\(d\\) divided by the number of words in the document.
On the other hand, the inverse document frequency measures the prevalence of a term across a set of documents \\(D\\).
In particular,
\\\[
idf(t, D) \= \\log \\frac{\|D\|}{\|\\{d \\in D: t \\in d\\}\|} \\,.
\\]
Finally, \\(tf\\\_idf(t,d,D) \= tf(t,d) \\cdot idf(t, D)\\).
The \\(tf\\\_idf\\) is commonly used in search engines, when the relevance of a particular word is needed across a body of documents.
Note that unless they are excluded (as we have done above), commonly\-used words like `the` will appear in every document.
Their inverse document frequency score will therefore be zero, and so their \\(tf\\\_idf\\) will also be zero regardless of the term frequency.
This is a desired result, since words like `the` are never important in full\-text searches.
Rather, documents with high \\(tf\\\_idf\\) scores for a particular term will contain that particular term many times relative to its appearance across many documents.
Such documents are likely to be more relevant to the search term being used.
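To make these formulas concrete, here is a minimal sketch on a hypothetical two\-document corpus (not the arXiv data) that computes \\(tf\\), \\(idf\\), and \\(tf\\\_idf\\) by hand; the result should agree with what `bind_tf_idf()` computes for the real corpus below.

```
library(tidyverse) # for tribble() and the dplyr verbs

# Two tiny "documents," already tokenized into one word per row
toy_words <- tribble(
  ~doc, ~word,
  "d1", "data", "d1", "science", "d1", "data",
  "d2", "data", "d2", "causal"
)

toy_words %>%
  count(doc, word) %>%
  group_by(doc) %>%
  mutate(tf = n / sum(n)) %>% # tf(t, d): share of the words in d that are t
  ungroup() %>%
  add_count(word, name = "n_docs") %>% # number of documents containing t
  mutate(
    idf = log(n_distinct(doc) / n_docs), # idf(t, D)
    tf_idf = tf * idf
  )
```

Because “data” appears in both toy documents, its \\(idf\\) is \\(\\log(2/2) \= 0\\), so its \\(tf\\\_idf\\) vanishes no matter how often it is used, mirroring the discussion of common words like `the` above.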
The most commonly\-used words in our corpora are listed below.
Not surprisingly “data” and “science” are at the top of the list.
```
arxiv_words %>%
count(word) %>%
arrange(desc(n)) %>%
head()
```
```
# A tibble: 6 × 2
word n
<chr> <int>
1 data 3222
2 science 1122
3 learning 804
4 can 731
5 model 540
6 analysis 488
```
However, the term frequency metric is calculated on a per word, per document basis.
It answers the question of which abstracts use a word most often.
```
tidy_DTM <- arxiv_words %>%
count(id, word) %>%
bind_tf_idf(word, id, n)
tidy_DTM %>%
arrange(desc(tf)) %>%
head()
```
```
# A tibble: 6 × 6
id word n tf idf tf_idf
<chr> <chr> <int> <dbl> <dbl> <dbl>
1 2007.03606v1 data 20 0.169 0.128 0.0217
2 1707.07029v1 concept 1 0.167 3.30 0.551
3 1707.07029v1 data 1 0.167 0.128 0.0214
4 1707.07029v1 implications 1 0.167 3.77 0.629
5 1707.07029v1 reflections 1 0.167 6.30 1.05
6 1707.07029v1 science 1 0.167 0.408 0.0680
```
We see that among all terms in all papers, “data” has the highest term frequency for paper `2007.03606v1` (0\.169\).
Nearly 17% of the non\-stopwords in this paper’s abstract were “data.”
However, as we saw above, since “data” is the most common word in the entire corpus, it has the *lowest* inverse document frequency (0\.128\).
The `tf_idf` score for “data” in paper `2007.03606v1` is thus \\(0\.169 \\cdot 0\.128 \= 0\.022\\).
This is not a particularly large value, so a search for “data” would not bring this paper to the top of the list.
```
tidy_DTM %>%
arrange(desc(idf), desc(n)) %>%
head()
```
```
# A tibble: 6 × 6
id word n tf idf tf_idf
<chr> <chr> <int> <dbl> <dbl> <dbl>
1 1507.00333v3 mf 14 0.107 6.99 0.747
2 1611.09874v1 fe 13 0.0549 6.99 0.384
3 1611.09874v1 mg 11 0.0464 6.99 0.325
4 2003.00646v1 wildfire 10 0.0518 6.99 0.362
5 1506.08903v7 ph 9 0.0703 6.99 0.492
6 1710.06905v1 homeless 9 0.0559 6.99 0.391
```
On the other hand, “wildfire” has a high `idf` score since it is included in only one abstract (though it is used 10 times).
```
arxiv_papers %>%
pull(abstract) %>%
str_subset("wildfire") %>%
strwrap() %>%
head()
```
```
[1] "Artificial intelligence has been applied in wildfire science and"
[2] "management since the 1990s, with early applications including"
[3] "neural networks and expert systems. Since then the field has"
[4] "rapidly progressed congruently with the wide adoption of machine"
[5] "learning (ML) in the environmental sciences. Here, we present a"
[6] "scoping review of ML in wildfire science and management. Our"
```
In contrast, “implications” appears in 25 abstracts.
```
tidy_DTM %>%
filter(word == "implications")
```
```
# A tibble: 25 × 6
id word n tf idf tf_idf
<chr> <chr> <int> <dbl> <dbl> <dbl>
1 1310.4461v2 implications 1 0.00840 3.77 0.0317
2 1410.6646v1 implications 1 0.00719 3.77 0.0272
3 1511.07643v1 implications 1 0.00621 3.77 0.0234
4 1601.04890v2 implications 1 0.00680 3.77 0.0257
5 1608.05127v1 implications 1 0.00595 3.77 0.0225
6 1706.03102v1 implications 1 0.00862 3.77 0.0325
7 1707.07029v1 implications 1 0.167 3.77 0.629
8 1711.04712v1 implications 1 0.00901 3.77 0.0340
9 1803.05991v1 implications 1 0.00595 3.77 0.0225
10 1804.10846v6 implications 1 0.00909 3.77 0.0343
# … with 15 more rows
```
The `tf_idf` field can be used to help identify keywords for an article.
For our previously selected paper, “causal,” “exposure,” or “question” would be good choices.
```
tidy_DTM %>%
filter(id == "1809.02408v2") %>%
arrange(desc(tf_idf)) %>%
head()
```
```
# A tibble: 6 × 6
id word n tf idf tf_idf
<chr> <chr> <int> <dbl> <dbl> <dbl>
1 1809.02408v2 causal 10 0.0775 4.10 0.318
2 1809.02408v2 exposure 2 0.0155 5.38 0.0835
3 1809.02408v2 question 3 0.0233 3.23 0.0752
4 1809.02408v2 roadmap 2 0.0155 4.80 0.0744
5 1809.02408v2 parametric 2 0.0155 4.16 0.0645
6 1809.02408v2 effect 2 0.0155 3.95 0.0612
```
A search for “covid” yields several papers that address the pandemic directly.
```
tidy_DTM %>%
filter(word == "covid") %>%
arrange(desc(tf_idf)) %>%
head() %>%
left_join(select(arxiv_papers, id, abstract), by = "id")
```
```
# A tibble: 6 × 7
id word n tf idf tf_idf abstract
<chr> <chr> <int> <dbl> <dbl> <dbl> <chr>
1 2006.00… covid 10 0.0637 4.80 0.305 " Context: The dire consequence…
2 2004.09… covid 5 0.0391 4.80 0.187 " The Covid-19 outbreak, beyond…
3 2003.08… covid 3 0.0246 4.80 0.118 " The relative case fatality ra…
4 2006.01… covid 3 0.0222 4.80 0.107 " This document analyzes the ro…
5 2003.12… covid 3 0.0217 4.80 0.104 " The COVID-19 pandemic demands…
6 2006.05… covid 3 0.0170 4.80 0.0817 " This paper aims at providing …
```
The (document, term) pair with the highest overall `tf_idf` is “reflections” (a rarely\-used word having a high `idf` score), in a paper that includes only six non\-stopwords in its abstract.
Note that “implications” and “society” also garner high `tf_idf` scores for that same paper.
```
tidy_DTM %>%
arrange(desc(tf_idf)) %>%
head() %>%
left_join(select(arxiv_papers, id, abstract), by = "id")
```
```
# A tibble: 6 × 7
id word n tf idf tf_idf abstract
<chr> <chr> <int> <dbl> <dbl> <dbl> <chr>
1 1707.07… reflec… 1 0.167 6.30 1.05 " Reflections on the Concept …
2 2007.12… fintech 8 0.123 6.99 0.861 " Smart FinTech has emerged a…
3 1507.00… mf 14 0.107 6.99 0.747 " Low-rank matrix factorizati…
4 1707.07… implic… 1 0.167 3.77 0.629 " Reflections on the Concept …
5 1707.07… society 1 0.167 3.70 0.616 " Reflections on the Concept …
6 1906.04… utv 8 0.0860 6.99 0.602 " In this work, a novel rank-…
```
The `cast_dtm()` function can be used to create a document term matrix.
```
tm_DTM <- arxiv_words %>%
count(id, word) %>%
cast_dtm(id, word, n, weighting = tm::weightTfIdf)
tm_DTM
```
```
<<DocumentTermMatrix (documents: 1089, terms: 12317)>>
Non-/sparse entries: 93559/13319654
Sparsity : 99%
Maximal term length: 37
Weighting : term frequency - inverse document frequency (normalized) (tf-idf)
```
By default, each entry in that matrix records the [*term frequency*](https://en.wikipedia.org/w/index.php?search=term%20frequency) (i.e., the number of times that each word appeared in each document).
However, in this case we will specify that the entries record the normalized \\(tf\\\_idf\\) as defined above.
Note that the `DTM` matrix is very sparse—99% of the entries are 0\.
This makes sense, since most words do not appear in most documents (abstracts, for our example).
We can now use tools from other packages (e.g., **tm**) to explore associations.
For example, the `findFreqTerms()` function applied to the `DTM` object finds the words with the highest \\(tf\\\_idf\\) scores.
Note how these results differ from the word cloud in Figure [19\.2](ch-text.html#fig:wordcloud1).
By term frequency, the word `data` is by far the most common, but this gives it a low \\(idf\\) score that brings down its \\(tf\\\_idf\\).
```
tm::findFreqTerms(tm_DTM, lowfreq = 7)
```
```
[1] "analysis" "information" "research" "learning" "time"
[6] "network" "problem" "can" "algorithm" "algorithms"
[11] "based" "methods" "model" "models" "machine"
```
Since `tm_DTM` contains all of the \\(tf\\\_idf\\) scores for each word, we can extract those values and calculate the score of each word across all of the abstracts.
```
tm_DTM %>%
as.matrix() %>%
as_tibble() %>%
map_dbl(sum) %>%
sort(decreasing = TRUE) %>%
head()
```
```
learning model models machine analysis algorithms
10.10 9.30 8.81 8.04 7.84 7.72
```
Moreover, we can identify which terms tend to show up in the same documents as the word `causal` using the `findAssocs()` function.
In this case, we explore the words that have a correlation of at least 0\.35 with the term `causal`.
```
tm::findAssocs(tm_DTM, terms = "causal", corlimit = 0.35)
```
```
$causal
estimand laan petersen stating tmle exposure der
0.57 0.57 0.57 0.57 0.57 0.39 0.38
censoring gave
0.35 0.35
```
19\.3 Ingesting text
--------------------
In Chapter [6](ch-dataII.html#ch:dataII) (see Section [6\.4\.1\.2](ch-dataII.html#sec:htmltab))
we illustrated how the **rvest** package can be used to convert tabular data presented on the Web in HTML format into a proper **R** data table. Here, we present another example of how this process can bring text data into **R**.
### 19\.3\.1 Example: Scraping the songs of the Beatles
In Chapter [14](ch-vizIII.html#ch:vizIII), we explored the popularity of the names for the four members of the [*Beatles*](https://en.wikipedia.org/w/index.php?search=Beatles). During their heyday from 1962–1970, the Beatles were prolific—recording hundreds of songs.
In this example, we explore who sang lead vocals and which words appear in the song titles.
We begin by downloading the contents of [the Wikipedia page that lists the Beatles’ songs](http://en.wikipedia.org/wiki/List_of_songs_recorded_by_the_Beatles).
```
library(rvest)
url <- "http://en.wikipedia.org/wiki/List_of_songs_recorded_by_the_Beatles"
tables <- url %>%
read_html() %>%
html_nodes("table")
Beatles_songs <- tables %>%
purrr::pluck(3) %>%
html_table(fill = TRUE) %>%
janitor::clean_names() %>%
select(song, lead_vocal_s_d)
glimpse(Beatles_songs)
```
```
Rows: 213
Columns: 2
$ song <chr> "\"Across the Universe\"[e]", "\"Act Naturally\"", …
$ lead_vocal_s_d <chr> "John Lennon", "Ringo Starr", "Lennon", "Paul McCar…
```
We need to clean these data a bit.
Note that the `song` variable contains quotation marks.
The `lead_vocal_s_d` variable would benefit from being renamed.
```
Beatles_songs <- Beatles_songs %>%
mutate(song = str_remove_all(song, pattern = '\\"')) %>%
rename(vocals = lead_vocal_s_d)
```
Most of the Beatles’ songs were sung by some combination of [John Lennon](https://en.wikipedia.org/w/index.php?search=John%20Lennon) and [Paul McCartney](https://en.wikipedia.org/w/index.php?search=Paul%20McCartney).
While their productive but occasionally contentious working relationship is well\-documented, we might be interested in determining how many songs each person is credited with singing.
```
Beatles_songs %>%
group_by(vocals) %>%
count() %>%
arrange(desc(n))
```
```
# A tibble: 18 × 2
# Groups: vocals [18]
vocals n
<chr> <int>
1 Lennon 66
2 McCartney 60
3 Harrison 28
4 LennonMcCartney 15
5 Lennon(with McCartney) 12
6 Starr 10
7 McCartney(with Lennon) 9
8 Lennon(with McCartneyand Harrison) 3
9 Instrumental 1
10 John Lennon 1
11 Lennon(with Yoko Ono) 1
12 LennonHarrison 1
13 LennonMcCartneyHarrison 1
14 McCartney(with Lennon,Harrison,and Starr) 1
15 McCartneyLennonHarrison 1
16 Paul McCartney 1
17 Ringo Starr 1
18 Sound Collage 1
```
Lennon and McCartney sang separately and together.
Other band members (notably [Ringo Starr](https://en.wikipedia.org/w/index.php?search=Ringo%20Starr) and [George Harrison](https://en.wikipedia.org/w/index.php?search=George%20Harrison)) also sang, along with many rarer combinations.
Regular expressions can help us parse these data.
We have already seen the number of songs sung by each person individually, and it isn’t hard to figure out how many songs each person contributed vocals to in some form.
```
Beatles_songs %>%
pull(vocals) %>%
str_subset("McCartney") %>%
length()
```
```
[1] 103
```
```
Beatles_songs %>%
pull(vocals) %>%
str_subset("Lennon") %>%
length()
```
```
[1] 111
```
John was credited with singing on more songs than Paul.
How many of these songs were the product of some type of Lennon\-McCartney collaboration?
Given the inconsistency in how the vocals are attributed, it requires some ingenuity to extract these data.
We can search the `vocals` variable for either `McCartney` or `Lennon` (or both), and count these instances.
```
Beatles_songs %>%
pull(vocals) %>%
str_subset("(McCartney|Lennon)") %>%
length()
```
```
[1] 172
```
At this point, we need another regular expression to figure out how many songs they both sang on.
The following will find the pattern consisting of either `McCartney` or `Lennon`, followed by a possibly empty string of characters, followed by another instance of either `McCartney` or `Lennon`.
```
pj_regexp <- "(McCartney|Lennon).*(McCartney|Lennon)"
Beatles_songs %>%
pull(vocals) %>%
str_subset(pj_regexp) %>%
length()
```
```
[1] 42
```
Note also that we can use `str_detect()` in a `filter()` command to retrieve the list of songs upon which Lennon and McCartney both sang.
```
Beatles_songs %>%
filter(str_detect(vocals, pj_regexp)) %>%
select(song, vocals) %>%
head()
```
```
# A tibble: 6 × 2
song vocals
<chr> <chr>
1 All Together Now McCartney(with Lennon)
2 Any Time at All Lennon(with McCartney)
3 Baby's in Black LennonMcCartney
4 Because LennonMcCartneyHarrison
5 Birthday McCartney(with Lennon)
6 Carry That Weight McCartney(with Lennon,Harrison,and Starr)
```
The Beatles have had such a profound influence upon musicians of all stripes that it might be worth investigating the titles of their songs.
What were they singing about?
```
Beatles_songs %>%
unnest_tokens(word, song) %>%
anti_join(get_stopwords(), by = "word") %>%
count(word, sort = TRUE) %>%
arrange(desc(n)) %>%
head()
```
```
# A tibble: 6 × 2
word n
<chr> <int>
1 love 9
2 want 7
3 got 6
4 hey 6
5 long 6
6 baby 4
```
Fittingly, “Love” is the most common word in the title of Beatles songs.
19\.4 Further resources
-----------------------
Silge and Robinson’s [*Tidy Text Mining in R*](https://github.com/dgrtwo/tidy-text-mining) book
has an extensive set of examples of text mining and sentiment analysis (Silge and Robinson 2017, 2016\).
[Emil Hvitfeldt](https://en.wikipedia.org/w/index.php?search=Emil%20Hvitfeldt) and [Julia Silge](https://en.wikipedia.org/w/index.php?search=Julia%20Silge) have [announced](https://www.hvitfeldt.me/blog/smltar-announcement/) a tidy approach to supervised machine learning for text analysis.
Text analytics has a rich history of being used to infer authorship of the Federalist papers (Frederick Mosteller and Wallace 1963\) and Beatles songs (Glickman, Brown, and Song 2019\).
Google has collected \\(n\\)\-grams for a huge number of books and provides an [interface](https://books.google.com/ngrams) to these data.
[Wikipedia](http://en.wikipedia.org/wiki/Regular_expression) provides a clear overview of syntax for sophisticated pattern\-matching within strings using regular expressions.
There are many sources to find text data online.
[Project Gutenberg](http://www.gutenberg.org/wiki/Main_Page) is a massive free online library. Project Gutenberg collects the full\-text of more than 50,000 books whose copyrights have expired. It is great for older, classic books. You won’t find anything by [Stephen King](https://en.wikipedia.org/w/index.php?search=Stephen%20King) (but there is one by [Stephen King\-Hall](https://en.wikipedia.org/w/index.php?search=Stephen%20King-Hall)). Direct access to Project Gutenberg is available in **R** through the **gutenbergr** package.
The **tidytext** and **textdata** packages support other lexicons for sentiment analysis, including “bing,” “nrc,” and “loughran.”
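As a minimal sketch (not from the text) of the workflow these resources support, the following pulls a public\-domain book with **gutenbergr** and tallies its words against the “bing” lexicon. The Gutenberg ID used (1342, assumed here to correspond to *Pride and Prejudice*) and the choice of lexicon are illustrative assumptions, not part of the original example.
```
library(tidyverse)
library(tidytext)
library(gutenbergr)

# Download the full text of one public-domain book (ID 1342 is assumed here)
pride <- gutenberg_download(1342)

# Tokenize into words, keep only words found in the "bing" lexicon, and tally sentiment
pride %>%
  unnest_tokens(word, text) %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(sentiment)
```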
19\.5 Exercises
---------------
**Problem 1 (Easy)**: Use the `Macbeth_raw` data from the `mdsr` package to answer the following questions:
1. Speaking lines in Shakespeare’s plays are identified by a line that starts with two spaces, then a string of capital letters and spaces (the character’s name) followed by a period. Use `grep` to find all of the speaking lines in *Macbeth*. How many are there?
2. Find all the hyphenated words in *Macbeth*.
**Problem 2 (Easy)**:
1. Find all of the adjectives in *Macbeth* that end in *more* or *less* using `Macbeth_raw` in `mdsr`.
2. Find all of the lines containing the stage direction *Exit* or *Exeunt* in *Macbeth*.
**Problem 3 (Easy)**: Given the vector of words below, determine the output of the following regular expressions without running the R code.
```
x <- c(
"popular", "popularity", "popularize", "popularise",
"Popular", "Population", "repopulate", "reproduce",
"happy family", "happier\tfamily", " happy family", "P6dn"
)
x
```
```
[1] "popular" "popularity" "popularize" "popularise"
[5] "Popular" "Population" "repopulate" "reproduce"
[9] "happy family" "happier\tfamily" " happy family" "P6dn"
```
```
str_subset(x, pattern = "pop") #1
str_detect(x, pattern = "^pop") #2
str_detect(x, pattern = "populari[sz]e") #3
str_detect(x, pattern = "pop.*e") #4
str_detect(x, pattern = "p[a-z]*e") #5
str_detect(x, pattern = "^[Pp][a-z]+.*n") #6
str_subset(x, pattern = "^[^Pp]") #7
str_detect(x, pattern = "^[A-Za-p]") #8
str_detect(x, pattern = "[ ]") #9
str_subset(x, pattern = "[\t]") #10
str_detect(x, pattern = "[ \t]") #11
str_subset(x, pattern = "^[ ]") #12
```
**Problem 4 (Easy)**: Use the `babynames` data table from the `babynames` package to find the 10 most popular:
1. Boys’ names ending in a vowel.
2. Names ending with `joe`, `jo`, `Joe`, or `Jo` (e.g., *Billyjoe*).
**Problem 5 (Easy)**: Wikipedia defines a hashtag as “a type of metadata tag used on social networks such as Twitter and other microblogging services, allowing users to apply dynamic, user\-generated tagging which makes it possible for others to easily find messages with a specific theme or content.” A hashtag must begin with a hash character followed by other characters, and is terminated by a space or end of message. It is always safe to precede the \# with a space, and to include letters without diacritics (e.g., accents), digits, and underscores. Provide a regular expression that matches whether a string contains a valid hashtag.
```
strings <- c(
"This string has no hashtags",
"#hashtag city!",
"This string has a #hashtag",
"This string has #two #hashtags"
)
```
**Problem 6 (Easy)**: A ZIP (zone improvement program) code is a code used by the United States Postal Service to route mail. The Zip \+ 4 code includes the five digits of the ZIP Code, followed by a hyphen and four digits that designate a more specific location. Provide a regular expression that matches strings that consist of a Zip \+ 4 code.
**Problem 7 (Medium)**: Create a DTM (document term matrix) for the collection of Emily Dickinson’s poems in the `DickinsonPoems` package. Find the terms with the highest *tf.idf* scores. Choose one of these terms and find any of its strongly correlated terms.
```
# remotes::install_github("Amherst-Statistics/DickinsonPoems")
```
**Problem 8 (Medium)**: A text analytics project is using scanned data to create a corpus.
Many of the lines have been hyphenated in the original text.
```
text_lines <- tibble(
lines = c("This is the first line.",
"This line is hyphen- ",
"ated. It's very diff-",
"icult to use at present.")
)
```
Write a function that can be used to remove the hyphens and concatenate the parts of the words that are split on the line where they first appeared.
**Problem 9 (Medium)**: Find all titles of Emily Dickinson’s poems (not including the Roman numerals) in the first 10 poems of the `DickinsonPoems` package.
(Hint: the titles are all caps.)
**Problem 10 (Medium)**:
Classify Emily Dickinson’s poem *The Lonely House* as either positive or negative using the `AFINN` lexicon. Does this match with your own interpretation of the poem? Use the `DickinsonPoems` package.
```
library(DickinsonPoems)
poem <- get_poem("gutenberg1.txt014")
```
**Problem 11 (Medium)**: Generate a regular expression to return the second word in a vector.
```
x <- c("one two three", "four five six", "SEVEN EIGHT")
```
When applied to vector x, the result should be:
```
[1] "two" "five" "EIGHT"
```
**Problem 12 (Hard)**: The `pdxTrees_parks` dataset from the `pdxTrees` package contains information on thousands of trees in the Portland, Oregon area. Using the `species_factoid` variable, investigate any interesting trends within the facts.
19\.6 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/text.html\#text\-online\-exercises](https://mdsr-book.github.io/mdsr2e/text.html#text-online-exercises)
**Problem 1 (Medium)**:
1. The site <stackexchange.com> displays questions and answers on technical topics. The following code downloads the most recent R questions related to the `dplyr` package.
```
library(httr)
# Find the most recent R questions on stackoverflow
getresult <- GET("http://api.stackexchange.com",
path = "questions",
query = list(site = "stackoverflow.com", tagged = "dplyr")
)
# Ensure returned without error
stop_for_status(getresult)
```
```
questions <- httr::content(getresult) # Grab content
names(questions$items[[1]]) # What does the returned data look like?
```
```
[1] "tags" "owner" "is_answered"
[4] "view_count" "answer_count" "score"
[7] "last_activity_date" "creation_date" "last_edit_date"
[10] "question_id" "content_license" "link"
[13] "title"
```
```
length(questions$items)
```
```
[1] 30
```
```
substr(questions$items[[1]]$title, 1, 68)
```
```
[1] "How to loop distance calculations for multiple instances using dplyr"
```
```
substr(questions$items[[2]]$title, 1, 68)
```
```
[1] "Transform to wide format from long in R"
```
```
substr(questions$items[[3]]$title, 1, 68)
```
```
[1] "filter row when multiple colums can be concerned"
```
How many questions were returned? Without using jargon, describe in words what is being displayed and how it might be used.
2. Repeat the process of downloading the content from <stackexchange.com> related to
the `dplyr` package and summarize the results.
**Problem 2 (Medium)**:
1. Use regular expressions to determine the number of speaking lines in [The Complete Works of William Shakespeare](http://www.gutenberg.org/cache/epub/100/pg100.txt). Here, we care only about how many times a character speaks—not what they say or for how long they speak.
2. Make a bar chart displaying the top 100 characters with the greatest number of lines. **Hint**: you may want to use either the `stringr::str_extract` or `strsplit` function here.
3. In this problem, you will do much of the work to recreate Mark Hansen’s *Shakespeare Machine*. Start by watching a [video clip](http://vimeo.com/54858820) of the exhibit. Use *The Complete Works of William Shakespeare* and regular expressions to find all of the hyphenated words in the *Shakespeare Machine*. How many are there? Use `%in%` to verify that your list contains the following hyphenated words pictured at 00:46 of the clip.
**Problem 3 (Hard)**: Given the dataframe of Emily Dickinson poems in the `DickinsonPoems` package, perform sentiment analysis and identify any interesting insights about her work overall.
```
library(tidyverse)
# remotes::install_github("Amherst-Statistics/DickinsonPoems")
library(DickinsonPoems)
library(tidytext)
poems_df <- list_poems() %>%
purrr::map(get_poem) %>%
unlist() %>%
enframe(value = "words") %>%
unnest_tokens(word, words)
```
Chapter 20 Network science
==========================
[Network science](http://en.wikipedia.org/wiki/Network_science) is an emerging interdisciplinary field that studies the properties of large and complex networks. Network scientists are interested in both theoretical properties of networks (e.g., mathematical models for degree distribution) and data\-based discoveries in real networks (e.g., the distribution of the number of friends on Facebook).
20\.1 Introduction to network science
-------------------------------------
### 20\.1\.1 Definitions
The roots of network science are in the mathematical discipline of [*graph theory*](https://en.wikipedia.org/w/index.php?search=graph%20theory). There are a few basic definitions that we need before we can proceed.
* A [*graph*](https://en.wikipedia.org/w/index.php?search=graph) \\(G\=(V,E)\\) is simply a set of [*vertices*](https://en.wikipedia.org/w/index.php?search=vertices) (or nodes) \\(V\\), and a set of *edges* (or links, or even ties) \\(E\\) between those nodes. It may be more convenient to think about a graph as being a [*network*](https://en.wikipedia.org/w/index.php?search=network). For example, in a network model of Facebook, each user is a vertex and each friend relation is an edge connecting two users. Thus, one can think of Facebook as a [*social network*](https://en.wikipedia.org/w/index.php?search=social%20network), but the underlying mathematical structure is just a graph. Discrete mathematicians have been studying graphs since [Leonhard Euler](https://en.wikipedia.org/w/index.php?search=Leonhard%20Euler) posed the [*Seven Bridges of Königsberg*](https://en.wikipedia.org/w/index.php?search=Seven%20Bridges%20of%20Königsberg) problem in 1736 (Euler 1953\).
* Edges in graphs can be [*directed*](https://en.wikipedia.org/w/index.php?search=directed) or [*undirected*](https://en.wikipedia.org/w/index.php?search=undirected). The difference is whether the relationship is mutual or one\-sided. For example, edges in the Facebook social network are undirected, because friendship is a mutual relationship. Conversely, edges in Twitter are directed, since you may follow someone who does not necessarily follow you.
* Edges (or less commonly, vertices) may be [*weighted*](https://en.wikipedia.org/w/index.php?search=weighted). The value of the weight represents some quantitative measure. For example, an airline may envision its flight network as a graph, in which each airport is a node, and edges are weighted according to the distance (in miles) from one airport to another. (If edges are unweighted, this is equivalent to setting all weights to 1\.)
* A [*path*](https://en.wikipedia.org/w/index.php?search=path) is a non\-self\-intersecting sequence of edges that connect two vertices. More formally, a path is a special case of a [*walk*](https://en.wikipedia.org/w/index.php?search=walk), which does allow self\-intersections (i.e., a vertex may appear in the walk more than once). There may be many paths, or no paths, between two vertices in a graph, but if there are any paths, then there is at least one [*shortest path*](https://en.wikipedia.org/w/index.php?search=shortest%20path) (or [*geodesic*](https://en.wikipedia.org/w/index.php?search=geodesic)). The notion of a shortest path is dependent upon a distance measure in the graph (usually, just the number of edges, or the sum of the edge weights).
A graph is [*connected*](https://en.wikipedia.org/w/index.php?search=connected) if there is a path between all pairs of vertices.
* The [*diameter*](https://en.wikipedia.org/w/index.php?search=diameter) of a graph is the length of the longest geodesic (i.e., the longest shortest \[sic] path) between any pairs of vertices. The [*eccentricity*](https://en.wikipedia.org/w/index.php?search=eccentricity) of a vertex \\(v\\) in a graph is the length of the longest geodesic starting at that vertex. Thus, in some sense a vertex with a low eccentricity is more central to the graph.
* In general, graphs do not have coordinates. Thus, there is no right way to draw a graph. Visualizing a graph is more art than science, but several graph layout algorithms are popular.
* Centrality: Since graphs don’t have coordinates, there is no obvious measure of [*centrality*](https://en.wikipedia.org/w/index.php?search=centrality). That is, it is frequently of interest to determine which nodes are most “central” to the network, but there are many different notions of centrality. We will discuss three:
+ Degree centrality: The [*degree*](https://en.wikipedia.org/w/index.php?search=degree) of a vertex within a graph is the number of edges incident to it. Thus, the degree of a node is a simple measure of centrality in which more highly connected nodes rank higher. President Obama has almost [100 million followers on Twitter](https://twitter.com/POTUS), whereas the vast majority of users have fewer than a thousand. Thus, the degree of the vertex representing President Obama in the Twitter network is in the millions, and he is more central to the network in terms of degree centrality.
+ Betweenness centrality: If a vertex \\(v\\) is more central to a graph, then you would suspect that more shortest paths between vertices would pass through \\(v\\). This is the notion of [*betweenness centrality*](https://en.wikipedia.org/w/index.php?search=betweenness%20centrality). Specifically, let \\(\\sigma(s,t)\\) be the number of geodesics between vertices \\(s\\) and \\(t\\) in a graph. Let \\(\\sigma\_v(s,t)\\) be the number of shortest paths between \\(s\\) and \\(t\\) that pass through \\(v\\). Then the betweenness centrality for \\(v\\) is the sum of the fractions \\(\\sigma\_v(s,t) / \\sigma(s,t)\\) over all possible pairs \\((s,t)\\). This figure (\\(C\_B(v)\\)) is often normalized by dividing by the number of pairs of vertices that do not include \\(v\\) in the graph.
\\\[
C\_B(v) \= \\frac{2}{(n\-1\)(n\-2\)} \\sum\_{s,t \\in V \\setminus \\{v\\}} \\frac{\\sigma\_v(s,t)}{\\sigma(s,t)} \\,,
\\]
where \\(n\\) is the number of vertices in the graph.
Note that President Obama’s high degree centrality would not necessarily translate into a high betweenness centrality.
+ [*Eigenvector centrality*](https://en.wikipedia.org/w/index.php?search=Eigenvector%20centrality): This is the essence of Google’s [*PageRank*](https://en.wikipedia.org/w/index.php?search=PageRank) algorithm, which we will discuss in Section [20\.3](ch-netsci.html#sec:pagerank).
Note that there are also notions of edge centrality that we will not discuss further. A short code sketch after this list illustrates how the three vertex centrality measures can be computed.
* In a social network, it is usually believed that if Alice and Bob are friends, and Alice and Carol are friends, then it is more likely than it otherwise would be that Bob and Carol are friends. This is the notion of [*triadic closure*](https://en.wikipedia.org/w/index.php?search=triadic%20closure), and it leads to measurements of [*clusters*](https://en.wikipedia.org/w/index.php?search=clusters) in real\-world networks.
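To make these definitions concrete, here is a minimal sketch (not drawn from the text) that builds a small, hypothetical five\-person friendship network with **tidygraph** and computes the three vertex centrality measures discussed above. The names and edges are invented for illustration.
```
library(tidyverse)
library(tidygraph)

# A toy undirected friendship graph: Alice knows Bob, Carol, and Dave;
# Bob knows Carol; Dave knows Eve
friends <- tbl_graph(
  nodes = tibble(name = c("Alice", "Bob", "Carol", "Dave", "Eve")),
  edges = tibble(from = c(1, 1, 1, 2, 4), to = c(2, 3, 4, 3, 5)),
  directed = FALSE
)

friends %>%
  mutate(
    degree = centrality_degree(),
    betweenness = centrality_betweenness(),
    eigen = centrality_eigen()
  ) %>%
  as_tibble()
```
In this toy network, Alice has both the highest degree and the highest betweenness, while Dave lies on every shortest path to Eve and so ranks second in betweenness despite having only two edges.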
### 20\.1\.2 A brief history of network science
As noted above, the study of graph theory began in the 1700s, but the inception of the field of network science was a paper published in 1959 by the legendary [Paul Erdős](https://en.wikipedia.org/w/index.php?search=Paul%20Erdős) and [Alfréd Rényi](https://en.wikipedia.org/w/index.php?search=Alfréd%20Rényi) (Erdős and Rényi 1959\). Erdős and Rényi proposed a model for a [*random graph*](https://en.wikipedia.org/w/index.php?search=random%20graph), where the number of vertices \\(n\\) is fixed, but the probability of an edge connecting any two vertices is \\(p\\). What do such graphs look like? What properties do they have? It is obvious that if \\(p\\) is very close to 0, then the graph will be almost empty, while conversely, if \\(p\\) is very close to 1, then the graph will be almost complete. Erdős and Rényi unexpectedly proved that for many graph properties \\(c\\) (e.g., connectedness, the existence of a cycle of a certain size, etc.), there is a threshold function \\(p\_c(n)\\) around which the structure of the graph seems to change rapidly. That is, for values of \\(p\\) slightly less than \\(p\_c(n)\\), the probability that a random graph is connected is close to zero, while for values of \\(p\\) just a bit larger than \\(p\_c(n)\\), the probability that a random graph is connected is close to one (see Figure [20\.1](ch-netsci.html#fig:er-graphs)).
This somewhat bizarre behavior has been called the [*phase transition*](https://en.wikipedia.org/w/index.php?search=phase%20transition) in allusion to physics, because it evokes, at a molecular level, how solids turn to liquids and liquids turn to gases. When temperatures are just above 32 degrees Fahrenheit, water is a liquid, but at just below 32 degrees, it becomes a solid.
```
library(tidyverse)
library(mdsr)
library(tidygraph)
library(ggraph)
set.seed(21)
n <- 100
p_star <- log(n)/n
plot_er <- function(n, p) {
g <- play_erdos_renyi(n, p, directed = FALSE)
ggraph(g) +
geom_edge_fan(width = 0.1) +
geom_node_point(size = 3, color = "dodgerblue") +
labs(
title = "Erdős--Rényi random graph",
subtitle = paste0("n = ", n, ", p = ", round(p, 4))
) +
theme_void()
}
plot_er(n, p = 0.8 * p_star)
plot_er(n, p = 1.2 * p_star)
```
Figure 20\.1: Two Erdős–Rényi random graphs on 100 vertices with different values of \\(p\\). The graph at left is not connected, but the graph at right is. The value of \\(p\\) hasn’t changed by much.
While many properties of the phase transition have been proven mathematically, they can also be illustrated using simulation (see Chapter [13](ch-simulation.html#ch:simulation)).
Throughout this chapter, we use the **tidygraph** package for constructing and manipulating graphs[39](#fn39), and the **ggraph** package for plotting graphs as **ggplot2** objects.
The **tidygraph** package provides the `play_erdos_renyi()` function for simulating Erdős–Rényi random graphs. In Figure [20\.2](ch-netsci.html#fig:connectedness-threshold), we show how the phase transition for connectedness appears around the threshold value of \\(p(n) \= \\log{n}/n\\). With \\(n\=1000\\), we have \\(p(n) \=\\) 0\.007\. Note how quickly the probability of being connected increases near the value of the threshold function.
```
n <- 1000
p_star <- log(n)/n
p <- rep(seq(from = 0, to = 2 * p_star, by = 0.001), each = 100)
sims <- tibble(n, p) %>%
mutate(
g = map2(n, p, play_erdos_renyi, directed = FALSE),
is_connected = map_int(g, ~with_graph(., graph_is_connected()))
)
ggplot(data = sims, aes(x = p, y = is_connected)) +
geom_vline(xintercept = p_star, color = "darkgray") +
geom_text(
x = p_star, y = 0.9, label = "Threshold value", hjust = "right"
) +
labs(
x = "Probability of edge existing",
y = "Probability that random graph is connected"
) +
geom_count() +
geom_smooth()
```
Figure 20\.2: Simulation of connectedness of ER random graphs on 1,000 vertices.
This surprising discovery demonstrated that random graphs had interesting properties. Yet it was less clear whether the Erdős–Rényi random graph model could produce graphs whose properties were similar to those that we observe in reality. That is, while the Erdős–Rényi random graph model was interesting in its own right, did it model reality well?
The answer turned out to be “no,” or at least, “not really.” In particular, Watts and Strogatz (1998\) identified two properties present in real\-world networks that were not present in Erdős–Rényi random graphs: triadic closure and large hubs.
As we saw above, triadic closure is the idea that two people with a friend in common are likely to be friends themselves. Real\-world (not necessarily social) networks tend to have this property, but Erdős–Rényi random graphs do not.
Similarly, real\-world networks tend to have large hubs—individual nodes with many edges. More specifically, whereas the distribution of the degrees of vertices in Erdős–Rényi random graphs can be shown to follow a [*Poisson distribution*](https://en.wikipedia.org/w/index.php?search=Poisson%20distribution), in real\-world networks the distribution tends to be flatter.
The Watts–Strogatz model provides a second random graph model that produces graphs more similar to those we observe in reality.
```
g <- play_smallworld(n_dim = 2, dim_size = 10, order = 5, p_rewire = 0.05)
```
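The graph `g` generated above exhibits the triadic closure that Erdős–Rényi graphs lack. As a minimal sketch (not from the text), we can compare its global clustering coefficient (transitivity) with that of an Erdős–Rényi graph having the same number of vertices and a comparable density; the use of `igraph::transitivity()` and `igraph::edge_density()` here is our own illustration.
```
# An Erdős–Rényi graph with the same number of vertices and (expected) density as g
g_er <- play_erdos_renyi(
  n = igraph::gorder(g), p = igraph::edge_density(g), directed = FALSE
)

# Global clustering coefficient: the fraction of connected triples that form triangles
igraph::transitivity(g)     # high: neighbors of a vertex tend to be neighbors themselves
igraph::transitivity(g_er)  # lower: close to the edge probability p
```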
In particular, many real\-world networks, including not only social networks but also the World Wide Web, citation networks, and many others, have a degree distribution that follows a [*power\-law*](https://en.wikipedia.org/w/index.php?search=power-law). These are known as [*scale\-free*](https://en.wikipedia.org/w/index.php?search=scale-free) networks and were popularized by [Albert\-László Barabási](https://en.wikipedia.org/w/index.php?search=Albert-László%20Barabási) in two widely\-cited papers, including R. Albert and Barabási (2002\), and his readable book (Barabási and Frangos 2014\).
Barabási and Albert proposed a third random graph model based on the notion of [*preferential attachment*](https://en.wikipedia.org/w/index.php?search=preferential%20attachment).
Here, new nodes are connected to old nodes based on the existing degree distribution of the old nodes.
Their model produces the power\-law degree distribution that has been observed in many different real\-world networks.
Here again, we can illustrate these properties using simulation.
The `play_barabasi_albert()` function in **tidygraph** will allow us to simulate a Barabási–Albert random graph.
Figure [20\.3](ch-netsci.html#fig:degree-dist) compares the degree distribution between an Erdős–Rényi random graph and a Barabási–Albert random graph.
```
g1 <- play_erdos_renyi(n, p = log(n)/n, directed = FALSE)
g2 <- play_barabasi_albert(n, power = 1, growth = 3, directed = FALSE)
summary(g1)
```
```
IGRAPH 31a3c1d U--- 1000 3419 -- Erdos renyi (gnp) graph
+ attr: name (g/c), type (g/c), loops (g/l), p (g/n)
```
```
summary(g2)
```
```
IGRAPH 88bc59e U--- 1000 2994 -- Barabasi graph
+ attr: name (g/c), power (g/n), m (g/n), zero.appeal (g/n),
| algorithm (g/c)
```
```
d <- tibble(
type = c("Erdos-Renyi", "Barabasi-Albert"),
graph = list(g1, g2)
) %>%
mutate(node_degree = map(graph, ~with_graph(., centrality_degree()))) %>%
unnest(node_degree)
ggplot(data = d, aes(x = node_degree, color = type)) +
geom_density(size = 2) +
scale_x_continuous(limits = c(0, 25))
```
Figure 20\.3: Degree distribution for two random graphs.
Network science is a very active area of research, with interesting unsolved problems for data scientists to investigate.
20\.2 Extended example: Six degrees of Kristen Stewart
------------------------------------------------------
In this extended example, we will explore a fun application of network science to [*Hollywood*](https://en.wikipedia.org/w/index.php?search=Hollywood) movies.
The notion of [*Six Degrees of Separation*](https://en.wikipedia.org/w/index.php?search=Six%20Degrees%20of%20Separation) was conjectured by a Hungarian network theorist in 1929, and later popularized by a play (and movie starring [Will Smith](https://en.wikipedia.org/w/index.php?search=Will%20Smith)). [Stanley Milgram](https://en.wikipedia.org/w/index.php?search=Stanley%20Milgram)’s famous letter\-mailing [*small\-world*](https://en.wikipedia.org/w/index.php?search=small-world) experiment supposedly lent credence to the idea that all people are connected by relatively few “social hops” (Travers and Milgram 1969\). That is, we are all part of a social network with a relatively small diameter (as small as 6\).
Two popular incarnations of these ideas are the notion of an [*Erdős number*](https://en.wikipedia.org/w/index.php?search=Erdős%20number) and the [Kevin Bacon game](http://oracleofbacon.org/).
The question in each case is the same: How many hops are you away from [Paul Erdős](https://en.wikipedia.org/w/index.php?search=Paul%20Erdős) (or [Kevin Bacon](https://en.wikipedia.org/w/index.php?search=Kevin%20Bacon))? The former is popular among academics (mathematicians especially), where edges are defined by co\-authored papers.
Ben’s Erdős number is three, since he has co\-authored a paper with [Amotz Bar–Noy](https://en.wikipedia.org/w/index.php?search=Amotz%20Bar--Noy), who has co\-authored a paper with [Noga Alon](https://en.wikipedia.org/w/index.php?search=Noga%20Alon), who co\-authored a paper with Erdős.
According to [MathSciNet](http://www.ams.org/mathscinet/collaborationDistance.html), Nick’s Erdős number is four (through Ben given (B. S. Baumer et al. 2014\); but also through [Nan Laird](https://en.wikipedia.org/w/index.php?search=Nan%20Laird), [Fred Mosteller](https://en.wikipedia.org/w/index.php?search=Fred%20Mosteller), and [Persi Diaconis](https://en.wikipedia.org/w/index.php?search=Persi%20Diaconis)), and Danny’s is four (also through Ben).
These data reflect the reality that Ben’s research is “closer” to Erdős’s, since he has written about network science (Bogdanov et al. 2013; Ben S. Baumer et al. 2015; Basu et al. 2015; B. Baumer, Basu, and Bar\-Noy 2011\) and graph theory (Benjamin S. Baumer, Wei, and Bloom 2016\).
Similarly, the idea is that in principle, every actor in Hollywood can be connected to Kevin Bacon in at most six movie hops.
We’ll explore this idea using the [*Internet Movie Database*](https://en.wikipedia.org/w/index.php?search=Internet%20Movie%20Database) (IMDB.com 2013\).
### 20\.2\.1 Collecting Hollywood data
We will populate a [*Hollywood*](https://en.wikipedia.org/w/index.php?search=Hollywood) network using actors in the IMDb.
In this network, each actor is a node, and two actors share an edge if they have ever appeared in a movie together.
Our goal will be to determine the centrality of [Kevin Bacon](https://en.wikipedia.org/w/index.php?search=Kevin%20Bacon).
First, we want to determine the edges, since we can then look up the node information based on the edges that are present.
One caveat is that these networks can grow very rapidly (since the number of edges is \\(O(n^2\)\\), where \\(n\\) is the number of vertices).
For this example, we will be conservative by including popular (at least 150,000 ratings) feature films (i.e., `kind_id` equal to `1`) in 2012, and we will consider only the top\-20 credited roles in each film.
To retrieve the list of edges, we need to consider all possible cast assignment pairs.
To get this list, we start by forming all total pairs using the `CROSS JOIN` operation in MySQL (see Chapter [15](ch-sql.html#ch:sql)), which has no direct **dplyr** equivalent.
Thus, in this case we will have to actually write the SQL code.
Note that we filter this list down to the unique pairs, which we can do by only including pairs where `person_id` from the first table is strictly less than `person_id` from the second table.
The result of the following query will come into **R** as the object `E`.
```
library(mdsr)
db <- dbConnect_scidb("imdb")
```
```
SELECT a.person_id AS src, b.person_id AS dest,
a.movie_id,
a.nr_order * b.nr_order AS weight,
t.title, idx.info AS ratings
FROM imdb.cast_info AS a
CROSS JOIN imdb.cast_info AS b USING (movie_id)
LEFT JOIN imdb.title AS t ON a.movie_id = t.id
LEFT JOIN imdb.movie_info_idx AS idx ON idx.movie_id = a.movie_id
WHERE t.production_year = 2012 AND t.kind_id = 1
AND info_type_id = 100 AND idx.info > 150000
AND a.nr_order <= 20 AND b.nr_order <= 20
AND a.role_id IN (1,2) AND b.role_id IN (1,2)
AND a.person_id < b.person_id
GROUP BY src, dest, movie_id
```
```
E <- E %>%
mutate(ratings = parse_number(ratings))
glimpse(E)
```
```
Rows: 10,223
Columns: 6
$ src <int> 6388, 6388, 6388, 6388, 6388, 6388, 6388, 6388, 6388, 638…
$ dest <int> 405570, 445466, 688358, 722062, 830618, 838704, 960997, 1…
$ movie_id <int> 4590482, 4590482, 4590482, 4590482, 4590482, 4590482, 459…
$ weight <dbl> 52, 13, 143, 234, 260, 208, 156, 247, 104, 130, 26, 182, …
$ title <chr> "Zero Dark Thirty", "Zero Dark Thirty", "Zero Dark Thirty…
$ ratings <dbl> 231992, 231992, 231992, 231992, 231992, 231992, 231992, 2…
```
We have also computed a `weight` variable that we can use to weight the edges in the resulting graph. In this case, the `weight` is based on the order in which each actor appears in the credits. So a ranking of `1` means that the actor had top billing. These weights will be useful because a higher order in the credits usually means more screen time.
```
E %>%
summarize(
num_rows = n(),
num_titles = n_distinct(title)
)
```
```
num_rows num_titles
1 10223 55
```
Our query resulted in 10,223 actor\-to\-actor connections across 55 films. We can see that [*Batman: The Dark Knight Rises*](https://en.wikipedia.org/w/index.php?search=Batman:%20The%20Dark%20Knight%20Rises) received the most user ratings on IMDb.
```
movies <- E %>%
group_by(movie_id) %>%
summarize(title = max(title), N = n(), numRatings = max(ratings)) %>%
arrange(desc(numRatings))
movies
```
```
# A tibble: 55 × 4
movie_id title N numRatings
<int> <chr> <int> <dbl>
1 4339115 The Dark Knight Rises 190 1258255
2 3519403 Django Unchained 190 1075891
3 4316706 The Avengers 190 1067306
4 4368646 The Hunger Games 190 750674
5 4366574 The Hobbit: An Unexpected Journey 190 681957
6 4224391 Silver Linings Playbook 190 577500
7 4231376 Skyfall 190 557652
8 4116220 Prometheus 190 504980
9 4300124 Ted 190 504893
10 3298411 Argo 190 493001
# … with 45 more rows
```
Next, we should gather some information about the vertices in this graph. We could have done this with another `JOIN` in the original query, but doing it now will be more efficient. (Why? See the cross\-join exercise.) In this case, all we need is each actor’s name and IMDb identifier.
```
actor_ids <- unique(c(E$src, E$dest))
V <- db %>%
tbl("name") %>%
filter(id %in% actor_ids) %>%
select(actor_id = id, actor_name = name) %>%
collect() %>%
arrange(actor_id) %>%
mutate(id = row_number())
glimpse(V)
```
```
Rows: 1,010
Columns: 3
$ actor_id <int> 6388, 6897, 8462, 16644, 17039, 18760, 28535, 33799, 42…
$ actor_name <chr> "Abkarian, Simon", "Aboutboul, Alon", "Abtahi, Omid", "…
$ id <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, …
```
### 20\.2\.2 Building the Hollywood network
To build a graph, we specify the edges, whether we want them to be directed, and add information about the vertices.
```
edges <- E %>%
left_join(select(V, from = id, actor_id), by = c("src" = "actor_id")) %>%
left_join(select(V, to = id, actor_id), by = c("dest" = "actor_id"))
g <- tbl_graph(nodes = V, directed = FALSE, edges = edges)
summary(g)
```
```
IGRAPH 62d5731 U-W- 1010 10223 --
+ attr: actor_id (v/n), actor_name (v/c), id (v/n), src (e/n), dest
| (e/n), movie_id (e/n), weight (e/n), title (e/c), ratings (e/n)
```
From the `summary()` command above, we can see that we have 1,010 actors and 10,223 edges between them. Note that we have associated metadata with each edge: namely, information about the movie that gave rise to the edge, and the aforementioned `weight` metric based on the order in the credits where each actor appeared. (The idea is that top\-billed stars are likely to appear on screen longer, and thus have more meaningful interactions with more of the cast.)
With our network intact, we can visualize it. There are *many* graphical parameters that you may wish to set, and the default choices are not always good. In this case we have 1,010 vertices, so we’ll make them small, and omit labels.
Figure [20\.4](ch-netsci.html#fig:hollywood) displays the results.
```
ggraph(g, 'drl') +
geom_edge_fan(width = 0.1) +
geom_node_point(color = "dodgerblue") +
theme_void()
```
Figure 20\.4: Visualization of Hollywood network for popular 2012 movies.
It is easy to see the clusters based on movies, but you can also see a few actors who have appeared in multiple movies, and how they tend to be more “central” to the network. If an actor has appeared in multiple movies, then it stands to reason that they will have more connections to other actors. This is captured by degree centrality.
```
g <- g %>%
mutate(degree = centrality_degree())
g %>%
as_tibble() %>%
arrange(desc(degree)) %>%
head()
```
```
# A tibble: 6 × 4
actor_id actor_name id degree
<int> <chr> <int> <dbl>
1 502126 Cranston, Bryan 113 57
2 891094 Gordon-Levitt, Joseph 228 57
3 975636 Hardy, Tom 257 57
4 1012171 Hemsworth, Chris 272 57
5 1713855 Neeson, Liam 466 57
6 1114312 Ivanek, Zeljko 304 56
```
There are a number of big name actors on this list who appeared in multiple movies in 2012\. Why does [Bryan Cranston](http://m.imdb.com/name/nm0186505/filmotype/actor?ref_=m_nmfm_1) have so many connections? The following quick function will retrieve the list of movies for a particular actor.
```
show_movies <- function(g, id) {
g %>%
activate(edges) %>%
as_tibble() %>%
filter(src == id | dest == id) %>%
group_by(movie_id) %>%
summarize(title = first(title), num_connections = n())
}
show_movies(g, 502126)
```
```
# A tibble: 3 × 3
movie_id title num_connections
<int> <chr> <int>
1 3298411 Argo 19
2 3780482 John Carter 19
3 4472483 Total Recall 19
```
Cranston appeared in all three of these movies.
Note, however, that the distribution of degrees is not terribly smooth (see Figure [20\.5](ch-netsci.html#fig:crans)). That is, the number of connections that each actor has appears to be limited to a few discrete possibilities. Can you think of why that might be?
```
ggplot(data = enframe(igraph::degree(g)), aes(x = value)) +
geom_density(size = 2)
```
Figure 20\.5: Distribution of degrees for actors in the Hollywood network of popular 2012 movies.
We use the **ggraph** package, which provides `geom_node_*()` and `geom_edge_*()` functions for plotting graphs directly with **ggplot2**. (Alternative plotting packages include **ggnetwork**, **geomnet**, and **GGally**.)
```
hollywood <- ggraph(g, layout = 'drl') +
geom_edge_fan(aes(alpha = weight), color = "lightgray") +
geom_node_point(aes(color = degree), alpha = 0.6) +
scale_edge_alpha_continuous(range = c(0, 1)) +
scale_color_viridis_c() +
theme_void()
```
We don’t want to show vertex labels for everyone, because that would result in an unreadable mess. However, it would be nice to see the highly central actors. Figure [20\.6](ch-netsci.html#fig:ggplot-network) shows our completed plot.
The transparency of the edges is scaled relative to the `weight` measure that we computed earlier.
Since there are so many edges, we use `scale_edge_alpha_continuous()` to keep them faint, and we use `geom_node_label()` with a `filter` aesthetic so that only actors whose degree exceeds 40 are labeled.
```
hollywood +
geom_node_label(
aes(
filter = degree > 40,
label = str_replace_all(actor_name, ", ", ",\n")
),
repel = TRUE
)
```
Figure 20\.6: The Hollywood network for popular 2012 movies. Color is mapped to degree centrality.
### 20\.2\.3 Building a Kristen Stewart oracle
Degree centrality does not take into account the weights on the edges. If we want to emphasize the pathways through leading actors, we could consider [*betweenness centrality*](https://en.wikipedia.org/w/index.php?search=betweenness%20centrality).
```
g <- g %>%
mutate(btw = centrality_betweenness(weights = weight, normalized = TRUE))
g %>%
as_tibble() %>%
arrange(desc(btw)) %>%
head(10)
```
```
# A tibble: 10 × 5
actor_id actor_name id degree btw
<int> <chr> <int> <dbl> <dbl>
1 3945132 Stewart, Kristen 964 38 0.236
2 891094 Gordon-Levitt, Joseph 228 57 0.217
3 3346548 Kendrick, Anna 857 38 0.195
4 135422 Bale, Christian 27 19 0.179
5 76481 Ansari, Aziz 15 19 0.176
6 558059 Day-Lewis, Daniel 135 19 0.176
7 1318021 LaBeouf, Shia 363 19 0.156
8 2987679 Dean, Ester 787 38 0.152
9 2589137 Willis, Bruce 694 56 0.141
10 975636 Hardy, Tom 257 57 0.134
```
```
show_movies(g, 3945132)
```
```
# A tibble: 2 × 3
movie_id title num_connections
<int> <chr> <int>
1 4237818 Snow White and the Huntsman 19
2 4436842 The Twilight Saga: Breaking Dawn - Part 2 19
```
Notice that [Kristen Stewart](https://en.wikipedia.org/w/index.php?search=Kristen%20Stewart) has the highest betweenness centrality, while [Joseph Gordon\-Levitt](https://en.wikipedia.org/w/index.php?search=Joseph%20Gordon-Levitt) and [Tom Hardy](https://en.wikipedia.org/w/index.php?search=Tom%20Hardy) (and others) have the highest degree centrality.
Moreover, [Christian Bale](https://en.wikipedia.org/w/index.php?search=Christian%20Bale) has the third\-highest betweenness centrality despite appearing in only one movie. This is because he played the lead in *The Dark Knight Rises*, the movie responsible for the most edges. Most shortest paths through *The Dark Knight Rises* pass through [Christian Bale](https://en.wikipedia.org/w/index.php?search=Christian%20Bale).
If Kristen Stewart (`imdbId` `3945132`) is very central to this network, then perhaps instead of a Bacon number, we could consider a Stewart number.
[Charlize Theron](https://en.wikipedia.org/w/index.php?search=Charlize%20Theron)’s Stewart number is obviously 1, since they appeared in [*Snow White and the Huntsman*](https://en.wikipedia.org/w/index.php?search=Snow%20White%20and%20the%20Huntsman) together:
```
ks <- V %>%
filter(actor_name == "Stewart, Kristen")
ct <- V %>%
filter(actor_name == "Theron, Charlize")
g %>%
convert(to_shortest_path, from = ks$id, to = ct$id)
```
```
# A tbl_graph: 2 nodes and 1 edges
#
# An unrooted tree
#
# Node Data: 2 × 6 (active)
actor_id actor_name id degree btw .tidygraph_node_index
<int> <chr> <int> <dbl> <dbl> <int>
1 3945132 Stewart, Kristen 964 38 0.236 964
2 3990819 Theron, Charlize 974 38 0.0940 974
#
# Edge Data: 1 × 9
from to src dest movie_id weight title ratings .tidygraph_edge…
<int> <int> <int> <int> <int> <dbl> <chr> <dbl> <int>
1 1 2 3945132 3.99e6 4237818 3 Snow … 243824 10198
```
On the other hand, her distance from [Joseph Gordon\-Levitt](https://en.wikipedia.org/w/index.php?search=Joseph%20Gordon-Levitt) is 5\. The interpretation here is that [Joseph Gordon\-Levitt](https://en.wikipedia.org/w/index.php?search=Joseph%20Gordon-Levitt) was in [*Batman: The Dark Knight Rises*](https://en.wikipedia.org/w/index.php?search=Batman:%20The%20Dark%20Knight%20Rises) with [Tom Hardy](https://en.wikipedia.org/w/index.php?search=Tom%20Hardy), who was in [*Lawless*](https://en.wikipedia.org/w/index.php?search=Lawless) with [Guy Pearce](https://en.wikipedia.org/w/index.php?search=Guy%20Pearce), who was in [*Prometheus*](https://en.wikipedia.org/w/index.php?search=Prometheus) with [Charlize Theron](https://en.wikipedia.org/w/index.php?search=Charlize%20Theron), who was in [*Snow White and the Huntsman*](https://en.wikipedia.org/w/index.php?search=Snow%20White%20and%20the%20Huntsman) with [Kristen Stewart](https://en.wikipedia.org/w/index.php?search=Kristen%20Stewart).
We show this graphically in Figure [20\.7](ch-netsci.html#fig:ks).
```
set.seed(47)
jgl <- V %>%
filter(actor_name == "Gordon-Levitt, Joseph")
h <- g %>%
convert(to_shortest_path, from = jgl$id, to = ks$id, weights = NA)
h %>%
ggraph('gem') +
geom_node_point() +
geom_node_label(aes(label = actor_name)) +
geom_edge_fan2(aes(label = title)) +
coord_cartesian(clip = "off") +
theme(plot.margin = margin(6, 36, 6, 36))
```
Figure 20\.7: Subgraph showing a shortest path through the Hollywood network from Joseph Gordon\-Levitt to Kristen Stewart.
Note, however, that these shortest paths are not unique. In fact, there are 9 shortest paths between [Kristen Stewart](https://en.wikipedia.org/w/index.php?search=Kristen%20Stewart) and [Joseph Gordon\-Levitt](https://en.wikipedia.org/w/index.php?search=Joseph%20Gordon-Levitt), each having a length of 5\.
```
igraph::all_shortest_paths(g, from = ks$id, to = jgl$id, weights = NA) %>%
pluck("res") %>%
length()
```
```
[1] 9
```
As we saw in Figure [20\.6](ch-netsci.html#fig:ggplot-network), our Hollywood network is not connected, and thus its diameter is infinite. However, the diameter of the largest connected component can be computed. This number (in this case, 10\) indicates how many hops separate the two most distant actors in the network.
```
igraph::diameter(g, weights = NA)
```
```
[1] 10
```
```
g %>%
mutate(eccentricity = node_eccentricity()) %>%
filter(actor_name == "Stewart, Kristen")
```
```
# A tbl_graph: 1 nodes and 0 edges
#
# An unrooted tree
#
# Node Data: 1 × 6 (active)
actor_id actor_name id degree btw eccentricity
<int> <chr> <int> <dbl> <dbl> <dbl>
1 3945132 Stewart, Kristen 964 38 0.236 6
#
# Edge Data: 0 × 8
# … with 8 variables: from <int>, to <int>, src <int>, dest <int>,
# movie_id <int>, weight <dbl>, title <chr>, ratings <dbl>
```
On the other hand, we note that [Kristen Stewart](https://en.wikipedia.org/w/index.php?search=Kristen%20Stewart)’s eccentricity is 6\. This means that there is no actor in the connected part of the network who is more than 6 hops away from [Kristen Stewart](https://en.wikipedia.org/w/index.php?search=Kristen%20Stewart).
Six degrees of separation indeed!
20\.3 PageRank
--------------
For many readers, it may be difficult (or impossible) to remember what search engines on the Web were like before Google. Search engines such as [*Altavista*](https://en.wikipedia.org/w/index.php?search=Altavista), [*Web Crawler*](https://en.wikipedia.org/w/index.php?search=Web%20Crawler), [*Excite*](https://en.wikipedia.org/w/index.php?search=Excite), and [*Yahoo!*](https://en.wikipedia.org/w/index.php?search=Yahoo!) vied for supremacy, but none returned results that were of comparable use to the ones we get today. Frequently, finding what you wanted required sifting through pages of slow\-to\-load links.
Consider the search problem. A user types in a [*search query*](https://en.wikipedia.org/w/index.php?search=search%20query) consisting of one or more words or terms. Then the search engine produces an ordered list of Web pages ranked by their relevance to that search query. How would you instruct a computer to determine the relevance of a Web page to a query?
This problem is not trivial. Most pre\-Google search engines worked by categorizing the words on every Web page, and then determining—based on the search query—which pages were most relevant to that query.
One problem with this approach is that it relies on each Web designer to have the words on its page accurately reflect the content. Naturally, advertisers could easily manipulate search engines by loading their pages with popular search terms, written in the same color as the background (making them invisible to the user), regardless of whether those words were related to the actual content of the page. So naïve search engines might rank these pages more highly, even though they were not relevant to the user.
Google conquered search by thinking about the problem in a fundamentally different way and taking advantage of the network structure of the World Wide Web. The web is a directed graph, in which each webpage (URL) is a node, and edges reflect links from one webpage to another. In 1998, [Sergey Brin](https://en.wikipedia.org/w/index.php?search=Sergey%20Brin) and [Larry Page](https://en.wikipedia.org/w/index.php?search=Larry%20Page)—while computer science graduate students at [*Stanford University*](https://en.wikipedia.org/w/index.php?search=Stanford%20University)—developed a centrality measure called [*PageRank*](https://en.wikipedia.org/w/index.php?search=PageRank) that formed the basis of Google’s search algorithms (Page et al. 1999\). The algorithm led to search results that were so much better than those of its competitors that Google quickly swallowed the entire search market, and is now one of the world’s largest companies. The key insight was that one could use the directed links on the Web as a means of “voting” in a way that was much more difficult to exploit. That is, advertisers could only control links on their pages, but not links to their pages from other sites.
### 20\.3\.1 Eigenvector centrality
Computing PageRank is a rather simple exercise in linear algebra. It is an example of a [*Markov process*](https://en.wikipedia.org/w/index.php?search=Markov%20process). Suppose there are \\(n\\) webpages on the Web. Let \\(\\mathbf{v}\_0 \= \\mathbf{1}/n\\) be a vector that gives the initial probability that a randomly chosen Web surfer will be on any given page. In the absence of any information about this user, there is an equal probability that they might be on any page.
But for each of these \\(n\\) webpages, we also know to which pages it links. These are outgoing directed edges in the Web graph. We assume that a random surfer will follow each link with equal probability, so if there are \\(m\_i\\) outgoing links on the \\(i^{th}\\) webpage, then the probability that the random surfer goes from page \\(i\\) to page \\(j\\) is \\(p\_{ij} \= 1 / m\_i\\). Note that if the \\(i^{th}\\) page doesn’t link to the \\(j^{th}\\) page, then \\(p\_{ij} \= 0\\). In this manner, we can form the \\(n \\times n\\) [*transition matrix*](https://en.wikipedia.org/w/index.php?search=transition%20matrix) \\(\\mathbf{P}\\), wherein each entry describes the probability of moving from page \\(i\\) to page \\(j\\).
The product \\(\\mathbf{P} \\mathbf{v}\_0 \= \\mathbf{v}\_1\\) is a vector where \\(v\_{1i}\\) indicates the probability of being at the \\(i^{th}\\) webpage, after picking a webpage uniformly at random to start, and then clicking on one link chosen at random (with equal probability). The product \\(\\mathbf{P} \\mathbf{v}\_1 \= \\mathbf{P}^2 \\mathbf{v}\_0\\) gives us the probabilities after two clicks, etc.
It can be shown mathematically that if we continue to iterate this process, then we will arrive at a [*stationary distribution*](https://en.wikipedia.org/w/index.php?search=stationary%20distribution) \\(\\mathbf{v}^\*\\) that reflects the long\-term probability of being on any given page.
Each entry in that vector then represents the popularity of the corresponding webpage—\\(\\mathbf{v}^\*\\) is the PageRank of each webpage.[40](#fn40)
Because \\(\\mathbf{v}^\*\\) is an eigenvector of the transition matrix (since \\(\\mathbf{P} \\mathbf{v}^\* \= \\mathbf{v}^\*\\)), this measure of centrality is known as [*eigenvector centrality*](https://en.wikipedia.org/w/index.php?search=eigenvector%20centrality).
It was, in fact, developed earlier, but Page and Brin were the first to apply the idea to the World Wide Web for the purpose of search.
The success of PageRank has led to its being applied in a wide variety of contexts—virtually any problem in which a ranking measure on a network setting is feasible.
In addition to the college team sports example below, applications of PageRank include: scholarly citations (e.g., [eigenfactor.org](http://www.eigenfactor.org/)), doctoral programs, protein networks, and lexical semantics.
Another metaphor that may be helpful in understanding PageRank is that of movable mass. That is, suppose that there is a certain amount of mass in a network. The initial vector \\(\\mathbf{v}\_0\\) models a uniform distribution of that mass over the vertices. That is, \\(1/n\\) of the total mass is located on each vertex. The transition matrix \\(\\mathbf{P}\\) models that mass flowing through the network according to the weights on each edge. After a while, the mass will “settle” on the vertices, but in a non\-uniform distribution. The node that has accumulated the most mass has the largest PageRank.
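To make this process concrete, here is a minimal sketch (not from the text) that carries out the iteration described above on a hypothetical four\-page web: build the transition matrix, start from the uniform distribution, and multiply repeatedly until the mass settles. Note that the `centrality_pagerank()` function used later also incorporates a damping factor, so its values will differ slightly from this undamped version.
```
# A hypothetical four-page web: entry [i, j] of A is 1 if page i links to page j
A <- matrix(
  c(0, 1, 1, 0,
    0, 0, 1, 0,
    1, 0, 0, 1,
    0, 0, 1, 0),
  nrow = 4, byrow = TRUE
)

# Transition matrix: divide each row by its number of outgoing links, then
# transpose so that P %*% v pushes the probability mass forward by one click
P <- t(A / rowSums(A))

v <- rep(1 / nrow(A), nrow(A))  # v_0: uniform starting distribution
for (i in 1:100) {              # iterate the Markov process
  v <- P %*% v
}
round(as.vector(v), 3)          # the stationary distribution: each page's PageRank
```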
20\.4 Extended example: 1996 men’s college basketball
-----------------------------------------------------
Every March (with the exception of 2020, due to COVID\-19\), the attention of many sports fans and college students is captured by the NCAA basketball tournament, which pits 68 of the best teams against each other in a winner\-take\-all, single\-elimination tournament.
(A [*tournament*](https://en.wikipedia.org/w/index.php?search=tournament) is a special type of directed graph.)
However, each team in the tournament is seeded based on their performance during the regular season.
These seeds are important, since getting a higher seed can mean an easier path through the tournament.
Moreover, a tournament berth itself can mean millions of dollars in revenue to a school’s basketball program. Finally, predicting the outcome of the tournament has become something of a sport unto itself.
Kaggle has held a machine learning (see Chapters [11](ch-learningI.html#ch:learningI) and [12](ch-learningII.html#ch:learningII)) competition each spring to solicit these predictions. We will use their data to build a PageRank metric for team strength for the 1995–1996 regular season (the best season in the history of the [*University of Massachusetts*](https://en.wikipedia.org/w/index.php?search=University%20of%20Massachusetts)). To do this, we will build a directed graph whereby each team is a node, and each game creates a directed edge from the losing team to the winning team, which can be weighted based on the margin of victory. The PageRank in such a network is a measure of each team’s strength.
First, we need to download the game\-by\-game results and a lookup table that translates the team IDs into school names. Note that Kaggle requires a sign\-in, so the code below may not work for you unless you first authenticate in your Web browser.
```
prefix <- "https://www.kaggle.com/c/march-machine-learning-mania-2015"
url_teams <- paste(prefix, "download/teams.csv", sep = "/")
url_games <- paste(
prefix,
"download/regular_season_compact_results.csv", sep = "/"
)
download.file(url_teams, destfile = "data/teams.csv")
download.file(url_games, destfile = "data/games.csv")
```
Next, we will load this data and `filter()` to select just the 1996 season.
```
library(mdsr)
teams <- readr::read_csv("data/teams.csv")
games <- readr::read_csv("data/games.csv") %>%
filter(season == 1996)
dim(games)
```
```
[1] 4122 8
```
Since the basketball schedule is very unbalanced (each team does not play the same number of games against each other team), margin of victory seems like an important factor in determining how much better one team is than another. We will use the ratio of the winning team’s score to the losing team’s score as an edge weight.
```
E <- games %>%
mutate(score_ratio = wscore/lscore) %>%
select(lteam, wteam, score_ratio)
V <- teams %>%
filter(team_id %in% unique(c(E$lteam, E$wteam)))
g <- igraph::graph_from_data_frame(E, directed = TRUE, vertices = V) %>%
as_tbl_graph() %>%
mutate(team_id = parse_number(name))
summary(g)
```
```
IGRAPH fa6df40 DN-- 305 4122 --
+ attr: name (v/c), team_name (v/c), team_id (v/n), score_ratio
| (e/n)
```
Our graph for this season contains 305 teams, who played a total of 4,122 games.
The **tidygraph** package contains a `centrality_pagerank()` function that will compute PageRank for us.
In the results below, we can see that by this measure, [*George Washington University*](https://en.wikipedia.org/w/index.php?search=George%20Washington%20University) was the highest\-ranked team, followed by UMass and [*Georgetown*](https://en.wikipedia.org/w/index.php?search=Georgetown).
In reality, the 7th\-ranked team, Kentucky, won the tournament by beating [*Syracuse*](https://en.wikipedia.org/w/index.php?search=Syracuse), the 16th\-ranked team.
All four semifinalists (Kentucky, Syracuse, UMass, and Mississippi State) ranked in the top\-16 according to PageRank, and all 8 quarterfinalists (also including Wake Forest, Kansas, Georgetown, and Cincinnati) were in the top\-20\.
Thus, assessing team strength by computing PageRank on regular season results would have made for a high\-quality prediction of the postseason results.
```
g <- g %>%
mutate(pagerank = centrality_pagerank())
g %>%
as_tibble() %>%
arrange(desc(pagerank)) %>%
head(20)
```
```
# A tibble: 20 × 4
name team_name team_id pagerank
<chr> <chr> <dbl> <dbl>
1 1203 G Washington 1203 0.0219
2 1269 Massachusetts 1269 0.0205
3 1207 Georgetown 1207 0.0164
4 1234 Iowa 1234 0.0143
5 1163 Connecticut 1163 0.0141
6 1437 Villanova 1437 0.0131
7 1246 Kentucky 1246 0.0127
8 1345 Purdue 1345 0.0115
9 1280 Mississippi St 1280 0.0114
10 1210 Georgia Tech 1210 0.0106
11 1112 Arizona 1112 0.0103
12 1448 Wake Forest 1448 0.0101
13 1242 Kansas 1242 0.00992
14 1336 Penn St 1336 0.00975
15 1185 E Michigan 1185 0.00971
16 1393 Syracuse 1393 0.00956
17 1266 Marquette 1266 0.00944
18 1314 North Carolina 1314 0.00942
19 1153 Cincinnati 1153 0.00940
20 1396 Temple 1396 0.00860
```
Note that these rankings are very different from those obtained by simply assessing each team’s record and winning percentage, since PageRank implicitly considers *who beat whom*, and by how much. Using won–loss record alone, UMass was the best team, with a 31–1 record, while Kentucky was 4th at 28–2\.
```
wins <- E %>%
group_by(wteam) %>%
summarize(W = n())
losses <- E %>%
group_by(lteam) %>%
summarize(L = n())
g <- g %>%
left_join(wins, by = c("team_id" = "wteam")) %>%
left_join(losses, by = c("team_id" = "lteam")) %>%
mutate(win_pct = W / (W + L))
g %>%
as_tibble() %>%
arrange(desc(win_pct)) %>%
head(20)
```
```
# A tibble: 20 × 7
name team_name team_id pagerank W L win_pct
<chr> <chr> <dbl> <dbl> <int> <int> <dbl>
1 1269 Massachusetts 1269 0.0205 31 1 0.969
2 1403 Texas Tech 1403 0.00548 28 1 0.966
3 1163 Connecticut 1163 0.0141 30 2 0.938
4 1246 Kentucky 1246 0.0127 28 2 0.933
5 1180 Drexel 1180 0.00253 25 3 0.893
6 1453 WI Green Bay 1453 0.00438 24 3 0.889
7 1158 Col Charleston 1158 0.00190 22 3 0.88
8 1307 New Mexico 1307 0.00531 26 4 0.867
9 1153 Cincinnati 1153 0.00940 25 4 0.862
10 1242 Kansas 1242 0.00992 25 4 0.862
11 1172 Davidson 1172 0.00237 22 4 0.846
12 1345 Purdue 1345 0.0115 25 5 0.833
13 1448 Wake Forest 1448 0.0101 23 5 0.821
14 1185 E Michigan 1185 0.00971 22 5 0.815
15 1439 Virginia Tech 1439 0.00633 22 5 0.815
16 1437 Villanova 1437 0.0131 25 6 0.806
17 1112 Arizona 1112 0.0103 24 6 0.8
18 1428 Utah 1428 0.00613 23 6 0.793
19 1265 Marist 1265 0.00260 22 6 0.786
20 1114 Ark Little Rock 1114 0.00429 21 6 0.778
```
```
g %>%
as_tibble() %>%
summarize(pr_wpct_cor = cor(pagerank, win_pct, use = "complete.obs"))
```
```
# A tibble: 1 × 1
pr_wpct_cor
<dbl>
1 0.639
```
While PageRank and winning percentage are moderately correlated, PageRank recognizes that, for example, [*Texas Tech*](https://en.wikipedia.org/w/index.php?search=Texas%20Tech)’s 28\-1 record did not even make them a top\-20 team. Georgetown beat Texas Tech in the quarterfinals.
This particular graph has some interesting features. First, UMass beat Kentucky in their first game of the season.
```
E %>%
filter(wteam == 1269 & lteam == 1246)
```
```
# A tibble: 1 × 3
lteam wteam score_ratio
<dbl> <dbl> <dbl>
1 1246 1269 1.12
```
This helps to explain why UMass has a higher PageRank than Kentucky, since the only edge between them points to UMass. Sadly, Kentucky beat UMass in the semifinal round of the tournament—but that game is not present in this regular season data set.
Secondly, George Washington finished the regular season 21–7, yet they had the highest PageRank in the country. How could this have happened? In this case, George Washington was the only team to beat UMass in the regular season. Even though the two teams split their season series, this allows much of the mass that flows to UMass to flow to George Washington.
```
E %>%
filter(lteam %in% c(1203, 1269) & wteam %in% c(1203, 1269))
```
```
# A tibble: 2 × 3
lteam wteam score_ratio
<dbl> <dbl> <dbl>
1 1269 1203 1.13
2 1203 1269 1.14
```
The national network is large and complex, and therefore we will focus on the [*Atlantic 10 conference*](https://en.wikipedia.org/w/index.php?search=Atlantic%2010%20conference) to illustrate how PageRank is actually computed. The A\-10 consisted of 12 teams in 1996\.
```
A_10 <- c("Massachusetts", "Temple", "G Washington", "Rhode Island",
"St Bonaventure", "St Joseph's PA", "Virginia Tech", "Xavier",
"Dayton", "Duquesne", "La Salle", "Fordham")
```
We can form an [*induced subgraph*](https://en.wikipedia.org/w/index.php?search=induced%20subgraph) of our national network that consists solely of vertices and edges among the A\-10 teams.
We will also compute PageRank on this network.
```
a10 <- g %>%
filter(team_name %in% A_10) %>%
mutate(pagerank = centrality_pagerank())
summary(a10)
```
```
IGRAPH 46da9dd DN-- 12 107 --
+ attr: name (v/c), team_name (v/c), team_id (v/n), pagerank (v/n),
| W (v/n), L (v/n), win_pct (v/n), score_ratio (e/n)
```
We visualize this network in Figure [20\.8](ch-netsci.html#fig:a10), where the size of each vertex is proportional to that team’s PageRank, and the transparency of each edge is based on the ratio of the scores in that game. We note that George Washington and UMass are the largest nodes, and that all but one of the edges connected to UMass point towards it.
```
library(ggraph)
ggraph(a10, layout = 'kk') +
geom_edge_arc(
aes(alpha = score_ratio), color = "lightgray",
arrow = arrow(length = unit(0.2, "cm")),
end_cap = circle(1, 'cm'),
strength = 0.2
) +
geom_node_point(aes(size = pagerank, color = pagerank), alpha = 0.6) +
geom_node_label(aes(label = team_name), repel = TRUE) +
scale_alpha_continuous(range = c(0.4, 1)) +
scale_size_continuous(range = c(1, 10)) +
guides(
color = guide_legend("PageRank"),
size = guide_legend("PageRank")
) +
theme_void()
```
Figure 20\.8: Atlantic 10 Conference network, NCAA men’s basketball, 1995–1996\.
Now, let’s compute PageRank for this network using nothing but matrix multiplication. First, we need to get the transition matrix for the graph. This is the same thing as the [*adjacency matrix*](https://en.wikipedia.org/w/index.php?search=adjacency%20matrix), with the entries weighted by the score ratios.
```
P <- a10 %>%
igraph::as_adjacency_matrix(sparse = FALSE, attr = "score_ratio") %>%
t()
```
However, entries in \\(\\mathbf{P}\\) need to be probabilities, and thus they need to be normalized so that each column sums to 1\. We can achieve this using the `scale()` function.
```
P <- scale(P, center = FALSE, scale = colSums(P))
round(P, 2)
```
```
1173 1182 1200 1203 1247 1269 1348 1382 1386 1396 1439 1462
1173 0.00 0.09 0.00 0.00 0.09 0 0.14 0.11 0.00 0.00 0.00 0.16
1182 0.10 0.00 0.10 0.00 0.10 0 0.00 0.00 0.00 0.00 0.00 0.00
1200 0.11 0.00 0.00 0.00 0.09 0 0.00 0.00 0.00 0.00 0.00 0.00
1203 0.11 0.10 0.10 0.00 0.10 1 0.14 0.11 0.17 0.37 0.27 0.15
1247 0.00 0.09 0.00 0.25 0.00 0 0.00 0.12 0.00 0.00 0.00 0.00
1269 0.12 0.09 0.13 0.26 0.11 0 0.14 0.12 0.16 0.34 0.25 0.15
1348 0.00 0.11 0.11 0.00 0.12 0 0.00 0.12 0.16 0.29 0.21 0.18
1382 0.11 0.09 0.13 0.00 0.00 0 0.14 0.00 0.00 0.00 0.00 0.00
1386 0.11 0.10 0.10 0.24 0.09 0 0.14 0.11 0.00 0.00 0.00 0.00
1396 0.12 0.15 0.12 0.00 0.12 0 0.16 0.10 0.16 0.00 0.27 0.19
1439 0.12 0.09 0.12 0.25 0.09 0 0.14 0.11 0.17 0.00 0.00 0.17
1462 0.10 0.09 0.09 0.00 0.09 0 0.00 0.12 0.18 0.00 0.00 0.00
attr(,"scaled:scale")
1173 1182 1200 1203 1247 1269 1348 1382 1386 1396 1439 1462
10.95 11.64 11.91 4.39 11.64 1.13 7.66 10.56 6.54 3.65 5.11 6.95
```
One shortcoming of this construction is that our graph has multiple edges between pairs of vertices, since teams in the same conference usually play each other twice. Unfortunately, the **igraph** function `as_adjacency_matrix()` doesn’t handle this well:
> If the graph has multiple edges, the edge attribute of an arbitrarily chosen edge (for the multiple edges) is included.
Even though UMass beat Temple twice, only one of those edges (apparently chosen arbitrarily) will show up in the adjacency matrix. Note also that in the transition matrix shown above, the column labeled `1269` contains a one and eleven zeros. This indicates that the probability of UMass (`1269`) transitioning to George Washington (`1203`) is 1—since UMass’s only loss was to George Washington. This is not accurate, because the model doesn’t handle multiple edges in a sufficiently sophisticated way.
It is apparent from the matrix that
George Washington is nearly equally likely to move to La Salle, UMass, St. Joseph’s, and Virginia Tech—their four losses in the Atlantic 10\.
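One way to mitigate the multiple\-edge issue described above is to merge parallel edges before forming the transition matrix, for example by summing their score ratios. The sketch below is our own workaround, not part of the original analysis, and the names `a10_simple` and `P_simple` are ours; it relies on `igraph::simplify()`.

```
# Collapse parallel edges (pairs of teams with two games and the same
# winner) by summing their score ratios; other edge attributes are dropped.
a10_simple <- igraph::simplify(
  a10,
  edge.attr.comb = list(score_ratio = "sum", "ignore")
)
# Rebuild the column-normalized transition matrix from the merged edges.
P_simple <- a10_simple %>%
  igraph::as_adjacency_matrix(sparse = FALSE, attr = "score_ratio") %>%
  t()
P_simple <- scale(P_simple, center = FALSE, scale = colSums(P_simple))
```

This keeps the information from both games when, say, UMass beat Temple twice, rather than arbitrarily discarding one of the two edges. For consistency with the text, the computations that follow continue to use the original \\(\\mathbf{P}\\).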
Next, we’ll define the initial vector with uniform probabilities—each team has an initial value of 1/12\.
```
num_vertices <- nrow(as_tibble(a10))
v0 <- rep(1, num_vertices) / num_vertices
v0
```
```
[1] 0.0833 0.0833 0.0833 0.0833 0.0833 0.0833 0.0833 0.0833 0.0833 0.0833
[11] 0.0833 0.0833
```
To compute PageRank, we iteratively multiply the initial vector \\(\\mathbf{v}\_0\\) by the transition matrix \\(\\mathbf{P}\\). We’ll do 20 multiplications with a loop:
```
v <- v0
for (i in 1:20) {
v <- P %*% v
}
as.vector(v)
```
```
[1] 0.02552 0.01049 0.00935 0.28427 0.07319 0.17688 0.08206 0.01612 0.09253
[10] 0.08199 0.11828 0.02930
```
We find that the fourth vertex—George Washington—has the highest PageRank. Compare these with the values returned by the built\-in `page_rank()` function from **igraph**:
```
igraph::page_rank(a10)$vector
```
```
1173 1182 1200 1203 1247 1269 1348 1382 1386 1396
0.0346 0.0204 0.0193 0.2467 0.0679 0.1854 0.0769 0.0259 0.0870 0.0894
1439 1462
0.1077 0.0390
```
Why are they different? One limitation of PageRank as we’ve defined it is that there could be [*sinks*](https://en.wikipedia.org/w/index.php?search=sinks), or [*spider traps*](https://en.wikipedia.org/w/index.php?search=spider%20traps), in a network. These are individual nodes, or even a collection of nodes, out of which there are no outgoing edges. (UMass is nearly—but not quite—a spider trap in this network.) In this event, if random surfers find themselves in a spider trap, there is no way out, and all of the probability will end up in those vertices.
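To see the problem concretely, consider a toy network (our own illustration, not from the original analysis) in which node 1 links to nodes 2 and 3, but nodes 2 and 3 link only to each other. Once the random surfer enters the trap formed by nodes 2 and 3, there is no way back.

```
# Toy spider trap: no edges point back to node 1, so its probability
# drains away after a single step and never returns.
P_trap <- matrix(
  c(0, 0, 0,      # probability of landing on node 1
    1/2, 0, 1,    # probability of landing on node 2
    1/2, 1, 0),   # probability of landing on node 3
  nrow = 3, byrow = TRUE
)
v_trap <- rep(1/3, 3)
for (i in 1:20) {
  v_trap <- P_trap %*% v_trap
}
as.vector(v_trap)   # all of the probability is now stuck in nodes 2 and 3
```

No matter how many additional iterations we run, the first entry stays at zero.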
In practice, PageRank is modified by adding a [*random restart*](https://en.wikipedia.org/w/index.php?search=random%20restart).
This means that every so often, the random surfer simply picks up and starts over again.
The parameter that controls this in `page_rank()` is called `damping`, and it has a default value of 0\.85\.
If we set the `damping` argument to 1, corresponding to the matrix multiplication we did above, we get a little closer.
```
igraph::page_rank(a10, damping = 1)$vector
```
```
1173 1182 1200 1203 1247 1269 1348 1382 1386
0.02290 0.00778 0.00729 0.28605 0.07297 0.20357 0.07243 0.01166 0.09073
1396 1439 1462
0.08384 0.11395 0.02683
```
Alternatively, we can do the random walk again, but allow for random restarts.
```
w <- v0
d <- 0.85
for (i in 1:20) {
w <- d * P %*% w + (1 - d) * v0
}
as.vector(w)
```
```
[1] 0.0382 0.0231 0.0213 0.2453 0.0689 0.1601 0.0866 0.0302 0.0880 0.0872
[11] 0.1106 0.0407
```
```
igraph::page_rank(a10, damping = 0.85)$vector
```
```
1173 1182 1200 1203 1247 1269 1348 1382 1386 1396
0.0346 0.0204 0.0193 0.2467 0.0679 0.1854 0.0769 0.0259 0.0870 0.0894
1439 1462
0.1077 0.0390
```
Again, the results are not exactly the same, due to the arbitrary handling of multiple edges in the transition matrix \\(\\mathbf{P}\\) mentioned earlier, but they are quite close.
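As a quick sanity check (ours, not part of the original analysis), we can compare the two sets of scores directly; since both vectors follow the vertex order of `a10`, a simple correlation suffices.

```
# Compare the hand-rolled damped iteration with igraph's implementation.
cor(as.vector(w), igraph::page_rank(a10, damping = 0.85)$vector)
```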
20\.5 Further resources
-----------------------
There are two popular workhorse **R** packages for network analysis: **igraph** and **sna**. Both have large user bases and are actively developed. **igraph** also has bindings for Python and C (see Chapter [21](ch-big.html#ch:big)).
For more sophisticated graph visualization software, see [*Gephi*](https://en.wikipedia.org/w/index.php?search=Gephi).
In addition to **igraph**, the **ggnetwork**, **sna**, and **network** **R** packages are useful for working with graph objects.
[Albert\-László Barabási](https://en.wikipedia.org/w/index.php?search=Albert-László%20Barabási)’s book *Linked* is a popular introduction to network science (Barabási and Frangos 2014\). For a broader undergraduate textbook, see Easley and Kleinberg (2010\).
20\.6 Exercises
---------------
**Problem 1 (Medium)**: The following problem considers the U.S. airport network as a graph.
1. What information do you need to compute the PageRank of the U.S. airport network? Write an SQL query to retrieve this information for 2012\.
(Hint: use the `dbConnect_scidb` function to connect to the `airlines` database.)
2. Use the data you pulled from SQL and build the network as a *weighted* `tidygraph` object, where the weights are proportional to the frequency of flights between each pair of airports.
3. Compute the PageRank of each airport in your network. What are the top\-10 “most central” airports? Where does Oakland International Airport `OAK` rank?
4. Update the vertex attributes of your network with the geographic coordinates of each airport (available in the `airports` table).
5. Use `ggraph` to draw the airport network. Make the thickness or transparency of each edge proportional to its weight.
6. Overlay your airport network on a U.S. map (see the spatial data chapter).
7. Project the map and the airport network using the Lambert Conformal Conic projection.
8. Crop the map you created to zoom in on your local airport.
**Problem 2 (Hard)**: Let’s reconsider the Internet Movie Database (IMDb) example.
1. In the `CROSS JOIN` query in the movies example, how could we have modified the SQL query to include the actor’s and actresses’ names in the original query? Why would this have been less efficient from a computational and data storage point of view?
2. Expand the Hollywood network by going further back in time. If you go back to 2000, which actor/actress has the highest degree centrality? Betweenness centrality? Eigenvector centrality?
**Problem 3 (Hard)**: Use the `dbConnect_scidb` function to connect to the `airlines` database using the data from 2013 to answer the following problem. For a while, [Edward Snowden](https://en.wikipedia.org/wiki/Edward_Snowden) was trapped in a Moscow airport. Suppose that you were trapped not in *one* airport, but in *all* airports. If you were forced to randomly fly around the United States, where would you be most likely to end up?
20\.7 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-networks.html\#networks\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-networks.html#networks-online-exercises)
20\.2 Extended example: Six degrees of Kristen Stewart
------------------------------------------------------
In this extended example, we will explore a fun application of network science to [*Hollywood*](https://en.wikipedia.org/w/index.php?search=Hollywood) movies.
The notion of [*Six Degrees of Separation*](https://en.wikipedia.org/w/index.php?search=Six%20Degrees%20of%20Separation) was conjectured by a Hungarian network theorist in 1929, and later popularized by a play (and movie starring [Will Smith](https://en.wikipedia.org/w/index.php?search=Will%20Smith)). [Stanley Milgram](https://en.wikipedia.org/w/index.php?search=Stanley%20Milgram)’s famous letter\-mailing [*small\-world*](https://en.wikipedia.org/w/index.php?search=small-world) experiment supposedly lent credence to the idea that all people are connected by relatively few “social hops” (Travers and Milgram 1969\). That is, we are all part of a social network with a relatively small diameter (as small as 6\).
Two popular incarnations of these ideas are the notion of an [*Erdős number*](https://en.wikipedia.org/w/index.php?search=Erdős%20number) and the [Kevin Bacon game](http://oracleofbacon.org/).
The question in each case is the same: How many hops are you away from [Paul Erdős](https://en.wikipedia.org/w/index.php?search=Paul%20Erdős) (or [Kevin Bacon](https://en.wikipedia.org/w/index.php?search=Kevin%20Bacon)? The former is popular among academics (mathematicians especially), where edges are defined by co\-authored papers.
Ben’s Erdős number is three, since he has co\-authored a paper with [Amotz Bar–Noy](https://en.wikipedia.org/w/index.php?search=Amotz%20Bar--Noy), who has co\-authored a paper with [Noga Alon](https://en.wikipedia.org/w/index.php?search=Noga%20Alon), who co\-authored a paper with Erdős.
According to [MathSciNet](http://www.ams.org/mathscinet/collaborationDistance.html), Nick’s Erdős number is four (through Ben given (B. S. Baumer et al. 2014\); but also through [Nan Laird](https://en.wikipedia.org/w/index.php?search=Nan%20Laird), [Fred Mosteller](https://en.wikipedia.org/w/index.php?search=Fred%20Mosteller), and [Persi Diaconis](https://en.wikipedia.org/w/index.php?search=Persi%20Diaconis)), and Danny’s is four (also through Ben).
These data reflect the reality that Ben’s research is “closer” to Erdős’s, since he has written about network science (Bogdanov et al. 2013; Ben S. Baumer et al. 2015; Basu et al. 2015; B. Baumer, Basu, and Bar\-Noy 2011\) and graph theory (Benjamin S. Baumer, Wei, and Bloom 2016\).
Similarly, the idea is that in principle, every actor in Hollywood can be connected to Kevin Bacon in at most six movie hops.
We’ll explore this idea using the [*Internet Movie Database*](https://en.wikipedia.org/w/index.php?search=Internet%20Movie%20Database) (IMDB.com 2013\).
### 20\.2\.1 Collecting Hollywood data
We will populate a [*Hollywood*](https://en.wikipedia.org/w/index.php?search=Hollywood) network using actors in the IMDb.
In this network, each actor is a node, and two actors share an edge if they have ever appeared in a movie together.
Our goal will be to determine the centrality of [Kevin Bacon](https://en.wikipedia.org/w/index.php?search=Kevin%20Bacon).
First, we want to determine the edges, since we can then look up the node information based on the edges that are present.
One caveat is that these networks can grow very rapidly (since the number of edges is \\(O(n^2\)\\), where \\(n\\) is the number of vertices).
For this example, we will be conservative by including popular (at least 150,000 ratings) feature films (i.e., `kind_id` equal to `1`) in 2012, and we will consider only the top\-20 credited roles in each film.
To retrieve the list of edges, we need to consider all possible cast assignment pairs.
To get this list, we start by forming all total pairs using the `CROSS JOIN` operation in MySQL (see Chapter [15](ch-sql.html#ch:sql)), which has no direct **dplyr** equivalent.
Thus, in this case we will have to actually write the SQL code.
Note that we filter this list down to the unique pairs, which we can do by only including pairs where `person_id` from the first table is strictly less than `person_id` from the second table.
The result of the following query will come into **R** as the object `E`.
```
library(mdsr)
db <- dbConnect_scidb("imdb")
```
```
SELECT a.person_id AS src, b.person_id AS dest,
a.movie_id,
a.nr_order * b.nr_order AS weight,
t.title, idx.info AS ratings
FROM imdb.cast_info AS a
CROSS JOIN imdb.cast_info AS b USING (movie_id)
LEFT JOIN imdb.title AS t ON a.movie_id = t.id
LEFT JOIN imdb.movie_info_idx AS idx ON idx.movie_id = a.movie_id
WHERE t.production_year = 2012 AND t.kind_id = 1
AND info_type_id = 100 AND idx.info > 150000
AND a.nr_order <= 20 AND b.nr_order <= 20
AND a.role_id IN (1,2) AND b.role_id IN (1,2)
AND a.person_id < b.person_id
GROUP BY src, dest, movie_id
```
```
E <- E %>%
mutate(ratings = parse_number(ratings))
glimpse(E)
```
```
Rows: 10,223
Columns: 6
$ src <int> 6388, 6388, 6388, 6388, 6388, 6388, 6388, 6388, 6388, 638…
$ dest <int> 405570, 445466, 688358, 722062, 830618, 838704, 960997, 1…
$ movie_id <int> 4590482, 4590482, 4590482, 4590482, 4590482, 4590482, 459…
$ weight <dbl> 52, 13, 143, 234, 260, 208, 156, 247, 104, 130, 26, 182, …
$ title <chr> "Zero Dark Thirty", "Zero Dark Thirty", "Zero Dark Thirty…
$ ratings <dbl> 231992, 231992, 231992, 231992, 231992, 231992, 231992, 2…
```
We have also computed a `weight` variable that we can use to weight the edges in the resulting graph. In this case, the `weight` is based on the order in which each actor appears in the credits. So a ranking of `1` means that the actor had top billing. These weights will be useful because a higher order in the credits usually means more screen time.
```
E %>%
summarize(
num_rows = n(),
num_titles = n_distinct(title)
)
```
```
num_rows num_titles
1 10223 55
```
Our query resulted in 10,223 connections between 55 films. We can see that [*Batman: The Dark Knight Rises*](https://en.wikipedia.org/w/index.php?search=Batman:%20The%20Dark%20Knight%20Rises) received the most user ratings on IMDb.
```
movies <- E %>%
group_by(movie_id) %>%
summarize(title = max(title), N = n(), numRatings = max(ratings)) %>%
arrange(desc(numRatings))
movies
```
```
# A tibble: 55 × 4
movie_id title N numRatings
<int> <chr> <int> <dbl>
1 4339115 The Dark Knight Rises 190 1258255
2 3519403 Django Unchained 190 1075891
3 4316706 The Avengers 190 1067306
4 4368646 The Hunger Games 190 750674
5 4366574 The Hobbit: An Unexpected Journey 190 681957
6 4224391 Silver Linings Playbook 190 577500
7 4231376 Skyfall 190 557652
8 4116220 Prometheus 190 504980
9 4300124 Ted 190 504893
10 3298411 Argo 190 493001
# … with 45 more rows
```
Next, we should gather some information about the vertices in this graph. We could have done this with another `JOIN` in the original query, but doing it now will be more efficient. (Why? See the cross\-join exercise.) In this case, all we need is each actor’s name and IMDb identifier.
```
actor_ids <- unique(c(E$src, E$dest))
V <- db %>%
tbl("name") %>%
filter(id %in% actor_ids) %>%
select(actor_id = id, actor_name = name) %>%
collect() %>%
arrange(actor_id) %>%
mutate(id = row_number())
glimpse(V)
```
```
Rows: 1,010
Columns: 3
$ actor_id <int> 6388, 6897, 8462, 16644, 17039, 18760, 28535, 33799, 42…
$ actor_name <chr> "Abkarian, Simon", "Aboutboul, Alon", "Abtahi, Omid", "…
$ id <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, …
```
### 20\.2\.2 Building the Hollywood network
To build a graph, we specify the edges, whether we want them to be directed, and add information about the vertices.
```
edges <- E %>%
left_join(select(V, from = id, actor_id), by = c("src" = "actor_id")) %>%
left_join(select(V, to = id, actor_id), by = c("dest" = "actor_id"))
g <- tbl_graph(nodes = V, directed = FALSE, edges = edges)
summary(g)
```
```
IGRAPH 62d5731 U-W- 1010 10223 --
+ attr: actor_id (v/n), actor_name (v/c), id (v/n), src (e/n), dest
| (e/n), movie_id (e/n), weight (e/n), title (e/c), ratings (e/n)
```
From the `summary()` command above, we can see that we have 1,010 actors and 10,223 edges between them. Note that we have associated metadata with each edge: namely, information about the movie that gave rise to the edge, and the aforementioned `weight` metric based on the order in the credits where each actor appeared. (The idea is that top\-billed stars are likely to appear on screen longer, and thus have more meaningful interactions with more of the cast.)
With our network intact, we can visualize it. There are *many* graphical parameters that you may wish to set, and the default choices are not always good. In this case we have 1,010 vertices, so we’ll make them small, and omit labels.
Figure [20\.4](ch-netsci.html#fig:hollywood) displays the results.
```
ggraph(g, 'drl') +
geom_edge_fan(width = 0.1) +
geom_node_point(color = "dodgerblue") +
theme_void()
```
Figure 20\.4: Visualization of Hollywood network for popular 2012 movies.
It is easy to see the clusters based on movies, but you can also see a few actors who have appeared in multiple movies, and how they tend to be more “central” to the network. If an actor has appeared in multiple movies, then it stands to reason that they will have more connections to other actors. This is captured by degree centrality.
```
g <- g %>%
mutate(degree = centrality_degree())
g %>%
as_tibble() %>%
arrange(desc(degree)) %>%
head()
```
```
# A tibble: 6 × 4
actor_id actor_name id degree
<int> <chr> <int> <dbl>
1 502126 Cranston, Bryan 113 57
2 891094 Gordon-Levitt, Joseph 228 57
3 975636 Hardy, Tom 257 57
4 1012171 Hemsworth, Chris 272 57
5 1713855 Neeson, Liam 466 57
6 1114312 Ivanek, Zeljko 304 56
```
There are a number of big name actors on this list who appeared in multiple movies in 2012\. Why does [Bryan Cranston](http://m.imdb.com/name/nm0186505/filmotype/actor?ref_=m_nmfm_1) have so many connections? The following quick function will retrieve the list of movies for a particular actor.
```
show_movies <- function(g, id) {
g %>%
activate(edges) %>%
as_tibble() %>%
filter(src == id | dest == id) %>%
group_by(movie_id) %>%
summarize(title = first(title), num_connections = n())
}
show_movies(g, 502126)
```
```
# A tibble: 3 × 3
movie_id title num_connections
<int> <chr> <int>
1 3298411 Argo 19
2 3780482 John Carter 19
3 4472483 Total Recall 19
```
Cranston appeared in all three of these movies.
Note, however, that the distribution of degrees is not terribly smooth (see Figure [20\.5](ch-netsci.html#fig:crans)). That is, the number of connections that each actor has appears to be limited to a few discrete possibilities. Can you think of why that might be?
```
ggplot(data = enframe(igraph::degree(g)), aes(x = value)) +
geom_density(size = 2)
```
Figure 20\.5: Distribution of degrees for actors in the Hollywood network of popular 2012 movies.
We use the **ggraph** package, which provides `geom_node_*()` and `geom_edge_*()` functions for plotting graphs directly with **ggplot2**. (Alternative plotting packages include **ggnetwork**, **geomnet**, and **GGally**.)
```
hollywood <- ggraph(g, layout = 'drl') +
geom_edge_fan(aes(alpha = weight), color = "lightgray") +
geom_node_point(aes(color = degree), alpha = 0.6) +
scale_edge_alpha_continuous(range = c(0, 1)) +
scale_color_viridis_c() +
theme_void()
```
We don’t want to show vertex labels for everyone, because that would result in an unreadable mess. However, it would be nice to see the highly central actors. Figure [20\.6](ch-netsci.html#fig:ggplot-network) shows our completed plot.
The transparency of the edges is scaled relative to the `weight` measure that we computed earlier, via `geom_edge_fan(aes(alpha = weight))` and `scale_edge_alpha_continuous()`. Since there are so many edges, keeping most of them faint prevents them from overwhelming the vertices.
```
hollywood +
geom_node_label(
aes(
filter = degree > 40,
label = str_replace_all(actor_name, ", ", ",\n")
),
repel = TRUE
)
```
Figure 20\.6: The Hollywood network for popular 2012 movies. Color is mapped to degree centrality.
### 20\.2\.3 Building a Kristen Stewart oracle
Degree centrality does not take into account the weights on the edges. If we want to emphasize the pathways through leading actors, we could consider [*betweenness centrality*](https://en.wikipedia.org/w/index.php?search=betweenness%20centrality).
```
g <- g %>%
mutate(btw = centrality_betweenness(weights = weight, normalized = TRUE))
g %>%
as_tibble() %>%
arrange(desc(btw)) %>%
head(10)
```
```
# A tibble: 10 × 5
actor_id actor_name id degree btw
<int> <chr> <int> <dbl> <dbl>
1 3945132 Stewart, Kristen 964 38 0.236
2 891094 Gordon-Levitt, Joseph 228 57 0.217
3 3346548 Kendrick, Anna 857 38 0.195
4 135422 Bale, Christian 27 19 0.179
5 76481 Ansari, Aziz 15 19 0.176
6 558059 Day-Lewis, Daniel 135 19 0.176
7 1318021 LaBeouf, Shia 363 19 0.156
8 2987679 Dean, Ester 787 38 0.152
9 2589137 Willis, Bruce 694 56 0.141
10 975636 Hardy, Tom 257 57 0.134
```
```
show_movies(g, 3945132)
```
```
# A tibble: 2 × 3
movie_id title num_connections
<int> <chr> <int>
1 4237818 Snow White and the Huntsman 19
2 4436842 The Twilight Saga: Breaking Dawn - Part 2 19
```
Notice that [Kristen Stewart](https://en.wikipedia.org/w/index.php?search=Kristen%20Stewart) has the highest betweenness centrality, while [Joseph Gordon\-Levitt](https://en.wikipedia.org/w/index.php?search=Joseph%20Gordon-Levitt) and [Tom Hardy](https://en.wikipedia.org/w/index.php?search=Tom%20Hardy) (and others) have the highest degree centrality.
Moreover, [Christian Bale](https://en.wikipedia.org/w/index.php?search=Christian%20Bale) has the third\-highest betweenness centrality despite appearing in only one movie. This is because he played the lead in *The Dark Knight Rises*, the movie responsible for the most edges. Most shortest paths through *The Dark Knight Rises* pass through [Christian Bale](https://en.wikipedia.org/w/index.php?search=Christian%20Bale).
If Kristen Stewart (`imdbId` `3945132`) is very central to this network, then perhaps instead of a Bacon number, we could consider a Stewart number.
[Charlize Theron](https://en.wikipedia.org/w/index.php?search=Charlize%20Theron)’s Stewart number is obviously 1, since they appeared in [*Snow White and the Huntsman*](https://en.wikipedia.org/w/index.php?search=Snow%20White%20and%20the%20Huntsman) together:
```
ks <- V %>%
filter(actor_name == "Stewart, Kristen")
ct <- V %>%
filter(actor_name == "Theron, Charlize")
g %>%
convert(to_shortest_path, from = ks$id, to = ct$id)
```
```
# A tbl_graph: 2 nodes and 1 edges
#
# An unrooted tree
#
# Node Data: 2 × 6 (active)
actor_id actor_name id degree btw .tidygraph_node_index
<int> <chr> <int> <dbl> <dbl> <int>
1 3945132 Stewart, Kristen 964 38 0.236 964
2 3990819 Theron, Charlize 974 38 0.0940 974
#
# Edge Data: 1 × 9
from to src dest movie_id weight title ratings .tidygraph_edge…
<int> <int> <int> <int> <int> <dbl> <chr> <dbl> <int>
1 1 2 3945132 3.99e6 4237818 3 Snow … 243824 10198
```
On the other hand, Kristen Stewart’s distance from [Joseph Gordon\-Levitt](https://en.wikipedia.org/w/index.php?search=Joseph%20Gordon-Levitt) is 5\. The interpretation here is that [Joseph Gordon\-Levitt](https://en.wikipedia.org/w/index.php?search=Joseph%20Gordon-Levitt) was in [*Batman: The Dark Knight Rises*](https://en.wikipedia.org/w/index.php?search=Batman:%20The%20Dark%20Knight%20Rises) with [Tom Hardy](https://en.wikipedia.org/w/index.php?search=Tom%20Hardy), who was in [*Lawless*](https://en.wikipedia.org/w/index.php?search=Lawless) with [Guy Pearce](https://en.wikipedia.org/w/index.php?search=Guy%20Pearce), who was in [*Prometheus*](https://en.wikipedia.org/w/index.php?search=Prometheus) with [Charlize Theron](https://en.wikipedia.org/w/index.php?search=Charlize%20Theron), who was in [*Snow White and the Huntsman*](https://en.wikipedia.org/w/index.php?search=Snow%20White%20and%20the%20Huntsman) with [Kristen Stewart](https://en.wikipedia.org/w/index.php?search=Kristen%20Stewart).
We show this graphically in Figure [20\.7](ch-netsci.html#fig:ks).
```
set.seed(47)
jgl <- V %>%
filter(actor_name == "Gordon-Levitt, Joseph")
h <- g %>%
convert(to_shortest_path, from = jgl$id, to = ks$id, weights = NA)
h %>%
ggraph('gem') +
geom_node_point() +
geom_node_label(aes(label = actor_name)) +
geom_edge_fan2(aes(label = title)) +
coord_cartesian(clip = "off") +
theme(plot.margin = margin(6, 36, 6, 36))
```
Figure 20\.7: Subgraph showing a shortest path through the Hollywood network from Joseph Gordon\-Levitt to Kristen Stewart.
Note, however, that these shortest paths are not unique. In fact, there are 9 shortest paths between [Kristen Stewart](https://en.wikipedia.org/w/index.php?search=Kristen%20Stewart) and [Joseph Gordon\-Levitt](https://en.wikipedia.org/w/index.php?search=Joseph%20Gordon-Levitt), each having a length of 5\.
```
igraph::all_shortest_paths(g, from = ks$id, to = jgl$id, weights = NA) %>%
pluck("res") %>%
length()
```
```
[1] 9
```
As we saw in Figure [20\.6](ch-netsci.html#fig:ggplot-network), our Hollywood network is not connected, and thus its diameter is infinite. However, the diameter of the largest connected component can be computed. This number (in this case, 10\) indicates how many hops separate the two most distant actors in the network.
```
igraph::diameter(g, weights = NA)
```
```
[1] 10
```
```
g %>%
mutate(eccentricity = node_eccentricity()) %>%
filter(actor_name == "Stewart, Kristen")
```
```
# A tbl_graph: 1 nodes and 0 edges
#
# An unrooted tree
#
# Node Data: 1 × 6 (active)
actor_id actor_name id degree btw eccentricity
<int> <chr> <int> <dbl> <dbl> <dbl>
1 3945132 Stewart, Kristen 964 38 0.236 6
#
# Edge Data: 0 × 8
# … with 8 variables: from <int>, to <int>, src <int>, dest <int>,
# movie_id <int>, weight <dbl>, title <chr>, ratings <dbl>
```
On the other hand, we note that [Kristen Stewart](https://en.wikipedia.org/w/index.php?search=Kristen%20Stewart)’s eccentricity is 6\. This means that there is no actor in the connected part of the network who is more than 6 hops away from [Kristen Stewart](https://en.wikipedia.org/w/index.php?search=Kristen%20Stewart).
Six degrees of separation indeed!
20\.3 PageRank
--------------
For many readers, it may be difficult (or impossible) to remember what search engines on the Web were like before Google. Search engines such as [*Altavista*](https://en.wikipedia.org/w/index.php?search=Altavista), [*Web Crawler*](https://en.wikipedia.org/w/index.php?search=Web%20Crawler), [*Excite*](https://en.wikipedia.org/w/index.php?search=Excite), and [*Yahoo!*](https://en.wikipedia.org/w/index.php?search=Yahoo!) vied for supremacy, but none returned results that were of comparable use to the ones we get today. Frequently, finding what you wanted required sifting through pages of slow\-to\-load links.
Consider the search problem. A user types in a [*search query*](https://en.wikipedia.org/w/index.php?search=search%20query) consisting of one or more words or terms. Then the search engine produces an ordered list of Web pages ranked by their relevance to that search query. How would you instruct a computer to determine the relevance of a Web page to a query?
This problem is not trivial. Most pre\-Google search engines worked by categorizing the words on every Web page, and then determining—based on the search query—which pages were most relevant to that query.
One problem with this approach is that it relies on each Web designer to have the words on their pages accurately reflect the content. Naturally, advertisers could easily manipulate search engines by loading their pages with popular search terms, written in the same color as the background (making them invisible to the user), regardless of whether those words were related to the actual content of the page. So naïve search engines might rank these pages more highly, even though they were not relevant to the user.
Google conquered search by thinking about the problem in a fundamentally different way and taking advantage of the network structure of the World Wide Web. The web is a directed graph, in which each webpage (URL) is a node, and edges reflect links from one webpage to another. In 1998, [Sergey Brin](https://en.wikipedia.org/w/index.php?search=Sergey%20Brin) and [Larry Page](https://en.wikipedia.org/w/index.php?search=Larry%20Page)—while computer science graduate students at [*Stanford University*](https://en.wikipedia.org/w/index.php?search=Stanford%20University)—developed a centrality measure called [*PageRank*](https://en.wikipedia.org/w/index.php?search=PageRank) that formed the basis of Google’s search algorithms (Page et al. 1999\). The algorithm led to search results that were so much better than those of its competitors that Google quickly swallowed the entire search market, and is now one of the world’s largest companies. The key insight was that one could use the directed links on the Web as a means of “voting” in a way that was much more difficult to exploit. That is, advertisers could only control links on their pages, but not links to their pages from other sites.
### 20\.3\.1 Eigenvector centrality
Computing PageRank is a rather simple exercise in linear algebra. It is an example of a [*Markov process*](https://en.wikipedia.org/w/index.php?search=Markov%20process). Suppose there are \\(n\\) webpages on the Web. Let \\(\\mathbf{v}\_0 \= \\mathbf{1}/n\\) be a vector that gives the initial probability that a randomly chosen Web surfer will be on any given page. In the absence of any information about this user, there is an equal probability that they might be on any page.
But for each of these \\(n\\) webpages, we also know to which pages it links. These are outgoing directed edges in the Web graph. We assume that a random surfer will follow each link with equal probability, so if there are \\(m\_i\\) outgoing links on the \\(i^{th}\\) webpage, then the probability that the random surfer goes from page \\(i\\) to page \\(j\\) is \\(p\_{ij} \= 1 / m\_i\\). Note that if the \\(i^{th}\\) page doesn’t link to the \\(j^{th}\\) page, then \\(p\_{ij} \= 0\\). In this manner, we can form the \\(n \\times n\\) [*transition matrix*](https://en.wikipedia.org/w/index.php?search=transition%20matrix) \\(\\mathbf{P}\\), wherein each entry describes the probability of moving from page \\(i\\) to page \\(j\\).
The product \\(\\mathbf{P} \\mathbf{v}\_0 \= \\mathbf{v}\_1\\) is a vector where \\(v\_{1i}\\) indicates the probability of being at the \\(i^{th}\\) webpage, after picking a webpage uniformly at random to start, and then clicking on one link chosen at random (with equal probability). The product \\(\\mathbf{P} \\mathbf{v}\_1 \= \\mathbf{P}^2 \\mathbf{v}\_0\\) gives us the probabilities after two clicks, etc.
It can be shown mathematically that if we continue to iterate this process, then we will arrive at a [*stationary distribution*](https://en.wikipedia.org/w/index.php?search=stationary%20distribution) \\(\\mathbf{v}^\*\\) that reflects the long\-term probability of being on any given page.
Each entry in that vector then represents the popularity of the corresponding webpage—\\(\\mathbf{v}^\*\\) is the PageRank of each webpage.[40](#fn40)
Because \\(\\mathbf{v}^\*\\) is an eigenvector of the transition matrix (since \\(\\mathbf{P} \\mathbf{v}^\* \= \\mathbf{v}^\*\\)), this measure of centrality is known as [*eigenvector centrality*](https://en.wikipedia.org/w/index.php?search=eigenvector%20centrality).
It was, in fact, developed earlier, but Page and Brin were the first to apply the idea to the World Wide Web for the purpose of search.
The success of PageRank has led to its being applied in a wide variety of contexts—virtually any problem in which a ranking measure on a network setting is feasible.
In addition to the college team sports example below, applications of PageRank include: scholarly citations (e.g., [eigenfactor.org](http://www.eigenfactor.org/)), doctoral programs, protein networks, and lexical semantics.
Another metaphor that may be helpful in understanding PageRank is that of movable mass. That is, suppose that there is a certain amount of mass in a network. The initial vector \\(\\mathbf{v}\_0\\) models a uniform distribution of that mass over the vertices. That is, \\(1/n\\) of the total mass is located on each vertex. The transition matrix \\(\\mathbf{P}\\) models that mass flowing through the network according to the weights on each edge. After a while, the mass will “settle” on the vertices, but in a non\-uniform distribution. The node that has accumulated the most mass has the largest PageRank.
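To make the iteration concrete, here is a minimal sketch (not from the book) of the Markov process on an invented three\-page web; the page names and link structure are made up purely for illustration.
```
# Invented toy web: A links to B and C, B links to C, and C links back to A.
# Columns give the outgoing-link probabilities for each page, so the matrix
# is column-stochastic (each column sums to 1).
P <- matrix(
  c(  0, 0, 1,   # probability of landing on A (coming from A, B, C)
    1/2, 0, 0,   # probability of landing on B
    1/2, 1, 0),  # probability of landing on C
  nrow = 3, byrow = TRUE,
  dimnames = list(c("A", "B", "C"), c("A", "B", "C"))
)
v <- rep(1/3, 3)   # v_0: the random surfer starts anywhere with equal probability
for (i in 1:50) {
  v <- P %*% v     # one click: redistribute the probability along the links
}
as.vector(v)       # converges to the stationary distribution (0.4, 0.2, 0.4)
```
The same answer can be read off (after rescaling so that the entries sum to 1) from the eigenvector of `P` associated with the eigenvalue 1.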
20\.4 Extended example: 1996 men’s college basketball
-----------------------------------------------------
Every March (with the exception of 2020, due to COVID\-19\), the attention of many sports fans and college students is captured by the NCAA basketball tournament, which pits 68 of the best teams against each other in a winner\-take\-all, single\-elimination tournament.
(A [*tournament*](https://en.wikipedia.org/w/index.php?search=tournament) is a special type of directed graph.)
However, each team in the tournament is seeded based on their performance during the regular season.
These seeds are important, since getting a higher seed can mean an easier path through the tournament.
Moreover, a tournament berth itself can mean millions of dollars in revenue to a school’s basketball program. Finally, predicting the outcome of the tournament has become something of a sport unto itself.
Kaggle has held a machine learning competition (see Chapters [11](ch-learningI.html#ch:learningI) and [12](ch-learningII.html#ch:learningII)) each spring to solicit these predictions. We will use their data to build a PageRank metric for team strength for the 1995–1996 regular season (the best season in the history of the [*University of Massachusetts*](https://en.wikipedia.org/w/index.php?search=University%20of%20Massachusetts)). To do this, we will build a directed graph whereby each team is a node, and each game creates a directed edge from the losing team to the winning team, which can be weighted based on the margin of victory. The PageRank in such a network is a measure of each team’s strength.
First, we need to download the game\-by\-game results and a lookup table that translates the team IDs into school names. Note that Kaggle requires a sign\-in, so the code below may not work for you without your using your Web browser to authenticate.
```
prefix <- "https://www.kaggle.com/c/march-machine-learning-mania-2015"
url_teams <- paste(prefix, "download/teams.csv", sep = "/")
url_games <- paste(
prefix,
"download/regular_season_compact_results.csv", sep = "/"
)
download.file(url_teams, destfile = "data/teams.csv")
download.file(url_games, destfile = "data/games.csv")
```
Next, we will load this data and `filter()` to select just the 1996 season.
```
library(mdsr)
teams <- readr::read_csv("data/teams.csv")
games <- readr::read_csv("data/games.csv") %>%
filter(season == 1996)
dim(games)
```
```
[1] 4122 8
```
Since the basketball schedule is very unbalanced (each team does not play the same number of games against each other team), margin of victory seems like an important factor in determining how much better one team is than another. We will use the ratio of the winning team’s score to the losing team’s score as an edge weight.
```
E <- games %>%
mutate(score_ratio = wscore/lscore) %>%
select(lteam, wteam, score_ratio)
V <- teams %>%
filter(team_id %in% unique(c(E$lteam, E$wteam)))
g <- igraph::graph_from_data_frame(E, directed = TRUE, vertices = V) %>%
as_tbl_graph() %>%
mutate(team_id = parse_number(name))
summary(g)
```
```
IGRAPH fa6df40 DN-- 305 4122 --
+ attr: name (v/c), team_name (v/c), team_id (v/n), score_ratio
| (e/n)
```
Our graph for this season contains 305 teams, who played a total of 4,122 games.
The **tidygraph** package provides the `centrality_pagerank()` function (a wrapper around **igraph**’s PageRank implementation), which will compute PageRank for us.
In the results below, we can see that by this measure, [*George Washington University*](https://en.wikipedia.org/w/index.php?search=George%20Washington%20University) was the highest\-ranked team, followed by UMass and [*Georgetown*](https://en.wikipedia.org/w/index.php?search=Georgetown).
In reality, the 7th\-ranked team, Kentucky, won the tournament by beating [*Syracuse*](https://en.wikipedia.org/w/index.php?search=Syracuse), the 16th\-ranked team.
All four semifinalists (Kentucky, Syracuse, UMass, and Mississippi State) ranked in the top\-16 according to PageRank, and all 8 quarterfinalists (also including Wake Forest, Kansas, Georgetown, and Cincinnati) were in the top\-20\.
Thus, assessing team strength by computing PageRank on regular season results would have made for a high\-quality prediction of the postseason results.
```
g <- g %>%
mutate(pagerank = centrality_pagerank())
g %>%
as_tibble() %>%
arrange(desc(pagerank)) %>%
head(20)
```
```
# A tibble: 20 × 4
name team_name team_id pagerank
<chr> <chr> <dbl> <dbl>
1 1203 G Washington 1203 0.0219
2 1269 Massachusetts 1269 0.0205
3 1207 Georgetown 1207 0.0164
4 1234 Iowa 1234 0.0143
5 1163 Connecticut 1163 0.0141
6 1437 Villanova 1437 0.0131
7 1246 Kentucky 1246 0.0127
8 1345 Purdue 1345 0.0115
9 1280 Mississippi St 1280 0.0114
10 1210 Georgia Tech 1210 0.0106
11 1112 Arizona 1112 0.0103
12 1448 Wake Forest 1448 0.0101
13 1242 Kansas 1242 0.00992
14 1336 Penn St 1336 0.00975
15 1185 E Michigan 1185 0.00971
16 1393 Syracuse 1393 0.00956
17 1266 Marquette 1266 0.00944
18 1314 North Carolina 1314 0.00942
19 1153 Cincinnati 1153 0.00940
20 1396 Temple 1396 0.00860
```
Note that these rankings are very different from simply assessing each team’s record and winning percentage, since PageRank implicitly considers *who beat whom*, and by how much. Using won–loss record alone, UMass was the best team, with a 31–1 record, while Kentucky was 4th at 28–2\.
```
wins <- E %>%
group_by(wteam) %>%
summarize(W = n())
losses <- E %>%
group_by(lteam) %>%
summarize(L = n())
g <- g %>%
left_join(wins, by = c("team_id" = "wteam")) %>%
left_join(losses, by = c("team_id" = "lteam")) %>%
mutate(win_pct = W / (W + L))
g %>%
as_tibble() %>%
arrange(desc(win_pct)) %>%
head(20)
```
```
# A tibble: 20 × 7
name team_name team_id pagerank W L win_pct
<chr> <chr> <dbl> <dbl> <int> <int> <dbl>
1 1269 Massachusetts 1269 0.0205 31 1 0.969
2 1403 Texas Tech 1403 0.00548 28 1 0.966
3 1163 Connecticut 1163 0.0141 30 2 0.938
4 1246 Kentucky 1246 0.0127 28 2 0.933
5 1180 Drexel 1180 0.00253 25 3 0.893
6 1453 WI Green Bay 1453 0.00438 24 3 0.889
7 1158 Col Charleston 1158 0.00190 22 3 0.88
8 1307 New Mexico 1307 0.00531 26 4 0.867
9 1153 Cincinnati 1153 0.00940 25 4 0.862
10 1242 Kansas 1242 0.00992 25 4 0.862
11 1172 Davidson 1172 0.00237 22 4 0.846
12 1345 Purdue 1345 0.0115 25 5 0.833
13 1448 Wake Forest 1448 0.0101 23 5 0.821
14 1185 E Michigan 1185 0.00971 22 5 0.815
15 1439 Virginia Tech 1439 0.00633 22 5 0.815
16 1437 Villanova 1437 0.0131 25 6 0.806
17 1112 Arizona 1112 0.0103 24 6 0.8
18 1428 Utah 1428 0.00613 23 6 0.793
19 1265 Marist 1265 0.00260 22 6 0.786
20 1114 Ark Little Rock 1114 0.00429 21 6 0.778
```
```
g %>%
as_tibble() %>%
summarize(pr_wpct_cor = cor(pagerank, win_pct, use = "complete.obs"))
```
```
# A tibble: 1 × 1
pr_wpct_cor
<dbl>
1 0.639
```
While PageRank and winning percentage are moderately correlated, PageRank recognizes that, for example, [*Texas Tech*](https://en.wikipedia.org/w/index.php?search=Texas%20Tech)’s 28\-1 record did not even make them a top\-20 team. Georgetown beat Texas Tech in the quarterfinals.
This particular graph has some interesting features. First, UMass beat Kentucky in their first game of the season.
```
E %>%
filter(wteam == 1269 & lteam == 1246)
```
```
# A tibble: 1 × 3
lteam wteam score_ratio
<dbl> <dbl> <dbl>
1 1246 1269 1.12
```
This helps to explain why UMass has a higher PageRank than Kentucky, since the only edge between them points to UMass. Sadly, Kentucky beat UMass in the semifinal round of the tournament—but that game is not present in this regular season data set.
Secondly, George Washington finished the regular season 21–7, yet they had the highest PageRank in the country. How could this have happened? In this case, George Washington was the only team to beat UMass in the regular season. Even though the two teams split their season series, this allows much of the mass that flows to UMass to flow to George Washington.
```
E %>%
filter(lteam %in% c(1203, 1269) & wteam %in% c(1203, 1269))
```
```
# A tibble: 2 × 3
lteam wteam score_ratio
<dbl> <dbl> <dbl>
1 1269 1203 1.13
2 1203 1269 1.14
```
The national network is large and complex, and therefore we will focus on the [*Atlantic 10 conference*](https://en.wikipedia.org/w/index.php?search=Atlantic%2010%20conference) to illustrate how PageRank is actually computed. The A\-10 consisted of 12 teams in 1996\.
```
A_10 <- c("Massachusetts", "Temple", "G Washington", "Rhode Island",
"St Bonaventure", "St Joseph's PA", "Virginia Tech", "Xavier",
"Dayton", "Duquesne", "La Salle", "Fordham")
```
We can form an [*induced subgraph*](https://en.wikipedia.org/w/index.php?search=induced%20subgraph) of our national network that consists solely of vertices and edges among the A\-10 teams.
We will also compute PageRank on this network.
```
a10 <- g %>%
filter(team_name %in% A_10) %>%
mutate(pagerank = centrality_pagerank())
summary(a10)
```
```
IGRAPH 46da9dd DN-- 12 107 --
+ attr: name (v/c), team_name (v/c), team_id (v/n), pagerank (v/n),
| W (v/n), L (v/n), win_pct (v/n), score_ratio (e/n)
```
We visualize this network in Figure [20\.8](ch-netsci.html#fig:a10), where the size of the vertices are proportional to each team’s PageRank, and the transparency of the edges is based on the ratio of the scores in that game. We note that George Washington and UMass are the largest nodes, and that all but one of the edges connected to UMass point towards it.
```
library(ggraph)
ggraph(a10, layout = 'kk') +
geom_edge_arc(
aes(alpha = score_ratio), color = "lightgray",
arrow = arrow(length = unit(0.2, "cm")),
end_cap = circle(1, 'cm'),
strength = 0.2
) +
geom_node_point(aes(size = pagerank, color = pagerank), alpha = 0.6) +
geom_node_label(aes(label = team_name), repel = TRUE) +
scale_alpha_continuous(range = c(0.4, 1)) +
scale_size_continuous(range = c(1, 10)) +
guides(
color = guide_legend("PageRank"),
size = guide_legend("PageRank")
) +
theme_void()
```
Figure 20\.8: Atlantic 10 Conference network, NCAA men’s basketball, 1995–1996\.
Now, let’s compute PageRank for this network using nothing but matrix multiplication. First, we need to get the transition matrix for the graph. This is the same thing as the [*adjacency matrix*](https://en.wikipedia.org/w/index.php?search=adjacency%20matrix), with the entries weighted by the score ratios.
```
P <- a10 %>%
igraph::as_adjacency_matrix(sparse = FALSE, attr = "score_ratio") %>%
t()
```
However, entries in \\(\\mathbf{P}\\) need to be probabilities, and thus they need to be normalized so that each column sums to 1\. We can achieve this using the `scale()` function.
```
P <- scale(P, center = FALSE, scale = colSums(P))
round(P, 2)
```
```
1173 1182 1200 1203 1247 1269 1348 1382 1386 1396 1439 1462
1173 0.00 0.09 0.00 0.00 0.09 0 0.14 0.11 0.00 0.00 0.00 0.16
1182 0.10 0.00 0.10 0.00 0.10 0 0.00 0.00 0.00 0.00 0.00 0.00
1200 0.11 0.00 0.00 0.00 0.09 0 0.00 0.00 0.00 0.00 0.00 0.00
1203 0.11 0.10 0.10 0.00 0.10 1 0.14 0.11 0.17 0.37 0.27 0.15
1247 0.00 0.09 0.00 0.25 0.00 0 0.00 0.12 0.00 0.00 0.00 0.00
1269 0.12 0.09 0.13 0.26 0.11 0 0.14 0.12 0.16 0.34 0.25 0.15
1348 0.00 0.11 0.11 0.00 0.12 0 0.00 0.12 0.16 0.29 0.21 0.18
1382 0.11 0.09 0.13 0.00 0.00 0 0.14 0.00 0.00 0.00 0.00 0.00
1386 0.11 0.10 0.10 0.24 0.09 0 0.14 0.11 0.00 0.00 0.00 0.00
1396 0.12 0.15 0.12 0.00 0.12 0 0.16 0.10 0.16 0.00 0.27 0.19
1439 0.12 0.09 0.12 0.25 0.09 0 0.14 0.11 0.17 0.00 0.00 0.17
1462 0.10 0.09 0.09 0.00 0.09 0 0.00 0.12 0.18 0.00 0.00 0.00
attr(,"scaled:scale")
1173 1182 1200 1203 1247 1269 1348 1382 1386 1396 1439 1462
10.95 11.64 11.91 4.39 11.64 1.13 7.66 10.56 6.54 3.65 5.11 6.95
```
One shortcoming of this construction is that our graph has multiple edges between pairs of vertices, since teams in the same conference usually play each other twice. Unfortunately, the **igraph** function `as_adjacency_matrix()` doesn’t handle this well:
> If the graph has multiple edges, the edge attribute of an arbitrarily chosen edge (for the multiple edges) is included.
Even though UMass beat Temple twice, only one of those edges (apparently chosen arbitrarily) will show up in the adjacency matrix. Note also that in the transition matrix shown above, the column labeled `1269` contains a one and eleven zeros. This indicates that the probability of UMass (`1269`) transitioning to George Washington (`1203`) is 1—since UMass’s only loss was to George Washington. This is not accurate, because the model doesn’t handle multiple edges in a sufficiently sophisticated way.
It is apparent from the matrix that
George Washington is nearly equally likely to move to La Salle, UMass, St. Joseph’s, and Virginia Tech—their four losses in the Atlantic 10\.
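One way to mitigate the multiple\-edge problem (a sketch of an alternative, not what the book does; the text below continues with the matrix as constructed) would be to collapse parallel edges before building the transition matrix, for example by averaging the score ratios across repeat matchups with `igraph::simplify()`.
```
# Collapse parallel edges first, averaging score_ratio across repeat matchups.
# Shown only as an aside: this would change the numbers computed below.
a10_simple <- a10 %>%
  igraph::simplify(
    remove.multiple = TRUE,
    remove.loops = FALSE,
    edge.attr.comb = list(score_ratio = "mean", "ignore")
  ) %>%
  tidygraph::as_tbl_graph()
P_simple <- a10_simple %>%
  igraph::as_adjacency_matrix(sparse = FALSE, attr = "score_ratio") %>%
  t()
P_simple <- scale(P_simple, center = FALSE, scale = colSums(P_simple))
```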
Next, we’ll define the initial vector with uniform probabilities—each team has an initial value of 1/12\.
```
num_vertices <- nrow(as_tibble(a10))
v0 <- rep(1, num_vertices) / num_vertices
v0
```
```
[1] 0.0833 0.0833 0.0833 0.0833 0.0833 0.0833 0.0833 0.0833 0.0833 0.0833
[11] 0.0833 0.0833
```
To compute PageRank, we iteratively multiply the initial vector \\(\\mathbf{v}\_0\\) by the transition matrix \\(\\mathbf{P}\\). We’ll do 20 multiplications with a loop:
```
v <- v0
for (i in 1:20) {
v <- P %*% v
}
as.vector(v)
```
```
[1] 0.02552 0.01049 0.00935 0.28427 0.07319 0.17688 0.08206 0.01612 0.09253
[10] 0.08199 0.11828 0.02930
```
We find that the fourth vertex—George Washington—has the highest PageRank. Compare these with the values returned by the built\-in `page_rank()` function from **igraph**:
```
igraph::page_rank(a10)$vector
```
```
1173 1182 1200 1203 1247 1269 1348 1382 1386 1396
0.0346 0.0204 0.0193 0.2467 0.0679 0.1854 0.0769 0.0259 0.0870 0.0894
1439 1462
0.1077 0.0390
```
Why are they different? One limitation of PageRank as we’ve defined it is that there could be [*sinks*](https://en.wikipedia.org/w/index.php?search=sinks), or [*spider traps*](https://en.wikipedia.org/w/index.php?search=spider%20traps), in a network. These are individual nodes, or even a collection of nodes, out of which there are no outgoing edges. (UMass is nearly—but not quite—a spider trap in this network.) In this event, if random surfers find themselves in a spider trap, there is no way out, and all of the probability will end up in those vertices.
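To see the spider\-trap problem in isolation, consider a tiny invented example (not from the book) in which page `S` has no links back to the rest of the web:
```
# Page A always links to S; from S there is no way out (modeled as a self-loop).
P_trap <- matrix(
  c(0, 0,   # probability of landing on A
    1, 1),  # probability of landing on S
  nrow = 2, byrow = TRUE, dimnames = list(c("A", "S"), c("A", "S"))
)
v <- c(0.5, 0.5)
for (i in 1:20) v <- P_trap %*% v
as.vector(v)   # all of the probability mass has drained into the trap: (0, 1)
```
A random restart gives the surfer a chance of escaping such a trap, which is exactly what the damping parameter described next controls.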
In practice, PageRank is modified by adding a [*random restart*](https://en.wikipedia.org/w/index.php?search=random%20restart).
This means that every so often, the random surfer simply picks up and starts over again.
The parameter that controls this in `page_rank()` is called `damping`, and it has a default value of 0\.85\.
If we set the `damping` argument to 1, corresponding to the matrix multiplication we did above, we get a little closer.
```
igraph::page_rank(a10, damping = 1)$vector
```
```
1173 1182 1200 1203 1247 1269 1348 1382 1386
0.02290 0.00778 0.00729 0.28605 0.07297 0.20357 0.07243 0.01166 0.09073
1396 1439 1462
0.08384 0.11395 0.02683
```
Alternatively, we can do the random walk again, but allow for random restarts.
```
w <- v0
d <- 0.85
for (i in 1:20) {
w <- d * P %*% w + (1 - d) * v0
}
as.vector(w)
```
```
[1] 0.0382 0.0231 0.0213 0.2453 0.0689 0.1601 0.0866 0.0302 0.0880 0.0872
[11] 0.1106 0.0407
```
```
igraph::page_rank(a10, damping = 0.85)$vector
```
```
1173 1182 1200 1203 1247 1269 1348 1382 1386 1396
0.0346 0.0204 0.0193 0.2467 0.0679 0.1854 0.0769 0.0259 0.0870 0.0894
1439 1462
0.1077 0.0390
```
Again, the results are not exactly the same due to the approximation of values in the adjacency matrix \\(\\mathbf{P}\\) mentioned earlier, but they are quite close.
20\.5 Further resources
-----------------------
There are two popular workhorse **R** packages for network analysis: **igraph** and **sna**. Both have large user bases and are actively developed. **igraph** also has bindings for Python and C; see Chapter [21](ch-big.html#ch:big).
For more sophisticated graph visualization software, see [*Gephi*](https://en.wikipedia.org/w/index.php?search=Gephi).
In addition to **igraph**, the **ggnetwork**, **sna**, and **network** **R** packages are useful for working with graph objects.
[Albert\-László Barabási](https://en.wikipedia.org/w/index.php?search=Albert-László%20Barabási)’s book *Linked* is a popular introduction to network science (Barabási and Frangos 2014\). For a broader undergraduate textbook, see Easley and Kleinberg (2010\).
20\.6 Exercises
---------------
**Problem 1 (Medium)**: The following problem considers the U.S. airport network as a graph.
1. What information do you need to compute the PageRank of the U.S. airport network? Write an SQL query to retrieve this information for 2012\.
(Hint: use the `dbConnect_scidb` function to connect to the `airlines` database.)
2. Use the data you pulled from SQL and build the network as a *weighted* `tidygraph` object, where the weights are proportional to the frequency of flights between each pair of airports.
3. Compute the PageRank of each airport in your network. What are the top\-10 “most central” airports? Where does Oakland International Airport `OAK` rank?
4. Update the vertex attributes of your network with the geographic coordinates of each airport (available in the `airports` table).
5. Use `ggraph` to draw the airport network. Make the thickness or transparency of each edge proportional to its weight.
6. Overlay your airport network on a U.S. map (see the spatial data chapter).
7. Project the map and the airport network using the Lambert Conformal Conic projection.
8. Crop the map you created to zoom in on your local airport.
**Problem 2 (Hard)**: Let’s reconsider the Internet Movie Database (IMDb) example.
1. In the `CROSS JOIN` query in the movies example, how could we have modified the SQL query to include the actor’s and actresses’ names in the original query? Why would this have been less efficient from a computational and data storage point of view?
2. Expand the Hollywood network by going further back in time. If you go back to 2000, which actor/actress has the highest degree centrality? Betweenness centrality? Eigenvector centrality?
**Problem 3 (Hard)**: Use the `dbConnect_scidb` function to connect to the `airlines` database using the data from 2013 to answer the following problem. For a while, [Edward Snowden](https://en.wikipedia.org/wiki/Edward_Snowden) was trapped in a Moscow airport. Suppose that you were trapped not in *one* airport, but in *all* airports. If you were forced to randomly fly around the United States, where would you be most likely to end up?
20\.7 Supplementary exercises
-----------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-networks.html\#networks\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-networks.html#networks-online-exercises)
---
| Data Science |
mdsr-book.github.io | https://mdsr-book.github.io/mdsr2e/ch-big.html |
Chapter 21 Epilogue: Towards “big data”
=======================================
The terms [*data science*](https://en.wikipedia.org/w/index.php?search=data%20science) and [*big data*](https://en.wikipedia.org/w/index.php?search=big%20data) are often used
interchangeably, but this is not correct.
Technically, “big data” is a part of data science: the part that deals with data that are so large that they cannot be handled by an ordinary computer. This book provides what we hope is a broad—yet principled—introduction to data science, but it does not specifically prepare the reader to work with big data.
Rather, we see the concepts developed in this book as “precursors” to big data (Horton, Baumer, and Wickham 2015; Horton and Hardin 2015\).
In this epilogue, we explore notions of big data and point the reader towards technologies that scale for truly big data.
21\.1 Notions of big data
-------------------------
[*Big data*](https://en.wikipedia.org/w/index.php?search=Big%20data) is an exceptionally hot topic, but it is not so well\-defined. Wikipedia states:
> [Big data](http://en.wikipedia.org/wiki/Big_data) is a field that treats ways to analyze, systematically extract information from, or otherwise deal with data sets that are too large or complex to be dealt with by traditional data\-processing application software.
>
>
> Relational database management systems, desktop statistics and software packages used to visualize data often have difficulty handling big data. The work may require “massively parallel software running on tens, hundreds, or even thousands of servers.” What qualifies as being “big data” varies depending on the capabilities of the users and their tools, and expanding capabilities make big data a moving target. “For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration” (retrieved December 2020\).
Big data is often characterized by the three V’s: volume, velocity, and variety (Laney 2001\).
Under this definition, the qualities that make big data different are its *size*, how *quickly* it grows as it is collected, and how many different *formats* it may come in.
In big data, the size of tables may be too large to fit on an ordinary computer, the data and queries on it may be coming in too quickly to process, or the data may be distributed across many different systems.
[Randall Pruim](https://en.wikipedia.org/w/index.php?search=Randall%20Pruim) puts it more concisely: “Big data is when your workflow breaks.”
Both relative and absolute definitions of big data are meaningful.
The absolute definition may be easier to understand: We simply specify a data size and agree that any data that are at least that large are “big”—otherwise they are not.
The problem with this definition is that it is a moving target.
It might mean [*petabytes*](https://en.wikipedia.org/w/index.php?search=petabytes) (1,000 terabytes) today, but [*exabytes*](https://en.wikipedia.org/w/index.php?search=exabytes) (1,000 petabytes) a few years from now.
Regardless of the precise definition, it is increasingly clear that while many organizations like Google, Facebook, and Amazon are working with truly big data, most individuals—even data scientists like you and us—are not.
For us, the relative definition becomes more meaningful.
A big data problem occurs when the workflow that you have been using to solve problems becomes infeasible due to the expansion in the size of your data.
It is useful in this context to think about [*orders of magnitude*](https://en.wikipedia.org/w/index.php?search=orders%20of%20magnitude) of data.
The evolution of baseball data illustrates how “big data problems” have arisen as the volume and variety of the data has increased over time.
* **Individual game data**: [Henry Chadwick](https://en.wikipedia.org/w/index.php?search=Henry%20Chadwick) started collecting boxscores (a tabular summary of each game) in the early 1900s. These data (dozens or even hundreds of rows) can be stored on handwritten pieces of paper, or in a single spreadsheet. Each row might represent one *game*. Thus, a perfectly good workflow for working with data of this size is to store them on paper. A more sophisticated workflow would be to store them in a spreadsheet application.
* **Seasonal data**: By the 1970s, decades of baseball history were recorded in a seasonal format. Here, the data are aggregated at the *player\-team\-season* level. An example of this kind of data is the **Lahman** database we explored in Chapter [4](ch-dataI.html#ch:dataI), which has nearly 100,000 rows in the `Batting` table. Note that in this seasonal format, we know how many home runs each player hit for each team, but we don’t know anything about *when* they were hit (e.g., in what month or what inning). [*Excel*](https://en.wikipedia.org/w/index.php?search=Excel) is limited in the number of rows that a single spreadsheet can contain. The original limit of \\(2^{14} \= 16,384\\) rows was bumped up to \\(2^{16} \= 65,536\\) rows in 2003, and the current limit is \\(2^{20} \\approx 1\\) million rows. Up until 2003, simply opening the `Batting` table in Excel would have been impossible. This is a big data problem, because your Excel workflow has broken due to the size of your data. On the other hand, opening the `Batting` table in **R** requires far less memory, since **R** does not try to display all of the data.
* **Play\-by\-play data**: By the 1990s, [Retrosheet](http://www.retrosheet.org/) began collecting even more granular play\-by\-play data. Each row contains information about one *play*. This means that we know exactly when each player hit each home run—what date, what inning, off of which pitcher, which other runners were on base, and even which other players were in the field. As of this writing, nearly 100 seasons occupying more than 10 million rows are available. This creates a big data problem for **R**—you would have a hard time loading these data into **R** on a typical personal computer. However, SQL provides a scalable solution for data of this magnitude, even on a laptop. Still, you will experience significantly better performance if these data are stored in an SQL cluster with lots of memory.
* **Camera\-tracking data**: The [Statcast](http://m.mlb.com/statcast/leaderboard#hr-distance,p) data set contains \\((x,y,z)\\)\-coordinates for all fielders, baserunners, and the ball every \\(1/15^{th}\\) of a second. Thus, each row is a moment in time. These data indicate not just the outcome of each play, but exactly where each of the players on the field and the ball were as the play evolved. These data are several gigabytes per game, which translates into many terabytes per season. Thus, some sort of distributed server system would be required just to store these data. These data are “big” in the relative sense for any individual, but they are still orders of magnitude away from being “big” in the absolute sense.
What does absolutely big data look like? For an individual user, you might consider the [13\.5\-terabyte data set of 110 billion events released in 2015 by Yahoo!](http://yahoo.tumblr.com/post/137282204964/yahoo-releases-the-largest-ever-machine-learning) for use in machine learning research.
The grand\-daddy of data may be the [*Large Hadron Collider*](https://en.wikipedia.org/w/index.php?search=Large%20Hadron%20Collider) in Europe, which is generating 25 petabytes of data per year (CERN 2008\).
However, only 0\.001% of all of the data that is being generated by the supercollider is being saved, because to collect it all would mean capturing nearly 500 exabytes *per day*.
This is clearly big data.
21\.2 Tools for bigger data
---------------------------
By now, you have a working knowledge of both **R** and SQL.
These are battle\-tested, valuable tools for working with small and medium data. Both have large user bases, ample deployment, and continue to be very actively developed. Some of that development seeks to make **R** and SQL more useful for truly large data.
While we don’t have the space to cover these extensions in detail, in this section we outline some of the most important concepts for working with big data, and highlight some of the tools you are likely to see on this frontier of your working knowledge.
### 21\.2\.1 Data and memory structures for big data
**data.table**, an alternative to **dplyr**, is a popular **R** package for fast SQL\-style operations on very large data tables (many gigabytes in memory).
It is [not clear](http://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly) that **data.table** is faster or more efficient than **dplyr**, and it uses a different—but not necessarily better—syntax.
Moreover, **dplyr** can use **data.table** itself as a backend.
We have chosen to highlight **dplyr** in this book primarily because it fits so well syntactically with a number of other **R** packages we use herein (i.e., the **tidyverse**).
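As an illustration (not a benchmark), a grouped summary in **data.table**'s `[i, j, by]` syntax might look like the sketch below; the **babynames** data are used here only because they are a familiar, moderately sized table.
```
library(data.table)

# Convert a familiar data frame to a data.table and do a grouped summary.
bn <- as.data.table(babynames::babynames)
bn[name == "Benjamin",                      # i: filter rows
   .(N = .N, total_births = sum(n)),        # j: compute summaries
   by = year][order(-total_births)][1:6]    # by: group; then sort and take 6
```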
For some problems—more common in machine learning—the number of explanatory variables \\(p\\) can be large (not necessarily relative to the number of observations \\(n\\)).
In such cases, the algorithm to compute a least\-squares regression model may eat up quite a bit of memory.
The **biglm** package seeks to improve on this by providing a memory\-efficient `biglm()` function that can be used in place of `lm()`.
In particular, **biglm** can fit generalized linear models with data frames that are larger than memory.
The package accomplishes this by splitting the computations into more manageable chunks—updating the results iteratively as each chunk is processed.
In this manner, you can write a drop\-in replacement for your existing code that will scale to data sets larger than the memory on your computer.
```
library(tidyverse)
library(mdsr)
library(biglm)
library(bench)
n <- 20000
p <- 500
d <- rnorm(n * (p + 1)) %>%
matrix(ncol = (p + 1)) %>%
as_tibble(.name_repair = "unique")
expl_vars <- names(d) %>%
tail(-1) %>%
paste(collapse = " + ")
my_formula <- as.formula(paste("...1 ~ ", expl_vars))
system_time(lm(my_formula, data = d))
```
```
process real
5.7s 5.7s
```
```
system_time(biglm(my_formula, data = d))
```
```
process real
4.14s 4.14s
```
Here we see that the computation completed more quickly (and can be updated to incorporate more observations, unlike `lm()`).
The **biglm** package is also useful in settings where there are many observations but not so many predictors.
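The chunked\-updating workflow looks roughly like the following sketch, based on the package's documented `update()` method; the chunk boundaries here are arbitrary and purely illustrative.
```
library(biglm)

# Sketch: fit on a first chunk, then fold in further chunks one at a time,
# so only one chunk needs to be in memory at any moment.
chunk1 <- d[1:10000, ]
chunk2 <- d[10001:20000, ]
fit <- biglm(my_formula, data = chunk1)
fit <- update(fit, chunk2)   # incorporate more observations
coef(fit)[1:3]
```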
A related package is **bigmemory**.
This package extends **R**’s capabilities to map memory to disk, allowing you to work with larger matrices.
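A minimal sketch of the **bigmemory** interface is shown below; the backing file names are placeholders, and the matrix dimensions are arbitrary.
```
library(bigmemory)

# Create a file-backed matrix: the data live on disk, not in RAM,
# but the object can be indexed much like an ordinary matrix.
x <- filebacked.big.matrix(
  nrow = 1e6, ncol = 3, type = "double", init = 0,
  backingfile = "x.bin", descriptorfile = "x.desc"
)
x[1, ] <- c(1, 2, 3)
x[1, ]
```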
### 21\.2\.2 Compilation
Python, SQL, and **R** are [*interpreted programming language*](https://en.wikipedia.org/w/index.php?search=interpreted%20programming%20language)s.
This means that the code that you write in these languages gets translated into machine language on\-the\-fly as you execute it.
The process is not altogether different than when you hear someone speaking in Russian on the news, and then you hear a halting English translation with a one\- or two\-second delay.
Most of the time, the translation happens so fast that you don’t even notice.
Imagine that instead of translating the Russian speaker’s words on\-the\-fly, the translator took dictation, wrote down a thoughtful translation, and then re\-recorded the segment in English.
You would be able to process the English\-speaking segment faster—because you are fluent in English.
At the same time, the translation would probably be better, since more time and care went into it, and you would likely pick up subtle nuances that were lost in the on\-the\-fly translation. Moreover, once the English segment is recorded, it can be watched at any time without incurring the cost of translation again.
This alternative paradigm involves a one\-time translation of the code called [*compilation*](https://en.wikipedia.org/w/index.php?search=compilation). **R** code is not compiled (it is interpreted), but *C\+\+* code is.
The result of compilation is a binary program that can be executed by the CPU directly.
This is why, for example, you can’t write a desktop application in **R**, and executables written in *C\+\+* will be much faster than scripts written in **R** or Python.
(To continue this line of reasoning, binaries written in assembly language can be faster than those written in *C\+\+*, and binaries written in machine language can be faster than those written in assembly.)
If *C\+\+* is so much faster than **R**, then why write code in **R**?
Here again, it is a trade\-off.
The code written in *C\+\+* may be faster, but when your programming time is taken into account you can often accomplish your task much faster by writing in **R**.
This is because **R** provides extensive libraries that are designed to reduce the amount of code that you have to write. **R** is also interactive, so that you can keep a session alive and continue to write new code as you run the old code.
This is fundamentally different from *C\+\+* development, where you have to re\-compile every time you change a single line of code.
The convenience of **R** programming comes at the expense of speed.
However, there is a compromise.
**Rcpp** allows you to move certain pieces of your **R** code to *C\+\+*.
The basic idea is that **Rcpp** provides *C\+\+* data structures that correspond to **R** data structures (e.g., a `data.frame` data structure written in *C\+\+*).
It is thus possible to write small *C\+\+* functions that operate on **R** objects, compile them, and call them from **R**, with minimal additional effort on the part of the **R** programmer.
The **dplyr** package makes extensive use of this functionality to improve performance.
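As a small illustration of what this looks like in practice, here is a sketch using `Rcpp::cppFunction()`, which compiles a snippet of *C\+\+* on the fly and exposes it as an **R** function; the function shown is ours, purely for illustration.
```
library(Rcpp)

# Compile a C++ function that sums a numeric vector, then call it from R.
cppFunction("
  double sum_cpp(NumericVector x) {
    double total = 0;
    for (int i = 0; i < x.size(); i++) {
      total += x[i];
    }
    return total;
  }
")
sum_cpp(rnorm(10))
```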
### 21\.2\.3 Parallel and distributed computing
#### 21\.2\.3\.1 Embarrassingly parallel computing
How do you increase a program’s capacity to work with larger data?
The most obvious way is to add more memory (i.e., [*RAM*](https://en.wikipedia.org/w/index.php?search=RAM)) to your computer.
This enables the program to read more data at once, enabling greater functionality without any additional programming.
But what if the bottleneck is not the memory, but the processor ([*CPU*](https://en.wikipedia.org/w/index.php?search=CPU))?
A processor can only do one thing at a time.
So if you have a computation that takes \\(t\\) units of time, and you have to repeat that computation for \\(k\\) different data sets, then you can expect it to take about \\(k \\cdot t\\) units of time to complete.
For example, suppose we generate 20 sets of 1 million \\((x,y)\\) random pairs and want to fit a regression model to each set.
```
n <- 1e6
k <- 20
d <- tibble(y = rnorm(n*k), x = rnorm(n*k), set = rep(1:k, each = n))
fit_lm <- function(data, set_id) {
data %>%
filter(set == set_id) %>%
lm(y ~ x, data = .)
}
```
However long it takes to do it for the first set, it should take about 20 times as long to do it for all 20 sets.
This is as expected, since the computation procedure was to fit the regression model for the first set, then fit it for the second set, and so on.
```
system_time(map(1:1, fit_lm, data = d))
```
```
process real
657ms 657ms
```
```
system_time(map(1:k, fit_lm, data = d))
```
```
process real
8.86s 8.86s
```
However, in this particular case, the data in each of the 20 sets has nothing to do with the data in any of the other sets.
This is an example of an [*embarrassingly parallel*](https://en.wikipedia.org/w/index.php?search=embarrassingly%20parallel) problem.
These data are ripe candidates for a [*parallelized*](https://en.wikipedia.org/w/index.php?search=parallelized) computation.
If we had 20 processors, we could fit one regression model on each CPU—all at the same time—and get our final result in about the same time as it takes to fit the model to *one* set of data.
This would be a tremendous improvement in speed.
Unfortunately, we don’t have 20 CPUs. Nevertheless, most modern computers have multiple cores.
```
library(parallel)
my_cores <- detectCores()
my_cores
```
```
[1] 4
```
The **parallel** package provides functionality for parallel computation in **R**.
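For example, a minimal sketch with `mclapply()` from **parallel** is shown below; it relies on forking, which is not available on Windows (where `mc.cores` must be 1, so the computation effectively runs sequentially).
```
library(parallel)

# Fit the 20 regression models from above, one per forked process.
fits <- mclapply(1:k, function(i) fit_lm(d, i), mc.cores = my_cores)
length(fits)
```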
The **furrr** package extends the **future** package to allow us to express embarrassingly parallel computations in our familiar **purrr** syntax (Vaughan and Dancho 2021\).
Specifically, it provides a function `future_map()` that works just like `map()` (see Chapter [7](ch-iteration.html#ch:iteration)), except that it spreads the computations over multiple cores.
The theoretical speed\-up is a function of `my_cores`, but in practice this may be less for a variety of reasons (most notably, the overhead associated with combining the parallel results).
The `plan()` function sets up a parallel computing environment. In this case, we are using the `multiprocess` mode, which will split computations across asynchronous separate **R** sessions. The `workers` argument to `plan()` controls the number of cores being used for parallel computation. Next, we fit the 20 regression models using the `future_map()` function instead of `map()`. Once completed, we set the computation mode back to `sequential` for the remainder of this chapter.
```
library(furrr)
plan(multiprocess, workers = my_cores)
system_time(
future_map(1:k, fit_lm, data = d)
)
```
```
process real
9.98s 11.56s
```
```
plan(sequential)
```
In this case, the overhead associated with combining the results was larger than the savings from parallelizing the computation. But this will not always be the case.
#### 21\.2\.3\.2 GPU computing and CUDA
Another fruitful avenue to speed up computations is through use of a graphical processing unit (GPU).
These devices feature a highly parallel structure that can lead to significant performance gains.
[*CUDA*](https://en.wikipedia.org/w/index.php?search=CUDA) is a parallel computing platform and application programming interface created by [*NVIDIA*](https://en.wikipedia.org/w/index.php?search=NVIDIA) (one of the largest manufacturers of GPUs).
The **OpenCL** package provides bindings for **R** to the open\-source, general\-purpose OpenCL programming language for GPU computing.
#### 21\.2\.3\.3 MapReduce
[*MapReduce*](https://en.wikipedia.org/w/index.php?search=MapReduce) is a programming paradigm for parallel computing. To solve a task using a MapReduce framework, two functions must be written:
1. `Map(key_0, value_0)`: The `Map()` function reads in the original data (which is stored in key\-value pairs), and splits it up into smaller subtasks. It returns a `list` of key\-value pairs \\((key\_1, value\_1)\\), where the keys and values are not necessarily of the same type as the original ones.
2. `Reduce(key_1, list(value_1))`: The MapReduce implementation has a method for aggregating the key\-value pairs returned by the `Map()` function by their keys (i.e., `key_1`). Thus, you only have to write the `Reduce()` function, which takes as input a particular `key_1`, and a list of all the `value_1`’s that correspond to `key_1`. The `Reduce()` function then performs some operation on that list, and returns a list of values.
MapReduce is efficient and effective because the `Map()` step can be highly parallelized.
Moreover, MapReduce is also fault tolerant, because if any individual `Map()` job fails, the controller can simply start another one.
The `Reduce()` step often provides functionality similar to a `GROUP BY` operation in SQL.
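Before looking at the full example below, here is a minimal in\-memory sketch of the pattern in **R** (the objects and helper names are ours, not part of any MapReduce library): the map step emits a `(word, 1)` pair for every word, and the reduce step sums the pairs by key, much like a `GROUP BY`.
```
library(tidyverse)

docs <- c("big data is big", "small data is still data")

# Map: each document becomes a set of (word, 1) key-value pairs.
mapped <- map(docs, ~ tibble(word = str_split(.x, " ")[[1]], count = 1))

# Reduce: collect the pairs by key and sum the values.
bind_rows(mapped) %>%
  group_by(word) %>%
  summarize(count = sum(count), .groups = "drop") %>%
  arrange(desc(count))
```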
##### Example
The canonical MapReduce example is to tabulate the frequency of each word in a large number of text documents (i.e., a [*corpus*](https://en.wikipedia.org/w/index.php?search=corpus) (see Chapter [19](ch-text.html#ch:text))).
In what follows, we show an implementation written in Python by [Bill Howe](https://en.wikipedia.org/w/index.php?search=Bill%20Howe) of the [*University of Washington*](https://en.wikipedia.org/w/index.php?search=University%20of%20Washington) (Howe 2014\).
Note that at the beginning, this bit of code calls [an external `MapReduce` library](https://github.com/uwescience/datasci_course_materials/blob/master/assignment3/MapReduce.py) that actually implements MapReduce.
The user only needs to write the two functions shown in this block of code—not the MapReduce library itself.
```
import MapReduce
import sys
mr = MapReduce.MapReduce()
def mapper(record):
key = record[0]
value = record[1]
words = value.split()
for w in words:
mr.emit_intermediate(w, 1)
def reducer(key, list_of_values):
total = 0
for v in list_of_values:
total += v
mr.emit((key, total))
if __name__ == '__main__':
inputdata = open(sys.argv[1])
mr.execute(inputdata, mapper, reducer)
```
We will use this MapReduce program to compile a word count for the [issues raised on GitHub for the **ggplot2** package](https://github.com/hadley/ggplot2/issues).
These are stored in a [*JSON*](https://en.wikipedia.org/w/index.php?search=JSON) file (see Chapter [6](ch-dataII.html#ch:dataII)) as a single JSON array.
Since we want to illustrate how MapReduce can parallelize over many files, we will convert this single array into a JSON object for each issue.
This will mimic the typical use case.
The **jsonlite** package provides functionality for converting between JSON objects and native **R** data structures.
```
library(jsonlite)
url <- "https://api.github.com/repos/tidyverse/ggplot2/issues"
gg_issues <- url %>%
fromJSON() %>%
select(url, body) %>%
group_split(url) %>%
map_chr(~toJSON(as.character(.x))) %>%
write(file = "code/map-reduce/issues.json")
```
For example, the first issue is displayed below.
Note that it consists of two comma\-separated character strings within brackets.
We can think of this as having the format: `[key, value]`.
```
readLines("code/map-reduce/issues.json") %>%
head(1) %>%
str_wrap(width = 70) %>%
cat()
```
```
["https://api.github.com/repos/tidyverse/ggplot2/issues/4019","When
setting the limits of `scale_fill_steps()`, the fill brackets in
the legend becomes unevenly spaced. It's not clear why or how this
happens.\r\n\r\n``` r\r\nlibrary(tidyverse)\r\n\r\ndf <- tibble(\r\n
crossing(\r\n tibble(sample = paste(\"sample\", 1:4)),\r\n tibble(pos
= 1:4)\r\n ),\r\n val = runif(16)\r\n)\r\n\r\nggplot(df, aes(x =
pos, y = sample)) +\r\n geom_line() +\r\n geom_point(aes(fill =
val), pch = 21, size = 7) +\r\n scale_fill_steps(low = \"white\",
high = \"black\")\r\n```\r\n\r\n
\r\n\r\n``` r\r\n\r\nggplot(df, aes(x = pos, y = sample)) +\r\n
geom_line() +\r\n geom_point(aes(fill = val), pch = 21, size = 7) +
\r\n scale_fill_steps(low = \"white\", high = \"black\", limits
= c(0, 1))\r\n```\r\n\r\n
\r\n\r\n<sup>Created on 2020-05-22 by the [reprex package](https://
reprex.tidyverse.org) (v0.3.0)<\/sup>"]
```
In the Python code written above (which is stored in the file `wordcount.py`), the `mapper()` function takes a `record` argument (i.e., one line of the `issues.json` file), and examines its first two elements—the `key` becomes the first argument (in this case, the URL of the GitHub issue) and the `value` becomes the second argument (the text of the issue).
After splitting the `value` on each space, the `mapper()` function emits a \\((key, value)\\) pair for each word.
Thus, the first issue shown above would generate the pairs: `(When, 1)`, `(setting, 1)`, `(the, 1)`, etc.
The `MapReduce` library provides a mechanism for efficiently collecting all of the resulting pairs based on the `key`, which in this case corresponds to a single word.
The `reducer()` function simply adds up all of the values associated with each key.
In this case, these values are all `1`s, so the resulting pair is a word and the number of times it appears (e.g., `(the, 158)`, etc.).
Thanks to the **reticulate** package, we can run this Python script from within **R** and bring the results into **R** for further analysis.
We see that the most common words in this corpus are short articles and prepositions.
```
library(mdsr)
cmd <- "python code/map-reduce/wordcount.py code/map-reduce/issues.json"
res <- system(cmd, intern = TRUE)
freq_df <- res %>%
purrr::map(jsonlite::fromJSON) %>%
purrr::map(set_names, c("word", "count")) %>%
bind_rows() %>%
mutate(count = parse_number(count))
glimpse(freq_df)
```
```
Rows: 1,605
Columns: 2
$ word <chr> "geom_point(aes(fill", "aliased", "desirable", "ggplot(ct)+g…
$ count <dbl> 2, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 3, 1, 2, 2, 1, 5, …
```
```
freq_df %>%
filter(str_detect(pattern = "[a-z]", word)) %>%
arrange(desc(count)) %>%
head(10)
```
```
# A tibble: 10 × 2
word count
<chr> <dbl>
1 the 147
2 to 87
3 a 63
4 of 45
5 is 43
6 in 33
7 and 32
8 that 31
9 it 28
10 be 28
```
MapReduce is popular and offers some advantages over SQL for some problems.
When MapReduce first became popular and Google used it to redo its webpage ranking system (see Chapter [20](ch-netsci.html#ch:netsci)), there was great excitement about a coming “paradigm shift” in parallel and distributed computing.
Nevertheless, advocates of SQL have challenged the notion that it has been completely superseded by MapReduce (Stonebraker et al. 2010\).
#### 21\.2\.3\.4 Hadoop
As noted previously, MapReduce requires a software implementation.
One popular such implementation is Hadoop MapReduce, which is one of the core components of [*Apache Hadoop*](https://en.wikipedia.org/w/index.php?search=Apache%20Hadoop).
Hadoop is a larger software ecosystem for storing and processing large data that includes a distributed file system, Pig, Hive, Spark, and other popular open\-source software tools.
While we won’t be able to go into great detail about these items, we will illustrate how to interface with Spark, which has a particularly tight integration with **RStudio**.
#### 21\.2\.3\.5 Spark
One nice feature of [*Apache Spark*](https://en.wikipedia.org/w/index.php?search=Apache%20Spark)—especially for our purposes—is that while it requires a distributed file system, it can implement a pseudo\-distributed file system on a single machine.
This makes it possible for you to experiment with Spark on your local machine even if you don’t have access to a cluster. For obvious reasons, you won’t actually see the performance boost that parallelism can bring, but you can try it out and debug your code.
Furthermore, the [**sparklyr** package](http://spark.rstudio.com/) makes it painless to install a local Spark cluster from within **R**, as well as connect to a local or remote cluster.
Once the **sparklyr** package is installed, we can use it to install a local Spark cluster.
```
library(sparklyr)
spark_install(version = "3.0") # only once!
```
Next, we make a connection to our local Spark instance from within **R**.
Of course, if we were connecting to a remote Spark cluster, we could modify the `master` argument to reflect that.
Spark requires [*Java*](https://en.wikipedia.org/w/index.php?search=Java), so you may have to install the [*Java Development Kit*](https://en.wikipedia.org/w/index.php?search=Java%20Development%20Kit) before using Spark.[41](#fn41)
```
# sudo apt-get install openjdk-8-jdk
sc <- spark_connect(master = "local", version = "3.0")
class(sc)
```
```
[1] "spark_connection" "spark_shell_connection"
[3] "DBIConnection"
```
Note that `sc` has class `DBIConnection`—this means that it can do many of the things that other **dplyr** connections can do.
For example, the `src_tbls()` function works just like it did on the MySQL connection objects we saw in Chapter [15](ch-sql.html#ch:sql).
```
src_tbls(sc)
```
```
character(0)
```
In this case, there are no tables present in this Spark cluster, but we can add them using the `copy_to()` command.
Here, we will load the `babynames` table from the **babynames** package.
```
babynames_tbl <- sc %>%
copy_to(babynames::babynames, "babynames")
src_tbls(sc)
```
```
[1] "babynames"
```
```
class(babynames_tbl)
```
```
[1] "tbl_spark" "tbl_sql" "tbl_lazy" "tbl"
```
The `babynames_tbl` object is a `tbl_spark`, but also a `tbl_sql`.
Again, this is analogous to what we saw in Chapter [15](ch-sql.html#ch:sql), where a `tbl_MySQLConnection` was also a `tbl_sql`.
```
babynames_tbl %>%
filter(name == "Benjamin") %>%
group_by(year) %>%
summarize(N = n(), total_births = sum(n)) %>%
arrange(desc(total_births)) %>%
head()
```
```
# Source: spark<?> [?? x 3]
# Ordered by: desc(total_births)
year N total_births
<dbl> <dbl> <dbl>
1 1989 2 15785
2 1988 2 15279
3 1987 2 14953
4 2000 2 14864
5 1990 2 14660
6 2016 2 14641
```
As we will see below with [*Google BigQuery*](https://en.wikipedia.org/w/index.php?search=Google%20BigQuery), even though Spark is a parallelized technology designed to supersede SQL, it is still useful to know SQL in order to use Spark.
Like BigQuery, **sparklyr** allows you to work with a Spark cluster using the familiar **dplyr** interface.
As you might suspect, because `babynames_tbl` is a `tbl_sql`, it implements SQL methods common in **DBI**.
Thus, we can also write SQL queries against our Spark cluster.
```
library(DBI)
dbGetQuery(sc, "SELECT year, sum(1) as N, sum(n) as total_births
FROM babynames WHERE name == 'Benjamin'
GROUP BY year
ORDER BY total_births desc
LIMIT 6")
```
```
year N total_births
1 1989 2 15785
2 1988 2 15279
3 1987 2 14953
4 2000 2 14864
5 1990 2 14660
6 2016 2 14641
```
Finally, because Spark includes not only a database infrastructure, but also a machine learning library, **sparklyr** allows you to fit many of the models we outlined in Chapter [11](ch-learningI.html#ch:learningI) and [12](ch-learningII.html#ch:learningII) within Spark.
This means that you can rely on Spark’s big data capabilities without having to bring all of your data into **R**’s memory.
As a motivating example, we fit a multiple regression model for the amount of rainfall at the MacLeish field station as a function of the temperature, pressure, and relative humidity.
```
library(macleish)
weather_tbl <- copy_to(sc, whately_2015)
weather_tbl %>%
ml_linear_regression(rainfall ~ temperature + pressure + rel_humidity) %>%
summary()
```
```
Deviance Residuals:
Min 1Q Median 3Q Max
-0.041290 -0.021761 -0.011632 -0.000576 15.968356
Coefficients:
(Intercept) temperature pressure rel_humidity
0.717754 0.000409 -0.000755 0.000438
R-Squared: 0.004824
Root Mean Squared Error: 0.1982
```
The most recent versions of **RStudio** include integrated support for management of Spark clusters.
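When you are finished with a local cluster, it is good practice to close the connection; a one\-liner, assuming the `sc` connection created above:
```
spark_disconnect(sc)
```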
### 21\.2\.4 Alternatives to SQL
Relational database management systems can be spread across multiple computers into what is called a [*cluster*](https://en.wikipedia.org/w/index.php?search=cluster).
In fact, it is widely acknowledged that one of the things that allowed Google to grow so fast was its use of the open\-source (zero cost) MySQL RDBMS running as a cluster across many identical low\-cost servers.
That is, rather than investing large amounts of money in big machines, they built a massive MySQL cluster over many small, cheap machines.
Both [MySQL](https://en.wikipedia.org/wiki/MySQL_Cluster) and [PostgreSQL](http://www.postgresql.org/docs/9.4/static/creating-cluster.html) provide functionality for extending a single installation to a cluster.
A cloud\-based computing service, such as Amazon Web Services, Google Cloud Platform, or DigitalOcean, offers a low\-cost alternative to building your own server farm (many of these companies offer free credits for student and instructor use).
#### 21\.2\.4\.1 BigQuery
[*BigQuery*](https://en.wikipedia.org/w/index.php?search=BigQuery) is a Web service offered by Google. Internally, the BigQuery service is supported by [*Dremel*](https://en.wikipedia.org/w/index.php?search=Dremel), the open\-source version of which is [*Apache Drill*](https://en.wikipedia.org/w/index.php?search=Apache%20Drill).
The **bigrquery** [package](https://github.com/rstats-db/bigrquery) for **R** provides access to BigQuery from within **R**.
To use the BigQuery service, you need to sign up for an account with Google, but you won’t be charged unless you exceed the free limit of 10,000 requests per day (the [BigQuery sandbox](https://cloud.google.com/bigquery/docs/sandbox) provides free access subject to certain limits).
If you want to use your own data, you have to upload it to Google Cloud Storage, but Google provides many data sets that you can use for free (e.g., COVID, Census, real\-estate transactions). Here we illustrate how to query the `shakespeare` data set—which is a list of all of the words that appear in Shakespeare’s plays—to find the most common words. Note that BigQuery understands a recognizable [dialect of SQL](https://cloud.google.com/bigquery/query-reference)—what makes BigQuery special is that it is built on top of Google’s massive computing architecture.
```
library(bigrquery)
project_id <- "my-google-id"
sql <- "
SELECT word
, count(distinct corpus) AS numPlays
, sum(word_count) AS N
FROM [publicdata:samples.shakespeare]
GROUP BY word
ORDER BY N desc
LIMIT 10
"
bq_project_query(project_id, sql)
```
```
4.9 megabytes processed
word numPlays N
1 the 42 25568
2 I 42 21028
3 and 42 19649
4 to 42 17361
5 of 42 16438
6 a 42 13409
7 you 42 12527
8 my 42 11291
9 in 42 10589
10 is 42 8735
```
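If you prefer the **dplyr** interface over raw SQL, **bigrquery** also provides a **DBI** driver. A sketch against the same public `shakespeare` table might look like the following; queries are billed to your own project via the `billing` argument.
```
library(DBI)
library(dplyr)

con <- dbConnect(
  bigrquery::bigquery(),
  project = "publicdata",
  dataset = "samples",
  billing = project_id   # your own project pays for the query
)

tbl(con, "shakespeare") %>%
  group_by(word) %>%
  summarize(numPlays = n_distinct(corpus), N = sum(word_count)) %>%
  arrange(desc(N)) %>%
  head(10)
```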
#### 21\.2\.4\.2 NoSQL
[*NoSQL*](https://en.wikipedia.org/w/index.php?search=NoSQL) refers not to a specific technology, but rather to a class of database architectures that are *not* based on the notion—so central to SQL (and `data.frame`s in **R**)—that a table consists of a rectangular array of rows and columns.
Rather than being built around tables, NoSQL databases may be built around columns, key\-value pairs, documents, or graphs.
Nevertheless, NoSQL databases may (or may not) include an SQL\-like query language for retrieving data.
One particularly successful NoSQL database is [*MongoDB*](https://en.wikipedia.org/w/index.php?search=MongoDB), which is based on a document structure.
In particular, MongoDB is often used to store JSON objects (see Chapter [6](ch-dataII.html#ch:dataII)), which are not necessarily tabular.
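As a brief illustration, the **mongolite** package gives **R** a client for MongoDB; the sketch below assumes a MongoDB server is running locally, and the collection and database names are arbitrary.
```
library(mongolite)

# Connect to a local MongoDB server and insert a non-tabular JSON document.
issues <- mongo(collection = "issues", db = "test", url = "mongodb://localhost")
issues$insert('{"url": "https://example.org/1", "labels": ["bug", "docs"]}')

# Query documents back using MongoDB's JSON-based query syntax.
issues$find('{"labels": "bug"}')
```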
21\.3 Alternatives to **R**
---------------------------
[Python](http://en.wikipedia.org/wiki/Python_(programming_language)) is a widely\-used general\-purpose, high\-level programming language.
You will find adherents for both **R** and Python, and while there are [ongoing](http://readwrite.com/2013/11/25/python-displacing-r-as-the-programming-language-for-data-science#awesm=~oopSq74KSJsK2w) [debates](https://github.com/hadley/r-python) about which is “better,” there is no consensus.
It is probably true that—for obvious reasons—computer scientists tend to favor Python, while statisticians tend to favor **R**.
We prefer the latter but will not make any claims about its being “better” than Python.
A well\-rounded data scientist should be competent in both environments.
Python is a modular environment (like **R**) and includes many libraries for working with data.
The most **R**\-like is `Pandas`, but other popular auxiliary libraries include `SciPy` for scientific computation, `NumPy` for large arrays, `matplotlib` for graphics, and `scikit-learn` for machine learning.
Other popular programming languages among data scientists include [*Scala*](https://en.wikipedia.org/w/index.php?search=Scala) and [*Julia*](https://en.wikipedia.org/wiki/Julia_(programming_language)).
Scala supports a [*functional programming*](https://en.wikipedia.org/w/index.php?search=functional%20programming) paradigm that has been promoted by H. Wickham (2019\) and other **R** users. Julia has a smaller user base but nonetheless has many strong adherents.
21\.4 Closing thoughts
----------------------
Advances in computing power and the internet have changed the field of statistics in ways that only the greatest visionaries could have imagined.
In the 20th century, the science of extracting meaning from data focused on developing inferential techniques that required sophisticated mathematics to squeeze the most information out of small data.
In the 21st century, the science of extracting meaning from data has focused on developing powerful computational tools that enable the processing of ever larger and more complex data.
While the essential analytical language of the last century—mathematics—is still of great importance, the analytical language of this century is undoubtedly programming.
The ability to write code is a necessary but not sufficient condition for becoming a data scientist.
We have focused on programming in **R**, a well\-worn interpreted language designed by statisticians for computing with data.
We believe that as an open\-source language with a broad following, **R** has significant staying power. Yet we recognize that all technological tools eventually become obsolete.
Nevertheless, by absorbing the lessons in this book, you will have transformed yourself into a competent, ethical, and versatile data scientist—one who possesses the essential capacities for working with a variety of data programmatically.
You can build and interpret models, query databases both local and remote, make informative and interactive maps, and wrangle and visualize data in various forms.
Internalizing these abilities will allow them to permeate your work in whatever field interests you, for as long as you continue to use data to inform.
21\.5 Further resources
-----------------------
Tools for working with big data analytics are developing more quickly than any of the other topics in this book.
A special issue of *The American Statistician* addressed the training of students in statistics and data science (Horton and Hardin 2015\).
The issue included articles on teaching statistics at “Google\-Scale” (Chamandy, Muralidharan, and Wager 2015\) and on the teaching of data science more generally (B. S. Baumer 2015; Hardin et al. 2015\).
The board of directors of the American Statistical Association endorsed the *Curriculum Guidelines for Undergraduate Programs in Data Science* written by the Park City Math Institute (PCMI) Undergraduate Faculty Group (De Veaux et al. 2017\).
These guidelines recommended fusing statistical thinking into the teaching of techniques to solve big data problems.
A comprehensive survey of **R** packages for parallel computation and high\-performance computing is available through the [CRAN task view on that subject](https://cran.r-project.org/web/views/HighPerformanceComputing.html).
The *Parallel R* book is another resource (McCallum and Weston 2011\).
More information about [Google BigQuery](https://cloud.google.com/bigquery) can be found at their website.
A [tutorial for SparkR](https://spark.apache.org/docs/1.6.0/sparkr.html) is available on Apache’s website.
21\.1 Notions of big data
-------------------------
[*Big data*](https://en.wikipedia.org/w/index.php?search=Big%20data) is an exceptionally hot topic, but it is not so well\-defined. Wikipedia states:
> [Big data](http://en.wikipedia.org/wiki/Big_data) is a field that treats ways to analyze, systematically extract information from, or otherwise deal with data sets that are too large or complex to be dealt with by traditional data\-processing application software.
>
>
> Relational database management systems, desktop statistics and software packages used to visualize data often have difficulty handling big data. The work may require “massively parallel software running on tens, hundreds, or even thousands of servers.” What qualifies as being “big data” varies depending on the capabilities of the users and their tools, and expanding capabilities make big data a moving target. “For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration” (retrieved December 2020\).
Big data is often characterized by the three V’s: volume, velocity, and variety (Laney 2001\).
Under this definition, the qualities that make big data different are its *size*, how *quickly* it grows as it is collected, and how many different *formats* it may come in.
In big data, the size of tables may be too large to fit on an ordinary computer, the data and queries on it may be coming in too quickly to process, or the data may be distributed across many different systems.
[Randall Pruim](https://en.wikipedia.org/w/index.php?search=Randall%20Pruim) puts it more concisely: “Big data is when your workflow breaks.”
Both relative and absolute definitions of big data are meaningful.
The absolute definition may be easier to understand: We simply specify a data size and agree that any data that are at least that large are “big”—otherwise they are not.
The problem with this definition is that it is a moving target.
It might mean [*petabytes*](https://en.wikipedia.org/w/index.php?search=petabytes) (1,000 terabytes) today, but [*exabytes*](https://en.wikipedia.org/w/index.php?search=exabytes) (1,000 petabytes) a few years from now.
Regardless of the precise definition, it is increasingly clear that while many organizations like Google, Facebook, and Amazon are working with truly big data, most individuals—even data scientists like you and us—are not.
For us, the relative definition becomes more meaningful.
A big data problem occurs when the workflow that you have been using to solve problems becomes infeasible due to the expansion in the size of your data.
It is useful in this context to think about [*orders of magnitude*](https://en.wikipedia.org/w/index.php?search=orders%20of%20magnitude) of data.
The evolution of baseball data illustrates how “big data problems” have arisen as the volume and variety of the data has increased over time.
* **Individual game data**: [Henry Chadwick](https://en.wikipedia.org/w/index.php?search=Henry%20Chadwick) started collecting boxscores (a tabular summary of each game) in the early 1900s. These data (dozens or even hundreds of rows) can be stored on handwritten pieces of paper, or in a single spreadsheet. Each row might represent one *game*. Thus, a perfectly good workflow for working with data of this size is to store them on paper. A more sophisticated workflow would be to store them in a spreadsheet application.
* **Seasonal data**: By the 1970s, decades of baseball history were recorded in a seasonal format. Here, the data are aggregated at the *player\-team\-season* level. An example of this kind of data is the **Lahman** database we explored in Chapter [4](ch-dataI.html#ch:dataI), which has nearly 100,000 rows in the `Batting` table. Note that in this seasonal format, we know how many home runs each player hit for each team, but we don’t know anything about *when* they were hit (e.g., in what month or what inning). [*Excel*](https://en.wikipedia.org/w/index.php?search=Excel) is limited in the number of rows that a single spreadsheet can contain. The original limit of \\(2^{14} \= 16,384\\) rows was bumped up to \\(2^{16} \= 65,536\\) rows in 2003, and the current limit is \\(2^{20} \\approx 1\\) million rows. Up until 2003, simply opening the `Batting` table in Excel would have been impossible. This is a big data problem, because your Excel workflow has broken due to the size of your data. On the other hand, opening the `Batting` table in **R** requires far less memory, since **R** does not try to display all of the data.
* **Play\-by\-play data**: By the 1990s, [Retrosheet](http://www.retrosheet.org/) began collecting even more granular play\-by\-play data. Each row contains information about one *play*. This means that we know exactly when each player hit each home run—what date, what inning, off of which pitcher, which other runners were on base, and even which other players were in the field. As of this writing nearly 100 seasons occupying more than 10 million rows are available. This creates a big data problem for **R**.–you would have a hard time loading these data into **R** on a typical personal computer. However, SQL provides a scalable solution for data of this magnitude, even on a laptop. Still, you will experience significantly better performance if these data are stored in an SQL cluster with lots of memory.
* **Camera\-tracking data**: The [Statcast](http://m.mlb.com/statcast/leaderboard#hr-distance,p) data set contains \\((x,y,z)\\)\-coordinates for all fielders, baserunners, and the ball every \\(1/15^{th}\\) of a second. Thus, each row is a moment in time. These data indicate not just the outcome of each play, but exactly where each of the players on the field and the ball were as the play evolved. These data are several gigabytes per game, which translates into many terabytes per season. Thus, some sort of distributed server system would be required just to store these data. These data are “big” in the relative sense for any individual, but they are still orders of magnitude away from being “big” in the absolute sense.
What does absolutely big data look like? For an individual user, you might consider the [13\.5\-terabyte data set of 110 billion events released in 2015 by Yahoo!](http://yahoo.tumblr.com/post/137282204964/yahoo-releases-the-largest-ever-machine-learning) for use in machine learning research.
The grand\-daddy of data may be the [*Large Hadron Collider*](https://en.wikipedia.org/w/index.php?search=Large%20Hadron%20Collider) in Europe, which is generating 25 petabytes of data per year (CERN 2008\).
However, only 0\.001% of all of the data that is begin generated by the supercollider is being saved, because to collect it all would mean capturing nearly 500 exabytes *per day*.
This is clearly big data.
21\.2 Tools for bigger data
---------------------------
By now, you have a working knowledge of both **R** and SQL.
These are battle\-tested, valuable tools for working with small and medium data. Both have large user bases, ample deployment, and continue to be very actively developed. Some of that development seeks to make **R** and SQL more useful for truly large data.
While we don’t have the space to cover these extensions in detail, in this section we outline some of the most important concepts for working with big data, and highlight some of the tools you are likely to see on this frontier of your working knowledge.
### 21\.2\.1 Data and memory structures for big data
An alternative to **dplyr**, **data.table** is a popular **R** package for fast SQL\-style operations on very large data tables (many gigabytes of memory).
It is [not clear](http://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly) that **data.table** is faster or more efficient than **dplyr**, and it uses a different—but not necessarily better—syntax.
Moreover, **dplyr** can use **data.table** itself as a backend.
We have chosen to highlight **dplyr** in this book primarily because it fits so well syntactically with a number of other **R** packages we use herein (i.e., the **tidyverse**).
For some problems—more common in machine learning—the number of explanatory variables \\(p\\) can be large (not necessarily relative to the number of observations \\(n\\)).
In such cases, the algorithm to compute a least\-squares regression model may eat up quite a bit of memory.
The **biglm** package seeks to improve on this by providing a memory\-efficient `biglm()` function that can be used in place of `lm()`.
In particular, **biglm** can fit generalized linear models with data frames that are larger than memory.
The package accomplishes this by splitting the computations into more manageable chunks—updating the results iteratively as each chunk is processed.
In this manner, you can write a drop\-in replacement for your existing code that will scale to data sets larger than the memory on your computer.
```
library(tidyverse)
library(mdsr)
library(biglm)
library(bench)
n <- 20000
p <- 500
d <- rnorm(n * (p + 1)) %>%
matrix(ncol = (p + 1)) %>%
as_tibble(.name_repair = "unique")
expl_vars <- names(d) %>%
tail(-1) %>%
paste(collapse = " + ")
my_formula <- as.formula(paste("...1 ~ ", expl_vars))
system_time(lm(my_formula, data = d))
```
```
process real
5.7s 5.7s
```
```
system_time(biglm(my_formula, data = d))
```
```
process real
4.14s 4.14s
```
Here we see that the computation completed more quickly (and can be updated to incorporate more observations, unlike `lm()`).
The **biglm** package is also useful in settings where there are many observations but not so many predictors.
A related package is **bigmemory**.
This package extends **R**’s capabilities to map memory to disk, allowing you to work with larger matrices.
### 21\.2\.2 Compilation
Python, SQL, and **R** are [*interpreted programming language*](https://en.wikipedia.org/w/index.php?search=interpreted%20programming%20language)s.
This means that the code that you write in these languages gets translated into machine language on\-the\-fly as you execute it.
The process is not altogether different than when you hear someone speaking in Russian on the news, and then you hear a halting English translation with a one\- or two\-second delay.
Most of the time, the translation happens so fast that you don’t even notice.
Imagine that instead of translating the Russian speaker’s words on\-the\-fly, the translator took dictation, wrote down a thoughtful translation, and then re\-recorded the segment in English.
You would be able to process the English\-speaking segment faster—because you are fluent in English.
At the same time, the translation would probably be better, since more time and care went into it, and you would likely pick up subtle nuances that were lost in the on\-the\-fly translation. Moreover, once the English segment is recorded, it can be watched at any time without incurring the cost of translation again.
This alternative paradigm involves a one\-time translation of the code called [*compilation*](https://en.wikipedia.org/w/index.php?search=compilation). **R** code is not compiled (it is interpreted), but *C\+\+* code is.
The result of compilation is a binary program that can be executed by the CPU directly.
This is why, for example, you can’t write a desktop application in **R**, and executables written in *C\+\+* will be much faster than scripts written in **R** or Python.
(To continue this line of reasoning, binaries written in assembly language can be faster than those written in *C\+\+*, and binaries written in machine language can be faster than those written in assembly.)
If *C\+\+* is so much faster than **R**, then why write code in **R**?
Here again, it is a trade\-off.
The code written in *C\+\+* may be faster, but when your programming time is taken into account you can often accomplish your task much faster by writing in **R**.
This is because **R** provides extensive libraries that are designed to reduce the amount of code that you have to write. **R** is also interactive, so that you can keep a session alive and continue to write new code as you run the old code.
This is fundamentally different from *C\+\+* development, where you have to re\-compile every time you change a single line of code.
The convenience of **R** programming comes at the expense of speed.
However, there is a compromise.
**Rcpp** allows you to move certain pieces of your **R** code to *C\+\+*.
The basic idea is that **Rcpp** provides *C\+\+* data structures that correspond to **R** data structures (e.g., a `data.frame` data structure written in *C\+\+*).
It is thus possible to write functions in **R** that get compiled into faster *C\+\+* code, with minimal additional effort on the part of the **R** programmer.
The **dplyr** package makes extensive use of this functionality to improve performance.
### 21\.2\.3 Parallel and distributed computing
#### 21\.2\.3\.1 Embarrassingly parallel computing
How do you increase a program’s capacity to work with larger data?
The most obvious way is to add more memory (i.e., [*RAM*](https://en.wikipedia.org/w/index.php?search=RAM)) to your computer.
This enables the program to read more data at once, enabling greater functionality with any additional programming.
But what if the bottleneck is not the memory, but the processor ([*CPU*](https://en.wikipedia.org/w/index.php?search=CPU))?
A processor can only do one thing at a time.
So if you have a computation that takes \\(t\\) units of time, and you have to do that computation for many different data sets, then you can expect that it will take many more units of time to complete.
For example, suppose we generate 20 sets of 1 million \\((x,y)\\) random pairs and want to fit a regression model to each set.
```
n <- 1e6
k <- 20
d <- tibble(y = rnorm(n*k), x = rnorm(n*k), set = rep(1:k, each = n))
fit_lm <- function(data, set_id) {
data %>%
filter(set == set_id) %>%
lm(y ~ x, data = .)
}
```
However long it takes to do it for the first set, it should take about 20 times as long to do it for all 20 sets.
This is as expected, since the computation procedure was to fit the regression model for the first set, then fit it for the second set, and so on.
```
system_time(map(1:1, fit_lm, data = d))
```
```
process real
657ms 657ms
```
```
system_time(map(1:k, fit_lm, data = d))
```
```
process real
8.86s 8.86s
```
However, in this particular case, the data in each of the 20 sets has nothing to do with the data in any of the other sets.
This is an example of an [*embarrassingly parallel*](https://en.wikipedia.org/w/index.php?search=embarrassingly%20parallel) problem.
These data are ripe candidates for a [*parallelized*](https://en.wikipedia.org/w/index.php?search=parallelized) computation.
If we had 20 processors, we could fit one regression model on each CPU—all at the same time—and get our final result in about the same time as it takes to fit the model to *one* set of data.
This would be a tremendous improvement in speed.
Unfortunately, we don’t have 20 CPUs. Nevertheless, most modern computers have multiple cores.
```
library(parallel)
my_cores <- detectCores()
my_cores
```
```
[1] 4
```
The **parallel** package provides functionality for parallel computation in **R**.
The **furrr** package extends the **future** package to allow us to express embarrassingly parallel computations in our familiar **purrr** syntax (Vaughan and Dancho 2021\).
Specifically, it provides a function `future_map()` that works just like `map()` (see Chapter [7](ch-iteration.html#ch:iteration)), except that it spreads the computations over multiple cores.
The theoretical speed\-up is a function of `my_cores`, but in practice this may be less for a variety of reasons (most notably, the overhead associated with combining the parallel results).
The **plan** function sets up a parallel computing environment. In this case, we are using the `multiprocess` mode, which will split computations across asynchronous separate **R** sessions. The `workers` argument to `plan()` controls the number of cores being used for parallel computation. Next, we fit the 20 regression models using the `future_map()` function instead of **map**. Once completed, set the computation mode back to `sequential` for the remainder of this chapter.
```
library(furrr)
plan(multiprocess, workers = my_cores)
system_time(
future_map(1:k, fit_lm, data = d)
)
```
```
process real
9.98s 11.56s
```
```
plan(sequential)
```
In this case, the overhead associated with combining the results was larger than the savings from parallelizing the computation. But this will not always be the case.
#### 21\.2\.3\.2 GPU computing and CUDA
Another fruitful avenue to speed up computations is through use of a graphical processing unit (GPU).
These devices feature a highly parallel structure that can lead to significant performance gains.
[*CUDA*](https://en.wikipedia.org/w/index.php?search=CUDA) is a parallel computing platform and application programming interface created by [*NVIDIA*](https://en.wikipedia.org/w/index.php?search=NVIDIA) (one of the largest manufacturers of GPUs).
The **OpenCL** package provides bindings for **R** to the open\-source, general\-purpose OpenCL programming language for GPU computing.
#### 21\.2\.3\.3 MapReduce
[*MapReduce*](https://en.wikipedia.org/w/index.php?search=MapReduce) is a programming paradigm for parallel computing. To solve a task using a MapReduce framework, two functions must be written:
1. `Map(key_0, value_0)`: The `Map()` function reads in the original data (which is stored in key\-value pairs), and splits it up into smaller subtasks. It returns a `list` of key\-value pairs \\((key\_1, value\_1\)\\), where the keys and values are not necessarily of the same type as the original ones.
2. `Reduce(key_1, list(value_1))`: The MapReduce implementation has a method for aggregating the key\-value pairs returned by the `Map()` function by their keys (i.e., `key_1`). Thus, you only have to write the `Reduce()` function, which takes as input a particular `key_1`, and a list of all the `value_1`’s that correspond to `key_1`. The `Reduce()` function then performs some operation on that list, and returns a list of values.
MapReduce is efficient and effective because the `Map()` step can be highly parallelized.
Moreover, MapReduce is also fault tolerant, because if any individual `Map()` job fails, the controller can simply start another one.
The `Reduce()` step often provides functionality similar to a `GROUP BY` operation in SQL.
##### Example
The canonical MapReduce example is to tabulate the frequency of each word in a large number of text documents (i.e., a [*corpus*](https://en.wikipedia.org/w/index.php?search=corpus) (see Chapter [19](ch-text.html#ch:text))).
In what follows, we show an implementation written in Python by [Bill Howe](https://en.wikipedia.org/w/index.php?search=Bill%20Howe) of the [*University of Washington*](https://en.wikipedia.org/w/index.php?search=University%20of%20Washington) (Howe 2014\).
Note that at the beginning, this bit of code calls [an external `MapReduce` library](https://github.com/uwescience/datasci_course_materials/blob/master/assignment3/MapReduce.py) that actually implements MapReduce.
The user only needs to write the two functions shown in this block of code—not the MapReduce library itself.
```
import MapReduce
import sys
mr = MapReduce.MapReduce()
def mapper(record):
key = record[0]
value = record[1]
words = value.split()
for w in words:
mr.emit_intermediate(w, 1)
def reducer(key, list_of_values):
total = 0
for v in list_of_values:
total += v
mr.emit((key, total))
if __name__ == '__main__':
inputdata = open(sys.argv[1])
mr.execute(inputdata, mapper, reducer)
```
We will use this MapReduce program to compile a word count for the [issues raised on GitHub for the **ggplot2** package](https://github.com/hadley/ggplot2/issues).
These are stored in a [*JSON*](https://en.wikipedia.org/w/index.php?search=JSON) file (see Chapter [6](ch-dataII.html#ch:dataII)) as a single JSON array.
Since we want to illustrate how MapReduce can parallelize over many files, we will convert this single array into a JSON object for each issue.
This will mimic the typical use case.
The **jsonlite** package provides functionality for coverting between JSON objects and native **R** data structures.
```
library(jsonlite)
url <- "https://api.github.com/repos/tidyverse/ggplot2/issues"
gg_issues <- url %>%
fromJSON() %>%
select(url, body) %>%
group_split(url) %>%
map_chr(~toJSON(as.character(.x))) %>%
write(file = "code/map-reduce/issues.json")
```
For example, the first issue is displayed below.
Note that it consists of two comma\-separated character strings within brackets.
We can think of this as having the format: `[key, value]`.
```
readLines("code/map-reduce/issues.json") %>%
head(1) %>%
str_wrap(width = 70) %>%
cat()
```
```
["https://api.github.com/repos/tidyverse/ggplot2/issues/4019","When
setting the limits of `scale_fill_steps()`, the fill brackets in
the legend becomes unevenly spaced. It's not clear why or how this
happens.\r\n\r\n``` r\r\nlibrary(tidyverse)\r\n\r\ndf <- tibble(\r\n
crossing(\r\n tibble(sample = paste(\"sample\", 1:4)),\r\n tibble(pos
= 1:4)\r\n ),\r\n val = runif(16)\r\n)\r\n\r\nggplot(df, aes(x =
pos, y = sample)) +\r\n geom_line() +\r\n geom_point(aes(fill =
val), pch = 21, size = 7) +\r\n scale_fill_steps(low = \"white\",
high = \"black\")\r\n```\r\n\r\n
\r\n\r\n``` r\r\n\r\nggplot(df, aes(x = pos, y = sample)) +\r\n
geom_line() +\r\n geom_point(aes(fill = val), pch = 21, size = 7) +
\r\n scale_fill_steps(low = \"white\", high = \"black\", limits
= c(0, 1))\r\n```\r\n\r\n
\r\n\r\n<sup>Created on 2020-05-22 by the [reprex package](https://
reprex.tidyverse.org) (v0.3.0)<\/sup>"]
```
In the Python code written above (which is stored in the file `wordcount.py`), the `mapper()` function takes a `record` argument (i.e., one line of the `issues.json` file), and examines its first two elements—the `key` becomes the first argument (in this case, the URL of the GitHub issue) and the `value` becomes the second argument (the text of the issue).
After splitting the `value` on each space, the `mapper()` function emits a \\((key, value)\\) pair for each word.
Thus, the first issue shown above would generate the pairs: `(When, 1)`, `(setting, 1)`, `(the, 1)`, etc.
The `MapReduce` library provides a mechanism for efficiently collecting all of the resulting pairs based on the `key`, which in this case corresponds to a single word.
The `reducer()` function simply adds up all of the values associated with each key.
In this case, these values are all `1`s, so the resulting pair is a word and the number of times it appears (e.g., `(the, 158)`, etc.).
Thanks to the **reticulate** package, we can run this Python script from within **R** and bring the results into **R** for further analysis.
We see that the most common words in this corpus are short articles and prepositions.
```
library(mdsr)
cmd <- "python code/map-reduce/wordcount.py code/map-reduce/issues.json"
res <- system(cmd, intern = TRUE)
freq_df <- res %>%
purrr::map(jsonlite::fromJSON) %>%
purrr::map(set_names, c("word", "count")) %>%
bind_rows() %>%
mutate(count = parse_number(count))
glimpse(freq_df)
```
```
Rows: 1,605
Columns: 2
$ word <chr> "geom_point(aes(fill", "aliased", "desirable", "ggplot(ct)+g…
$ count <dbl> 2, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 3, 1, 2, 2, 1, 5, …
```
```
freq_df %>%
filter(str_detect(pattern = "[a-z]", word)) %>%
arrange(desc(count)) %>%
head(10)
```
```
# A tibble: 10 × 2
word count
<chr> <dbl>
1 the 147
2 to 87
3 a 63
4 of 45
5 is 43
6 in 33
7 and 32
8 that 31
9 it 28
10 be 28
```
MapReduce is popular and, for certain problems, offers real advantages over SQL.
When MapReduce first became popular and Google used it to redo its webpage ranking system (see Chapter [20](ch-netsci.html#ch:netsci)), there was great excitement about a coming “paradigm shift” in parallel and distributed computing.
Nevertheless, advocates of SQL have challenged the notion that SQL has been completely superseded by MapReduce (Stonebraker et al. 2010\).
#### 21\.2\.3\.4 Hadoop
As noted previously, MapReduce requires a software implementation.
One popular such implementation is Hadoop MapReduce, which is one of the core components of [*Apache Hadoop*](https://en.wikipedia.org/w/index.php?search=Apache%20Hadoop).
Hadoop is a larger software ecosystem for storing and processing large data sets; it includes a distributed file system, Pig, Hive, Spark, and other popular open\-source software tools.
While we won’t be able to go into great detail about these items, we will illustrate how to interface with Spark, which has a particularly tight integration with **RStudio**.
#### 21\.2\.3\.5 Spark
One nice feature of [*Apache Spark*](https://en.wikipedia.org/w/index.php?search=Apache%20Spark)—especially for our purposes—is that while it requires a distributed file system, it can implement a pseudo\-distributed file system on a single machine.
This makes it possible for you to experiment with Spark on your local machine even if you don’t have access to a cluster. For obvious reasons, you won’t actually see the performance boost that parallelism can bring, but you can try it out and debug your code.
Furthermore, the [**sparklyr** package](http://spark.rstudio.com/) makes it painless to install a local Spark cluster from within **R**, as well as connect to a local or remote cluster.
Once the **sparklyr** package is installed, we can use it to install a local Spark cluster.
```
library(sparklyr)
spark_install(version = "3.0") # only once!
```
Next, we make a connection to our local Spark instance from within **R**.
Of course, if we were connecting to a remote Spark cluster, we could modify the `master` argument to reflect that.
Spark requires [*Java*](https://en.wikipedia.org/w/index.php?search=Java), so you may have to install the [*Java Development Kit*](https://en.wikipedia.org/w/index.php?search=Java%20Development%20Kit) before using Spark.[41](#fn41)
```
# sudo apt-get install openjdk-8-jdk
sc <- spark_connect(master = "local", version = "3.0")
class(sc)
```
```
[1] "spark_connection" "spark_shell_connection"
[3] "DBIConnection"
```
Note that `sc` has class `DBIConnection`—this means that it can do many of the things that other **dplyr** connections can do.
For example, the `src_tbls()` function works just like it did on the MySQL connection objects we saw in Chapter [15](ch-sql.html#ch:sql).
```
src_tbls(sc)
```
```
character(0)
```
In this case, there are no tables present in this Spark cluster, but we can add them using the `copy_to()` command.
Here, we will load the `babynames` table from the **babynames** package.
```
babynames_tbl <- sc %>%
copy_to(babynames::babynames, "babynames")
src_tbls(sc)
```
```
[1] "babynames"
```
```
class(babynames_tbl)
```
```
[1] "tbl_spark" "tbl_sql" "tbl_lazy" "tbl"
```
The `babynames_tbl` object is a `tbl_spark`, but also a `tbl_sql`.
Again, this is analogous to what we saw in Chapter [15](ch-sql.html#ch:sql), where a `tbl_MySQLConnection` was also a `tbl_sql`.
```
babynames_tbl %>%
filter(name == "Benjamin") %>%
group_by(year) %>%
summarize(N = n(), total_births = sum(n)) %>%
arrange(desc(total_births)) %>%
head()
```
```
# Source: spark<?> [?? x 3]
# Ordered by: desc(total_births)
year N total_births
<dbl> <dbl> <dbl>
1 1989 2 15785
2 1988 2 15279
3 1987 2 14953
4 2000 2 14864
5 1990 2 14660
6 2016 2 14641
```
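Under the hood, **dplyr** translates this pipeline into Spark SQL and only runs it when the results are requested. If you are curious, the generated statement can be inspected with `show_query()`; a quick sketch (the exact SQL that Spark produces may vary by version):
```
babynames_tbl %>%
  filter(name == "Benjamin") %>%
  group_by(year) %>%
  summarize(N = n(), total_births = sum(n)) %>%
  arrange(desc(total_births)) %>%
  show_query()
```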
As we will see below with [*Google BigQuery*](https://en.wikipedia.org/w/index.php?search=Google%20BigQuery), even though Spark is a parallelized technology designed to supersede SQL, it is still useful to know SQL in order to use Spark.
Like BigQuery, **sparklyr** allows you to work with a Spark cluster using the familiar **dplyr** interface.
As you might suspect, because `babynames_tbl` is a `tbl_sql`, it implements SQL methods common in **DBI**.
Thus, we can also write SQL queries against our Spark cluster.
```
library(DBI)
dbGetQuery(sc, "SELECT year, sum(1) as N, sum(n) as total_births
FROM babynames WHERE name == 'Benjamin'
GROUP BY year
ORDER BY total_births desc
LIMIT 6")
```
```
year N total_births
1 1989 2 15785
2 1988 2 15279
3 1987 2 14953
4 2000 2 14864
5 1990 2 14660
6 2016 2 14641
```
Finally, because Spark includes not only a database infrastructure, but also a machine learning library, **sparklyr** allows you to fit many of the models we outlined in Chapters [11](ch-learningI.html#ch:learningI) and [12](ch-learningII.html#ch:learningII) within Spark.
This means that you can rely on Spark’s big data capabilities without having to bring all of your data into **R**’s memory.
As a motivating example, we fit a multiple regression model for the amount of rainfall at the MacLeish field station as a function of the temperature, pressure, and relative humidity.
```
library(macleish)
weather_tbl <- copy_to(sc, whately_2015)
weather_tbl %>%
ml_linear_regression(rainfall ~ temperature + pressure + rel_humidity) %>%
summary()
```
```
Deviance Residuals:
Min 1Q Median 3Q Max
-0.041290 -0.021761 -0.011632 -0.000576 15.968356
Coefficients:
(Intercept) temperature pressure rel_humidity
0.717754 0.000409 -0.000755 0.000438
R-Squared: 0.004824
Root Mean Squared Error: 0.1982
```
The most recent versions of **RStudio** include integrated support for management of Spark clusters.
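When you are done working with the cluster, it is good practice to close the connection. For a local cluster like the one used here, that is a single call:
```
# Shut down the connection to the local Spark cluster
spark_disconnect(sc)
```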
### 21\.2\.4 Alternatives to SQL
Relational database management systems can be spread across multiple computers into what is called a [*cluster*](https://en.wikipedia.org/w/index.php?search=cluster).
In fact, it is widely acknowledged that one of the things that allowed Google to grow so fast was its use of the open\-source (zero cost) MySQL RDBMS running as a cluster across many identical low\-cost servers.
That is, rather than investing large amounts of money in big machines, they built a massive MySQL cluster over many small, cheap machines.
Both [MySQL](https://en.wikipedia.org/wiki/MySQL_Cluster) and [PostgreSQL](http://www.postgresql.org/docs/9.4/static/creating-cluster.html) provide functionality for extending a single installation to a cluster.
A cloud\-based computing service, such as Amazon Web Services, Google Cloud Platform, or Digital Ocean, offers a low\-cost alternative to building your own server farm (many of these companies offer free credits for student and instructor use).
#### 21\.2\.4\.1 BigQuery
[*BigQuery*](https://en.wikipedia.org/w/index.php?search=BigQuery) is a Web service offered by Google. Internally, the BigQuery service is supported by [*Dremel*](https://en.wikipedia.org/w/index.php?search=Dremel), the open\-source version of which is [*Apache Drill*](https://en.wikipedia.org/w/index.php?search=Apache%20Drill).
The **bigrquery** [package](https://github.com/rstats-db/bigrquery) for **R** provides access to BigQuery from within **R**.
To use the BigQuery service, you need to sign up for an account with Google, but you won’t be charged unless you exceed the free limit of 10,000 requests per day (the [BigQuery sandbox](https://cloud.google.com/bigquery/docs/sandbox) provides free access subject to certain limits).
If you want to use your own data, you have to upload it to Google Cloud Storage, but Google provides many data sets that you can use for free (e.g., COVID, Census, real\-estate transactions). Here we illustrate how to query the `shakespeare` data set—which is a list of all of the words that appear in Shakespeare’s plays—to find the most common words. Note that BigQuery understands a recognizable [dialect of SQL](https://cloud.google.com/bigquery/query-reference)—what makes BigQuery special is that it is built on top of Google’s massive computing architecture.
```
library(bigrquery)
project_id <- "my-google-id"
sql <- "
SELECT word
, count(distinct corpus) AS numPlays
, sum(word_count) AS N
FROM [publicdata:samples.shakespeare]
GROUP BY word
ORDER BY N desc
LIMIT 10
"
bq_project_query(project_id, sql) %>%  # submit the query, billed to project_id
  bq_table_download()                  # download the result set as a tibble
```
```
4.9 megabytes processed
word numPlays N
1 the 42 25568
2 I 42 21028
3 and 42 19649
4 to 42 17361
5 of 42 16438
6 a 42 13409
7 you 42 12527
8 my 42 11291
9 in 42 10589
10 is 42 8735
```
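The **bigrquery** package also registers a **DBI** driver, so, as with Spark, the same query can be expressed through the familiar **dplyr** interface instead of raw SQL. The sketch below is not from the text: it assumes the same public `samples.shakespeare` table used above and bills queries to your `project_id`.
```
library(DBI)
library(dplyr)

# Connect to the public `samples` data set; queries are billed to project_id
con <- dbConnect(
  bigrquery::bigquery(),
  project = "publicdata",
  dataset = "samples",
  billing = project_id
)

tbl(con, "shakespeare") %>%
  group_by(word) %>%
  summarize(numPlays = n_distinct(corpus), N = sum(word_count, na.rm = TRUE)) %>%
  arrange(desc(N)) %>%
  head(10)
```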
#### 21\.2\.4\.2 NoSQL
[*NoSQL*](https://en.wikipedia.org/w/index.php?search=NoSQL) refers not to a specific technology, but rather to a class of database architectures that are *not* based on the notion—so central to SQL (and `data.frame`s in **R**)—that a table consists of a rectangular array of rows and columns.
Rather than being built around tables, NoSQL databases may be built around columns, key\-value pairs, documents, or graphs.
Nevertheless, NoSQL databases may (or may not) include an SQL\-like query language for retrieving data.
One particularly successful NoSQL database is [*MongoDB*](https://en.wikipedia.org/w/index.php?search=MongoDB), which is based on a document structure.
In particular, MongoDB is often used to store JSON objects (see Chapter [6](ch-dataII.html#ch:dataII)), which are not necessarily tabular.
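For completeness, the **mongolite** package provides a simple **R** client for MongoDB. A minimal sketch, assuming a MongoDB server is running locally and inventing an `issues` collection purely for illustration:
```
library(mongolite)

# Connect to a hypothetical local MongoDB server and an "issues" collection
issues <- mongo(
  collection = "issues",
  db = "ggplot2",
  url = "mongodb://localhost"
)

# Each row of a data frame is stored as one JSON document
issues$insert(data.frame(
  url = "https://api.github.com/repos/tidyverse/ggplot2/issues/4019",
  state = "open"
))

# Queries are themselves expressed as JSON
issues$find('{"state": "open"}')
```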
We will use this MapReduce program to compile a word count for the [issues raised on GitHub for the **ggplot2** package](https://github.com/hadley/ggplot2/issues).
These are stored in a [*JSON*](https://en.wikipedia.org/w/index.php?search=JSON) file (see Chapter [6](ch-dataII.html#ch:dataII)) as a single JSON array.
Since we want to illustrate how MapReduce can parallelize over many files, we will convert this single array into a JSON object for each issue.
This will mimic the typical use case.
The **jsonlite** package provides functionality for coverting between JSON objects and native **R** data structures.
```
library(jsonlite)
url <- "https://api.github.com/repos/tidyverse/ggplot2/issues"
gg_issues <- url %>%
fromJSON() %>%
select(url, body) %>%
group_split(url) %>%
map_chr(~toJSON(as.character(.x))) %>%
write(file = "code/map-reduce/issues.json")
```
For example, the first issue is displayed below.
Note that it consists of two comma\-separated character strings within brackets.
We can think of this as having the format: `[key, value]`.
```
readLines("code/map-reduce/issues.json") %>%
head(1) %>%
str_wrap(width = 70) %>%
cat()
```
```
["https://api.github.com/repos/tidyverse/ggplot2/issues/4019","When
setting the limits of `scale_fill_steps()`, the fill brackets in
the legend becomes unevenly spaced. It's not clear why or how this
happens.\r\n\r\n``` r\r\nlibrary(tidyverse)\r\n\r\ndf <- tibble(\r\n
crossing(\r\n tibble(sample = paste(\"sample\", 1:4)),\r\n tibble(pos
= 1:4)\r\n ),\r\n val = runif(16)\r\n)\r\n\r\nggplot(df, aes(x =
pos, y = sample)) +\r\n geom_line() +\r\n geom_point(aes(fill =
val), pch = 21, size = 7) +\r\n scale_fill_steps(low = \"white\",
high = \"black\")\r\n```\r\n\r\n
\r\n\r\n``` r\r\n\r\nggplot(df, aes(x = pos, y = sample)) +\r\n
geom_line() +\r\n geom_point(aes(fill = val), pch = 21, size = 7) +
\r\n scale_fill_steps(low = \"white\", high = \"black\", limits
= c(0, 1))\r\n```\r\n\r\n
\r\n\r\n<sup>Created on 2020-05-22 by the [reprex package](https://
reprex.tidyverse.org) (v0.3.0)<\/sup>"]
```
In the Python code written above (which is stored in the file `wordcount.py`), the `mapper()` function takes a `record` argument (i.e., one line of the `issues.json` file), and examines its first two elements—the `key` becomes the first argument (in this case, the URL of the GitHub issue) and the `value` becomes the second argument (the text of the issue).
After splitting the `value` on each space, the `mapper()` function emits a \\((key, value)\\) pair for each word.
Thus, the first issue shown above would generate the pairs: `(When, 1)`, `(setting, 1)`, `(the, 1)`, etc.
The `MapReduce` library provides a mechanism for efficiently collecting all of the resulting pairs based on the `key`, which in this case corresponds to a single word.
The `reducer()` function simply adds up all of the values associated with each key.
In this case, these values are all `1`s, so the resulting pair is a word and the number of times it appears (e.g., `(the, 158)`, etc.).
Thanks to the **reticulate** package, we can run this Python script from within **R** and bring the results into **R** for further analysis.
We see that the most common words in this corpus are short articles and prepositions.
```
library(mdsr)
cmd <- "python code/map-reduce/wordcount.py code/map-reduce/issues.json"
res <- system(cmd, intern = TRUE)
freq_df <- res %>%
purrr::map(jsonlite::fromJSON) %>%
purrr::map(set_names, c("word", "count")) %>%
bind_rows() %>%
mutate(count = parse_number(count))
glimpse(freq_df)
```
```
Rows: 1,605
Columns: 2
$ word <chr> "geom_point(aes(fill", "aliased", "desirable", "ggplot(ct)+g…
$ count <dbl> 2, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 3, 1, 2, 2, 1, 5, …
```
```
freq_df %>%
filter(str_detect(pattern = "[a-z]", word)) %>%
arrange(desc(count)) %>%
head(10)
```
```
# A tibble: 10 × 2
word count
<chr> <dbl>
1 the 147
2 to 87
3 a 63
4 of 45
5 is 43
6 in 33
7 and 32
8 that 31
9 it 28
10 be 28
```
MapReduce is popular and offers some advantages over SQL for some problems.
When MapReduce first became popular, and Google used it to redo their
webpage ranking system (see Chapter [20](ch-netsci.html#ch:netsci)), there was great excitement about a coming “paradigm shift” in parallel and distributed computing.
Nevertheless, advocates of SQL have challenged the notion that it has been completely superseded by MapReduce (Stonebraker et al. 2010\).
#### 21\.2\.3\.4 Hadoop
As noted previously, MapReduce requires a software implementation.
One popular such implementation is Hadoop MapReduce, which is one of the core components of [*Apache Hadoop*](https://en.wikipedia.org/w/index.php?search=Apache%20Hadoop).
Hadoop is a larger software ecosystem for storing and processing large data that includes a distributed file system, Pig, Hive, Spark, and other popular open\-source software tools.
While we won’t be able to go into great detail about these items, we will illustrate how to interface with Spark, which has a particularly tight integration with **RStudio**.
#### 21\.2\.3\.5 Spark
One nice feature of [*Apache Spark*](https://en.wikipedia.org/w/index.php?search=Apache%20Spark)—especially for our purposes—is that while it requires a distributed file system, it can implement a pseudo\-distributed file system on a single machine.
This makes it possible for you to experiment with Spark on your local machine even if you don’t have access to a cluster. For obvious reasons, you won’t actually see the performance boost that parallelism can bring, but you can try it out and debug your code.
Furthermore, the [**sparklyr** package](http://spark.rstudio.com/) makes it painless to install a local Spark cluster from within **R**, as well as connect to a local or remote cluster.
Once the **sparklyr** package is installed, we can use it to install a local Spark cluster.
```
library(sparklyr)
spark_install(version = "3.0") # only once!
```
Next, we make a connection to our local Spark instance from within **R**.
Of course, if we were connecting to a remote Spark cluster, we could modify the `master` argument to reflect that.
Spark requires [*Java*](https://en.wikipedia.org/w/index.php?search=Java), so you may have to install the [*Java Development Kit*](https://en.wikipedia.org/w/index.php?search=Java%20Development%20Kit) before using Spark.[41](#fn41)
```
# sudo apt-get install openjdk-8-jdk
sc <- spark_connect(master = "local", version = "3.0")
class(sc)
```
```
[1] "spark_connection" "spark_shell_connection"
[3] "DBIConnection"
```
Note that `sc` has class `DBIConnection`—this means that it can do many of the things that other **dplyr** connections can do.
For example, the `src_tbls()` function works just like it did on the MySQL connection objects we saw in Chapter [15](ch-sql.html#ch:sql).
```
src_tbls(sc)
```
```
character(0)
```
In this case, there are no tables present in this Spark cluster, but we can add them using the `copy_to()` command.
Here, we will load the `babynames` table from the **babynames** package.
```
babynames_tbl <- sc %>%
copy_to(babynames::babynames, "babynames")
src_tbls(sc)
```
```
[1] "babynames"
```
```
class(babynames_tbl)
```
```
[1] "tbl_spark" "tbl_sql" "tbl_lazy" "tbl"
```
The `babynames_tbl` object is a `tbl_spark`, but also a `tbl_sql`.
Again, this is analogous to what we saw in Chapter [15](ch-sql.html#ch:sql), where a `tbl_MySQLConnection` was also a `tbl_sql`.
```
babynames_tbl %>%
filter(name == "Benjamin") %>%
group_by(year) %>%
summarize(N = n(), total_births = sum(n)) %>%
arrange(desc(total_births)) %>%
head()
```
```
# Source: spark<?> [?? x 3]
# Ordered by: desc(total_births)
year N total_births
<dbl> <dbl> <dbl>
1 1989 2 15785
2 1988 2 15279
3 1987 2 14953
4 2000 2 14864
5 1990 2 14660
6 2016 2 14641
```
As we will see below with [*Google BigQuery*](https://en.wikipedia.org/w/index.php?search=Google%20BigQuery), even though Spark is a parallelized technology designed to supersede SQL, it is still useful to know SQL in order to use Spark.
Like BigQuery, **sparklyr** allows you to work with a Spark cluster using the familiar **dplyr** interface.
As you might suspect, because `babynames_tbl` is a `tbl_sql`, it implements SQL methods common in **DBI**.
Thus, we can also write SQL queries against our Spark cluster.
```
library(DBI)
dbGetQuery(sc, "SELECT year, sum(1) as N, sum(n) as total_births
FROM babynames WHERE name == 'Benjamin'
GROUP BY year
ORDER BY total_births desc
LIMIT 6")
```
```
year N total_births
1 1989 2 15785
2 1988 2 15279
3 1987 2 14953
4 2000 2 14864
5 1990 2 14660
6 2016 2 14641
```
Finally, because Spark includes not only a database infrastructure, but also a machine learning library, **sparklyr** allows you to fit many of the models we outlined in Chapter [11](ch-learningI.html#ch:learningI) and [12](ch-learningII.html#ch:learningII) within Spark.
This means that you can rely on Spark’s big data capabilities without having to bring all of your data into **R**’s memory.
As a motivating example, we fit a multiple regression model for the amount of rainfall at the MacLeish field station as a function of the temperature, pressure, and relative humidity.
```
library(macleish)
weather_tbl <- copy_to(sc, whately_2015)
weather_tbl %>%
ml_linear_regression(rainfall ~ temperature + pressure + rel_humidity) %>%
summary()
```
```
Deviance Residuals:
Min 1Q Median 3Q Max
-0.041290 -0.021761 -0.011632 -0.000576 15.968356
Coefficients:
(Intercept) temperature pressure rel_humidity
0.717754 0.000409 -0.000755 0.000438
R-Squared: 0.004824
Root Mean Squared Error: 0.1982
```
The most recent versions of **RStudio** include integrated support for management of Spark clusters.
#### 21\.2\.3\.1 Embarrassingly parallel computing
How do you increase a program’s capacity to work with larger data?
The most obvious way is to add more memory (i.e., [*RAM*](https://en.wikipedia.org/w/index.php?search=RAM)) to your computer.
This enables the program to read more data at once, enabling greater functionality with any additional programming.
But what if the bottleneck is not the memory, but the processor ([*CPU*](https://en.wikipedia.org/w/index.php?search=CPU))?
A processor can only do one thing at a time.
So if you have a computation that takes \\(t\\) units of time, and you have to do that computation for many different data sets, then you can expect that it will take many more units of time to complete.
For example, suppose we generate 20 sets of 1 million \\((x,y)\\) random pairs and want to fit a regression model to each set.
```
n <- 1e6
k <- 20
d <- tibble(y = rnorm(n*k), x = rnorm(n*k), set = rep(1:k, each = n))
fit_lm <- function(data, set_id) {
data %>%
filter(set == set_id) %>%
lm(y ~ x, data = .)
}
```
However long it takes to do it for the first set, it should take about 20 times as long to do it for all 20 sets.
This is as expected, since the computation procedure was to fit the regression model for the first set, then fit it for the second set, and so on.
```
system_time(map(1:1, fit_lm, data = d))
```
```
process real
657ms 657ms
```
```
system_time(map(1:k, fit_lm, data = d))
```
```
process real
8.86s 8.86s
```
However, in this particular case, the data in each of the 20 sets has nothing to do with the data in any of the other sets.
This is an example of an [*embarrassingly parallel*](https://en.wikipedia.org/w/index.php?search=embarrassingly%20parallel) problem.
These data are ripe candidates for a [*parallelized*](https://en.wikipedia.org/w/index.php?search=parallelized) computation.
If we had 20 processors, we could fit one regression model on each CPU—all at the same time—and get our final result in about the same time as it takes to fit the model to *one* set of data.
This would be a tremendous improvement in speed.
Unfortunately, we don’t have 20 CPUs. Nevertheless, most modern computers have multiple cores.
```
library(parallel)
my_cores <- detectCores()
my_cores
```
```
[1] 4
```
The **parallel** package provides functionality for parallel computation in **R**.
The **furrr** package extends the **future** package to allow us to express embarrassingly parallel computations in our familiar **purrr** syntax (Vaughan and Dancho 2021\).
Specifically, it provides a function `future_map()` that works just like `map()` (see Chapter [7](ch-iteration.html#ch:iteration)), except that it spreads the computations over multiple cores.
The theoretical speed\-up is a function of `my_cores`, but in practice this may be less for a variety of reasons (most notably, the overhead associated with combining the parallel results).
The **plan** function sets up a parallel computing environment. In this case, we are using the `multiprocess` mode, which will split computations across asynchronous separate **R** sessions. The `workers` argument to `plan()` controls the number of cores being used for parallel computation. Next, we fit the 20 regression models using the `future_map()` function instead of **map**. Once completed, set the computation mode back to `sequential` for the remainder of this chapter.
```
library(furrr)
plan(multiprocess, workers = my_cores)
system_time(
future_map(1:k, fit_lm, data = d)
)
```
```
process real
9.98s 11.56s
```
```
plan(sequential)
```
In this case, the overhead associated with combining the results was larger than the savings from parallelizing the computation. But this will not always be the case.
#### 21\.2\.3\.2 GPU computing and CUDA
Another fruitful avenue to speed up computations is through use of a graphical processing unit (GPU).
These devices feature a highly parallel structure that can lead to significant performance gains.
[*CUDA*](https://en.wikipedia.org/w/index.php?search=CUDA) is a parallel computing platform and application programming interface created by [*NVIDIA*](https://en.wikipedia.org/w/index.php?search=NVIDIA) (one of the largest manufacturers of GPUs).
The **OpenCL** package provides bindings for **R** to the open\-source, general\-purpose OpenCL programming language for GPU computing.
#### 21\.2\.3\.3 MapReduce
[*MapReduce*](https://en.wikipedia.org/w/index.php?search=MapReduce) is a programming paradigm for parallel computing. To solve a task using a MapReduce framework, two functions must be written:
1. `Map(key_0, value_0)`: The `Map()` function reads in the original data (which is stored in key\-value pairs), and splits it up into smaller subtasks. It returns a `list` of key\-value pairs \\((key\_1, value\_1\)\\), where the keys and values are not necessarily of the same type as the original ones.
2. `Reduce(key_1, list(value_1))`: The MapReduce implementation has a method for aggregating the key\-value pairs returned by the `Map()` function by their keys (i.e., `key_1`). Thus, you only have to write the `Reduce()` function, which takes as input a particular `key_1`, and a list of all the `value_1`’s that correspond to `key_1`. The `Reduce()` function then performs some operation on that list, and returns a list of values.
MapReduce is efficient and effective because the `Map()` step can be highly parallelized.
Moreover, MapReduce is also fault tolerant, because if any individual `Map()` job fails, the controller can simply start another one.
The `Reduce()` step often provides functionality similar to a `GROUP BY` operation in SQL.
##### Example
The canonical MapReduce example is to tabulate the frequency of each word in a large number of text documents (i.e., a [*corpus*](https://en.wikipedia.org/w/index.php?search=corpus) (see Chapter [19](ch-text.html#ch:text))).
In what follows, we show an implementation written in Python by [Bill Howe](https://en.wikipedia.org/w/index.php?search=Bill%20Howe) of the [*University of Washington*](https://en.wikipedia.org/w/index.php?search=University%20of%20Washington) (Howe 2014\).
Note that at the beginning, this bit of code calls [an external `MapReduce` library](https://github.com/uwescience/datasci_course_materials/blob/master/assignment3/MapReduce.py) that actually implements MapReduce.
The user only needs to write the two functions shown in this block of code—not the MapReduce library itself.
```
import MapReduce
import sys
mr = MapReduce.MapReduce()
def mapper(record):
key = record[0]
value = record[1]
words = value.split()
for w in words:
mr.emit_intermediate(w, 1)
def reducer(key, list_of_values):
total = 0
for v in list_of_values:
total += v
mr.emit((key, total))
if __name__ == '__main__':
inputdata = open(sys.argv[1])
mr.execute(inputdata, mapper, reducer)
```
We will use this MapReduce program to compile a word count for the [issues raised on GitHub for the **ggplot2** package](https://github.com/hadley/ggplot2/issues).
These are stored in a [*JSON*](https://en.wikipedia.org/w/index.php?search=JSON) file (see Chapter [6](ch-dataII.html#ch:dataII)) as a single JSON array.
Since we want to illustrate how MapReduce can parallelize over many files, we will convert this single array into a JSON object for each issue.
This will mimic the typical use case.
The **jsonlite** package provides functionality for coverting between JSON objects and native **R** data structures.
```
library(jsonlite)
url <- "https://api.github.com/repos/tidyverse/ggplot2/issues"
gg_issues <- url %>%
fromJSON() %>%
select(url, body) %>%
group_split(url) %>%
map_chr(~toJSON(as.character(.x))) %>%
write(file = "code/map-reduce/issues.json")
```
For example, the first issue is displayed below.
Note that it consists of two comma\-separated character strings within brackets.
We can think of this as having the format: `[key, value]`.
```
readLines("code/map-reduce/issues.json") %>%
head(1) %>%
str_wrap(width = 70) %>%
cat()
```
```
["https://api.github.com/repos/tidyverse/ggplot2/issues/4019","When
setting the limits of `scale_fill_steps()`, the fill brackets in
the legend becomes unevenly spaced. It's not clear why or how this
happens.\r\n\r\n``` r\r\nlibrary(tidyverse)\r\n\r\ndf <- tibble(\r\n
crossing(\r\n tibble(sample = paste(\"sample\", 1:4)),\r\n tibble(pos
= 1:4)\r\n ),\r\n val = runif(16)\r\n)\r\n\r\nggplot(df, aes(x =
pos, y = sample)) +\r\n geom_line() +\r\n geom_point(aes(fill =
val), pch = 21, size = 7) +\r\n scale_fill_steps(low = \"white\",
high = \"black\")\r\n```\r\n\r\n
\r\n\r\n``` r\r\n\r\nggplot(df, aes(x = pos, y = sample)) +\r\n
geom_line() +\r\n geom_point(aes(fill = val), pch = 21, size = 7) +
\r\n scale_fill_steps(low = \"white\", high = \"black\", limits
= c(0, 1))\r\n```\r\n\r\n
\r\n\r\n<sup>Created on 2020-05-22 by the [reprex package](https://
reprex.tidyverse.org) (v0.3.0)<\/sup>"]
```
In the Python code written above (which is stored in the file `wordcount.py`), the `mapper()` function takes a `record` argument (i.e., one line of the `issues.json` file), and examines its first two elements—the `key` becomes the first argument (in this case, the URL of the GitHub issue) and the `value` becomes the second argument (the text of the issue).
After splitting the `value` on each space, the `mapper()` function emits a \\((key, value)\\) pair for each word.
Thus, the first issue shown above would generate the pairs: `(When, 1)`, `(setting, 1)`, `(the, 1)`, etc.
The `MapReduce` library provides a mechanism for efficiently collecting all of the resulting pairs based on the `key`, which in this case corresponds to a single word.
The `reducer()` function simply adds up all of the values associated with each key.
In this case, these values are all `1`s, so the resulting pair is a word and the number of times it appears (e.g., `(the, 158)`, etc.).
Thanks to the **reticulate** package, we can run this Python script from within **R** and bring the results into **R** for further analysis.
We see that the most common words in this corpus are short articles and prepositions.
```
library(mdsr)
cmd <- "python code/map-reduce/wordcount.py code/map-reduce/issues.json"
res <- system(cmd, intern = TRUE)
freq_df <- res %>%
purrr::map(jsonlite::fromJSON) %>%
purrr::map(set_names, c("word", "count")) %>%
bind_rows() %>%
mutate(count = parse_number(count))
glimpse(freq_df)
```
```
Rows: 1,605
Columns: 2
$ word <chr> "geom_point(aes(fill", "aliased", "desirable", "ggplot(ct)+g…
$ count <dbl> 2, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 3, 1, 2, 2, 1, 5, …
```
```
freq_df %>%
filter(str_detect(pattern = "[a-z]", word)) %>%
arrange(desc(count)) %>%
head(10)
```
```
# A tibble: 10 × 2
word count
<chr> <dbl>
1 the 147
2 to 87
3 a 63
4 of 45
5 is 43
6 in 33
7 and 32
8 that 31
9 it 28
10 be 28
```
MapReduce is popular and offers some advantages over SQL for some problems.
When MapReduce first became popular, and Google used it to redo their
webpage ranking system (see Chapter [20](ch-netsci.html#ch:netsci)), there was great excitement about a coming “paradigm shift” in parallel and distributed computing.
Nevertheless, advocates of SQL have challenged the notion that it has been completely superseded by MapReduce (Stonebraker et al. 2010\).
##### Example
The canonical MapReduce example is to tabulate the frequency of each word in a large number of text documents (i.e., a [*corpus*](https://en.wikipedia.org/w/index.php?search=corpus) (see Chapter [19](ch-text.html#ch:text))).
In what follows, we show an implementation written in Python by [Bill Howe](https://en.wikipedia.org/w/index.php?search=Bill%20Howe) of the [*University of Washington*](https://en.wikipedia.org/w/index.php?search=University%20of%20Washington) (Howe 2014\).
Note that at the beginning, this bit of code calls [an external `MapReduce` library](https://github.com/uwescience/datasci_course_materials/blob/master/assignment3/MapReduce.py) that actually implements MapReduce.
The user only needs to write the two functions shown in this block of code—not the MapReduce library itself.
```
import MapReduce
import sys
mr = MapReduce.MapReduce()
def mapper(record):
key = record[0]
value = record[1]
words = value.split()
for w in words:
mr.emit_intermediate(w, 1)
def reducer(key, list_of_values):
total = 0
for v in list_of_values:
total += v
mr.emit((key, total))
if __name__ == '__main__':
inputdata = open(sys.argv[1])
mr.execute(inputdata, mapper, reducer)
```
We will use this MapReduce program to compile a word count for the [issues raised on GitHub for the **ggplot2** package](https://github.com/hadley/ggplot2/issues).
These are stored in a [*JSON*](https://en.wikipedia.org/w/index.php?search=JSON) file (see Chapter [6](ch-dataII.html#ch:dataII)) as a single JSON array.
Since we want to illustrate how MapReduce can parallelize over many files, we will convert this single array into a JSON object for each issue.
This will mimic the typical use case.
The **jsonlite** package provides functionality for coverting between JSON objects and native **R** data structures.
```
library(jsonlite)
url <- "https://api.github.com/repos/tidyverse/ggplot2/issues"
gg_issues <- url %>%
fromJSON() %>%
select(url, body) %>%
group_split(url) %>%
map_chr(~toJSON(as.character(.x))) %>%
write(file = "code/map-reduce/issues.json")
```
For example, the first issue is displayed below.
Note that it consists of two comma\-separated character strings within brackets.
We can think of this as having the format: `[key, value]`.
```
readLines("code/map-reduce/issues.json") %>%
head(1) %>%
str_wrap(width = 70) %>%
cat()
```
```
["https://api.github.com/repos/tidyverse/ggplot2/issues/4019","When
setting the limits of `scale_fill_steps()`, the fill brackets in
the legend becomes unevenly spaced. It's not clear why or how this
happens.\r\n\r\n``` r\r\nlibrary(tidyverse)\r\n\r\ndf <- tibble(\r\n
crossing(\r\n tibble(sample = paste(\"sample\", 1:4)),\r\n tibble(pos
= 1:4)\r\n ),\r\n val = runif(16)\r\n)\r\n\r\nggplot(df, aes(x =
pos, y = sample)) +\r\n geom_line() +\r\n geom_point(aes(fill =
val), pch = 21, size = 7) +\r\n scale_fill_steps(low = \"white\",
high = \"black\")\r\n```\r\n\r\n
\r\n\r\n``` r\r\n\r\nggplot(df, aes(x = pos, y = sample)) +\r\n
geom_line() +\r\n geom_point(aes(fill = val), pch = 21, size = 7) +
\r\n scale_fill_steps(low = \"white\", high = \"black\", limits
= c(0, 1))\r\n```\r\n\r\n
\r\n\r\n<sup>Created on 2020-05-22 by the [reprex package](https://
reprex.tidyverse.org) (v0.3.0)<\/sup>"]
```
In the Python code written above (which is stored in the file `wordcount.py`), the `mapper()` function takes a `record` argument (i.e., one line of the `issues.json` file), and examines its first two elements—the `key` becomes the first argument (in this case, the URL of the GitHub issue) and the `value` becomes the second argument (the text of the issue).
After splitting the `value` on each space, the `mapper()` function emits a \\((key, value)\\) pair for each word.
Thus, the first issue shown above would generate the pairs: `(When, 1)`, `(setting, 1)`, `(the, 1)`, etc.
The `MapReduce` library provides a mechanism for efficiently collecting all of the resulting pairs based on the `key`, which in this case corresponds to a single word.
The `reducer()` function simply adds up all of the values associated with each key.
In this case, these values are all `1`s, so the resulting pair is a word and the number of times it appears (e.g., `(the, 158)`, etc.).
Thanks to the **reticulate** package, we can run this Python script from within **R** and bring the results into **R** for further analysis.
We see that the most common words in this corpus are short articles and prepositions.
```
library(mdsr)
cmd <- "python code/map-reduce/wordcount.py code/map-reduce/issues.json"
res <- system(cmd, intern = TRUE)
freq_df <- res %>%
purrr::map(jsonlite::fromJSON) %>%
purrr::map(set_names, c("word", "count")) %>%
bind_rows() %>%
mutate(count = parse_number(count))
glimpse(freq_df)
```
```
Rows: 1,605
Columns: 2
$ word <chr> "geom_point(aes(fill", "aliased", "desirable", "ggplot(ct)+g…
$ count <dbl> 2, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 3, 1, 2, 2, 1, 5, …
```
```
freq_df %>%
filter(str_detect(pattern = "[a-z]", word)) %>%
arrange(desc(count)) %>%
head(10)
```
```
# A tibble: 10 × 2
word count
<chr> <dbl>
1 the 147
2 to 87
3 a 63
4 of 45
5 is 43
6 in 33
7 and 32
8 that 31
9 it 28
10 be 28
```
MapReduce is popular and offers some advantages over SQL for some problems.
When MapReduce first became popular, and Google used it to redo their
webpage ranking system (see Chapter [20](ch-netsci.html#ch:netsci)), there was great excitement about a coming “paradigm shift” in parallel and distributed computing.
Nevertheless, advocates of SQL have challenged the notion that it has been completely superseded by MapReduce (Stonebraker et al. 2010\).
#### 21\.2\.3\.4 Hadoop
As noted previously, MapReduce requires a software implementation.
One popular such implementation is Hadoop MapReduce, which is one of the core components of [*Apache Hadoop*](https://en.wikipedia.org/w/index.php?search=Apache%20Hadoop).
Hadoop is a larger software ecosystem for storing and processing large data that includes a distributed file system, Pig, Hive, Spark, and other popular open\-source software tools.
While we won’t be able to go into great detail about these items, we will illustrate how to interface with Spark, which has a particularly tight integration with **RStudio**.
#### 21\.2\.3\.5 Spark
One nice feature of [*Apache Spark*](https://en.wikipedia.org/w/index.php?search=Apache%20Spark)—especially for our purposes—is that while it requires a distributed file system, it can implement a pseudo\-distributed file system on a single machine.
This makes it possible for you to experiment with Spark on your local machine even if you don’t have access to a cluster. For obvious reasons, you won’t actually see the performance boost that parallelism can bring, but you can try it out and debug your code.
Furthermore, the [**sparklyr** package](http://spark.rstudio.com/) makes it painless to install a local Spark cluster from within **R**, as well as connect to a local or remote cluster.
Once the **sparklyr** package is installed, we can use it to install a local Spark cluster.
```
library(sparklyr)
spark_install(version = "3.0") # only once!
```
Next, we make a connection to our local Spark instance from within **R**.
Of course, if we were connecting to a remote Spark cluster, we could modify the `master` argument to reflect that.
Spark requires [*Java*](https://en.wikipedia.org/w/index.php?search=Java), so you may have to install the [*Java Development Kit*](https://en.wikipedia.org/w/index.php?search=Java%20Development%20Kit) before using Spark.[41](#fn41)
```
# sudo apt-get install openjdk-8-jdk
sc <- spark_connect(master = "local", version = "3.0")
class(sc)
```
```
[1] "spark_connection" "spark_shell_connection"
[3] "DBIConnection"
```
Note that `sc` has class `DBIConnection`—this means that it can do many of the things that other **dplyr** connections can do.
For example, the `src_tbls()` function works just like it did on the MySQL connection objects we saw in Chapter [15](ch-sql.html#ch:sql).
```
src_tbls(sc)
```
```
character(0)
```
In this case, there are no tables present in this Spark cluster, but we can add them using the `copy_to()` command.
Here, we will load the `babynames` table from the **babynames** package.
```
babynames_tbl <- sc %>%
copy_to(babynames::babynames, "babynames")
src_tbls(sc)
```
```
[1] "babynames"
```
```
class(babynames_tbl)
```
```
[1] "tbl_spark" "tbl_sql" "tbl_lazy" "tbl"
```
The `babynames_tbl` object is a `tbl_spark`, but also a `tbl_sql`.
Again, this is analogous to what we saw in Chapter [15](ch-sql.html#ch:sql), where a `tbl_MySQLConnection` was also a `tbl_sql`.
```
babynames_tbl %>%
filter(name == "Benjamin") %>%
group_by(year) %>%
summarize(N = n(), total_births = sum(n)) %>%
arrange(desc(total_births)) %>%
head()
```
```
# Source: spark<?> [?? x 3]
# Ordered by: desc(total_births)
year N total_births
<dbl> <dbl> <dbl>
1 1989 2 15785
2 1988 2 15279
3 1987 2 14953
4 2000 2 14864
5 1990 2 14660
6 2016 2 14641
```
As we will see below with [*Google BigQuery*](https://en.wikipedia.org/w/index.php?search=Google%20BigQuery), even though Spark is a parallelized technology designed to supersede SQL, it is still useful to know SQL in order to use Spark.
Like BigQuery, **sparklyr** allows you to work with a Spark cluster using the familiar **dplyr** interface.
As you might suspect, because `babynames_tbl` is a `tbl_sql`, it implements SQL methods common in **DBI**.
Thus, we can also write SQL queries against our Spark cluster.
```
library(DBI)
dbGetQuery(sc, "SELECT year, sum(1) as N, sum(n) as total_births
FROM babynames WHERE name == 'Benjamin'
GROUP BY year
ORDER BY total_births desc
LIMIT 6")
```
```
year N total_births
1 1989 2 15785
2 1988 2 15279
3 1987 2 14953
4 2000 2 14864
5 1990 2 14660
6 2016 2 14641
```
Finally, because Spark includes not only a database infrastructure, but also a machine learning library, **sparklyr** allows you to fit many of the models we outlined in Chapter [11](ch-learningI.html#ch:learningI) and [12](ch-learningII.html#ch:learningII) within Spark.
This means that you can rely on Spark’s big data capabilities without having to bring all of your data into **R**’s memory.
As a motivating example, we fit a multiple regression model for the amount of rainfall at the MacLeish field station as a function of the temperature, pressure, and relative humidity.
```
library(macleish)
weather_tbl <- copy_to(sc, whately_2015)
weather_tbl %>%
ml_linear_regression(rainfall ~ temperature + pressure + rel_humidity) %>%
summary()
```
```
Deviance Residuals:
Min 1Q Median 3Q Max
-0.041290 -0.021761 -0.011632 -0.000576 15.968356
Coefficients:
(Intercept) temperature pressure rel_humidity
0.717754 0.000409 -0.000755 0.000438
R-Squared: 0.004824
Root Mean Squared Error: 0.1982
```
The most recent versions of **RStudio** include integrated support for management of Spark clusters.
### 21\.2\.4 Alternatives to SQL
Relational database management systems can be spread across multiple computers into what is called a [*cluster*](https://en.wikipedia.org/w/index.php?search=cluster).
In fact, it is widely acknowledged that one of the things that allowed Google to grow so fast was its use of the open\-source (zero cost) MySQL RDBMS running as a cluster across many identical low\-cost servers.
That is, rather than investing large amounts of money in big machines, they built a massive MySQL cluster over many small, cheap machines.
Both [MySQL](https://en.wikipedia.org/wiki/MySQL_Cluster) and [PostgreSQL](http://www.postgresql.org/docs/9.4/static/creating-cluster.html) provide functionality for extending a single installation to a cluster.
Use a cloud\-based computing service, such as Amazon Web Services, Google Cloud Platform, or Digital Ocean, for a low\-cost alternative to building your own server farm (many of these companies offer free credits for student and instructor use).
#### 21\.2\.4\.1 BigQuery
[*BigQuery*](https://en.wikipedia.org/w/index.php?search=BigQuery) is a Web service offered by Google. Internally, the BigQuery service is supported by [*Dremel*](https://en.wikipedia.org/w/index.php?search=Dremel), the open\-source version of which is [*Apache Drill*](https://en.wikipedia.org/w/index.php?search=Apache%20Drill).
The **bigrquery** [package](https://github.com/rstats-db/bigrquery) for **R** provides access to BigQuery from within **R**.
To use the BigQuery service, you need to sign up for an account with Google, but you won’t be charged unless you exceed the free limit of 10,000 requests per day (the [BigQuery sandbox](https://cloud.google.com/bigquery/docs/sandbox) provides free access subject to certain limits).
If you want to use your own data, you have to upload it to Google Cloud Storage, but Google provides many data sets that you can use for free (e.g., COVID, Census, real\-estate transactions). Here we illustrate how to query the `shakespeare` data set—which is a list of all of the words that appear in Shakespeare’s plays—to find the most common words. Note that BigQuery understands a recognizable [dialect of SQL](https://cloud.google.com/bigquery/query-reference)—what makes BigQuery special is that it is built on top of Google’s massive computing architecture.
```
library(bigrquery)
project_id <- "my-google-id"
sql <- "
SELECT word
, count(distinct corpus) AS numPlays
, sum(word_count) AS N
FROM [publicdata:samples.shakespeare]
GROUP BY word
ORDER BY N desc
LIMIT 10
"
bq_project_query(sql, project = project_id)
```
```
4.9 megabytes processed
word numPlays N
1 the 42 25568
2 I 42 21028
3 and 42 19649
4 to 42 17361
5 of 42 16438
6 a 42 13409
7 you 42 12527
8 my 42 11291
9 in 42 10589
10 is 42 8735
```
#### 21\.2\.4\.2 NoSQL
[*NoSQL*](https://en.wikipedia.org/w/index.php?search=NoSQL) refers not to a specific technology, but rather to a class of database architectures that are *not* based on the notion—so central to SQL (and `data.frame`s in **R**—that a table consists of a rectangular array of rows and columns.
Rather than being built around tables, NoSQL databases may be built around columns, key\-value pairs, documents, or graphs.
Nevertheless NoSQL databases may (or may not) include an SQL\-like query language for retrieving data.
One particularly successful NoSQL database is [*MongoDB*](https://en.wikipedia.org/w/index.php?search=MongoDB), which is based on a document structure.
In particular, MongoDB is often used to store JSON objects (see Chapter [6](ch-dataII.html#ch:dataII)), which are not necessarily tabular.
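To make the document model concrete, here is a minimal sketch of working with MongoDB from **R** using the **mongolite** package (which is not among the packages used in this book). It assumes a MongoDB server is running locally; the collection name and the sample values are made up for illustration.

```
library(mongolite)
# connect to (or create) a collection in a local MongoDB instance
coll <- mongo(collection = "plays", db = "test", url = "mongodb://localhost")
# documents are stored as JSON; mongolite converts data frames automatically
coll$insert(data.frame(word = c("the", "I"), n = c(25568, 21028)))
# queries are themselves expressed as JSON documents
coll$find(query = '{"n": {"$gt": 22000}}')
```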
21\.3 Alternatives to **R**
---------------------------
[Python](http://en.wikipedia.org/wiki/Python_(programming_language)) is a widely\-used general\-purpose, high\-level programming language.
You will find adherents for both **R** and Python, and while there are [ongoing](http://readwrite.com/2013/11/25/python-displacing-r-as-the-programming-language-for-data-science#awesm=~oopSq74KSJsK2w) [debates](https://github.com/hadley/r-python) about which is “better,” there is no consensus.
It is probably true that—for obvious reasons—computer scientists tend to favor Python, while statisticians tend to favor **R**.
We prefer the latter but will not make any claims about its being “better” than Python.
A well\-rounded data scientist should be competent in both environments.
Python is a modular environment (like **R**) and includes many libraries for working with data.
The most **R**\-like is `Pandas`, but other popular auxiliary libraries include `SciPy` for scientific computation, `NumPy` for large arrays, `matplotlib` for graphics, and `scikit-learn` for machine learning.
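As a rough illustration of how the two environments can interoperate, the **reticulate** package allows Python libraries such as `Pandas` to be called from within **R**. The sketch below assumes a Python installation with `pandas` is available; the toy data values are made up for illustration.

```
library(reticulate)
pd <- import("pandas") # bind the pandas module to an R object
# build a pandas DataFrame from a named R list and call a pandas method on it
py_df <- pd$DataFrame(
  data = list(species = c("Adelie", "Gentoo"), body_mass_g = c(3700, 5050))
)
py_df$describe()
```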
Other popular programming languages among data scientists include [*Scala*](https://en.wikipedia.org/w/index.php?search=Scala) and [*Julia*](https://en.wikipedia.org/wiki/Julia_(programming_language)).
Scala supports a [*functional programming*](https://en.wikipedia.org/w/index.php?search=functional%20programming) paradigm that has been promoted by H. Wickham (2019\) and other **R** users. Julia has a smaller user base but nonetheless has many strong adherents.
21\.4 Closing thoughts
----------------------
Advances in computing power and the internet have changed the field of statistics in ways that only the greatest visionaries could have imagined.
In the 20th century, the science of extracting meaning from data focused on developing inferential techniques that required sophisticated mathematics to squeeze the most information out of small data.
In the 21st century, the science of extracting meaning from data has focused on developing powerful computational tools that enable the processing of ever larger and more complex data.
While the essential analytical language of the last century—mathematics—is still of great importance, the analytical language of this century is undoubtedly programming.
The ability to write code is a necessary but not sufficient condition for becoming a data scientist.
We have focused on programming in **R**, a well\-worn interpreted language designed by statisticians for computing with data.
We believe that as an open\-source language with a broad following, **R** has significant staying power. Yet we recognize that all technological tools eventually become obsolete.
Nevertheless, by absorbing the lessons in this book, you will have transformed yourself into a competent, ethical, and versatile data scientist—one who possesses the essential capacities for working with a variety of data programmatically.
You can build and interpret models, query databases both local and remote, make informative and interactive maps, and wrangle and visualize data in various forms.
Internalizing these abilities will allow them to permeate your work in whatever field interests you, for as long as you continue to use data to inform.
21\.5 Further resources
-----------------------
Tools for working with big data analytics are developing more quickly than any of the other topics in this book.
A special issue of *The American Statistician* addressed the training of students in statistics and data science (Horton and Hardin 2015\).
The issue included articles on teaching statistics at “Google\-Scale” (Chamandy, Muraldharan, and Wager 2015\) and on the teaching of data science more generally (B. S. Baumer 2015; Hardin et al. 2015\).
The board of directors of the American Statistical Association endorsed the *Curriculum Guidelines for Undergraduate Programs in Data Science* written by the Park City Math Institute (PCMI) Undergraduate Faculty Group (De Veaux et al. 2017\).
These guidelines recommended fusing statistical thinking into the teaching of techniques to solve big data problems.
A comprehensive survey of **R** packages for parallel computation and high\-performance computing is available through the [CRAN task view on that subject](https://cran.r-project.org/web/views/HighPerformanceComputing.html).
The *Parallel R* book is another resource (McCallum and Weston 2011\).
More information about [Google BigQuery](https://cloud.google.com/bigquery) can be found at their website.
A [tutorial for SparkR](https://spark.apache.org/docs/1.6.0/sparkr.html) is available on Apache’s website.
| Data Science |
mdsr-book.github.io | https://mdsr-book.github.io/mdsr2e/ch-mdsr.html |
A Packages used in this book
============================
A.1 The **mdsr** package
------------------------
The **mdsr** package contains most of the small data sets used in this book that are not available in other packages.
To install it from CRAN, use `install.packages()`.
To get the latest release, use the `install_github()` function from the **remotes** package.
(See Section [B.4\.1](ch-R.html#appR:packages) for more comprehensive information about **R** package maintenance.)
```
# this command only needs to be run once
install.packages("mdsr")
# if you want the development version
remotes::install_github("mdsr-book/mdsr")
```
The list of data sets provided can be retrieved using the `data()` function.
```
library(mdsr)
data(package = "mdsr")
```
The **mdsr** package includes some functions that simplify a number of tasks.
In particular, the `dbConnect_scidb()` function provides a shorthand for connecting to the public SQL server hosted by [*Amazon Web Services*](https://en.wikipedia.org/w/index.php?search=Amazon%20Web%20Services).
We use this function extensively in Chapter [15](ch-sql.html#ch:sql) and in our classes and projects.
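As a sketch of what such a connection looks like, the snippet below connects to the `airlines` database on the public server and pulls a few rows. It assumes the server is reachable and that the **dbplyr** and **RMySQL** packages are installed; the `carriers` table is used here only for illustration.

```
library(tidyverse)
library(mdsr)
db <- dbConnect_scidb("airlines") # shorthand for a DBI connection to the public server
# once connected, tables can be queried lazily with dplyr verbs
db %>%
  tbl("carriers") %>%
  head(5) %>%
  collect()
```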
In keeping with best practices, **mdsr** no longer loads any other packages.
In every chapter in this book, a call to `library(tidyverse)` precedes a call to `library(mdsr)`.
These two steps will set up an **R** session to replicate the code in the book.
A.2 Other packages
------------------
As we discuss in Chapters [1](ch-prologue.html#ch:prologue) and [21](ch-big.html#ch:big), this book is not explicitly about “big data”—it is about mastering data science techniques for small and medium data with an eye towards big data. To that end, we need medium\-sized data sets to work with. We have introduced several such data sets in this book, namely **airlines**, **fec12**, and **fec16**.
The **airlines** package, which was inspired by the **nycflights13** package, gives **R** users the ability to download the full 33 years (and counting) of flight data from the United States Bureau of Transportation Statistics and bring it seamlessly into SQL without actually having to write any SQL code.
The **macleish** package also uses the **etl** framework for hourly\-updated weather data from the MacLeish field station.
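The general **etl** workflow is a pipeline of extract, transform, and load steps. The following is only a sketch, assuming the **etl** and **macleish** packages are installed and a network connection is available; by default the data are loaded into a local SQLite database.

```
library(etl)
macleish_etl <- etl("macleish") %>% # instantiate an etl object for the macleish package
  etl_extract() %>%                 # download the raw data
  etl_transform() %>%               # clean it up
  etl_load()                        # load it into the (SQLite) database
```

Once populated, the resulting object can be queried with **dplyr** like any other database source.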
The full list of packages used in this book appears below in Tables [A.1](ch-mdsr.html#tab:cran-pkgs) and [A.2](ch-mdsr.html#tab:github-pkgs).
Table A.1: List of CRAN packages used in this book.
| Package | Citation | Title |
| --- | --- | --- |
| alr4 | (Weisberg 2018\) | Data to Accompany Applied Linear Regression 4th Edition |
| ape | (Paradis et al. 2021\) | Analyses of Phylogenetics and Evolution |
| assertthat | (Hadley Wickham 2019a) | Easy Pre and Post Assertions |
| available | (Ganz et al. 2019\) | Check if the Title of a Package is Available, Appropriate and Interesting |
| babynames | (Hadley Wickham 2021a) | US Baby Names 1880\-2017 |
| bench | (Hester 2020\) | High Precision Timing of R Expressions |
| biglm | (Lumley 2020\) | Bounded Memory Linear and Generalized Linear Models |
| bookdown | (Yihui Xie 2021a) | Authoring Books and Technical Documents with R Markdown |
| broom | (Robinson, Hayes, and Couch 2021\) | Convert Statistical Objects into Tidy Tibbles |
| DBI | (R Special Interest Group on Databases (R\-SIG\-DB), Wickham, and Müller 2021\) | R Database Interface |
| dbplyr | (Hadley Wickham, Girlich, and Ruiz 2021\) | A ‘dplyr’ Back End for Databases |
| discrim | (Kuhn 2021\) | Model Wrappers for Discriminant Analysis |
| dplyr | (Hadley Wickham, François, et al. 2021\) | A Grammar of Data Manipulation |
| DT | (Yihui Xie, Cheng, and Tan 2021\) | A Wrapper of the JavaScript Library ‘DataTables’ |
| dygraphs | (Vanderkam et al. 2018\) | Interface to ‘Dygraphs’ Interactive Time Series Charting Library |
| etl | (Benjamin S. Baumer 2021\) | Extract\-Transform\-Load Framework for Medium Data |
| extrafont | (Winston Chang 2014\) | Tools for using fonts |
| forcats | (Hadley Wickham 2021b) | Tools for Working with Categorical Variables (Factors) |
| fs | (Hester and Wickham 2020\) | Cross\-Platform File System Operations Based on ‘libuv’ |
| furrr | (Vaughan and Dancho 2021\) | Apply Mapping Functions in Parallel using Futures |
| future | (Bengtsson 2020\) | Unified Parallel and Distributed Processing in R for Everyone |
| ggmosaic | (Jeppson, Hofmann, and Cook 2021\) | Mosaic Plots in the ‘ggplot2’ Framework |
| ggplot2 | (Hadley Wickham, Chang, et al. 2021\) | Create Elegant Data Visualisations Using the Grammar of Graphics |
| ggraph | (Pedersen 2021\) | An Implementation of Grammar of Graphics for Graphs and Networks |
| ggrepel | (Slowikowski 2021\) | Automatically Position Non\-Overlapping Text Labels with ‘ggplot2’ |
| ggspatial | (Dunnington 2021\) | Spatial Data Framework for ggplot2 |
| ggthemes | (Arnold 2021\) | Extra Themes, Scales and Geoms for ‘ggplot2’ |
| glmnet | (Friedman et al. 2021\) | Lasso and Elastic\-Net Regularized Generalized Linear Models |
| googlesheets4 | (Bryan 2021\) | Access Google Sheets using the Sheets API V4 |
| haven | (Hadley Wickham and Miller 2021\) | Import and Export ‘SPSS,’ ‘Stata’ and ‘SAS’ Files |
| here | (Müller 2020\) | A Simpler Way to Find Your Files |
| Hmisc | (Harrell 2021\) | Harrell Miscellaneous |
| htmlwidgets | (Vaidyanathan et al. 2020\) | HTML Widgets for R |
| igraph | (Csárdi et al. 2020\) | Network Analysis and Visualization |
| janitor | (Firke 2021\) | Simple Tools for Examining and Cleaning Dirty Data |
| jsonlite | (Ooms 2020\) | A Simple and Robust JSON Parser and Generator for R |
| kableExtra | (Zhu 2021\) | Construct Complex Table with ‘kable’ and Pipe Syntax |
| kknn | (Schliep and Hechenbichler 2016\) | Weighted k\-Nearest Neighbors |
| knitr | (Yihui Xie 2021b) | A General\-Purpose Package for Dynamic Report Generation in R |
| Lahman | (Friendly et al. 2021\) | Sean ‘Lahman’ Baseball Database |
| lattice | (Sarkar 2021\) | Trellis Graphics for R |
| lazyeval | (Hadley Wickham 2019c) | Lazy (Non\-Standard) Evaluation |
| leaflet | (Cheng, Karambelkar, and Xie 2021\) | Create Interactive Web Maps with the JavaScript ‘Leaflet’ Library |
| lubridate | (Spinu, Grolemund, and Wickham 2021\) | Make Dealing with Dates a Little Easier |
| macleish | (Benjamin S. Baumer et al. 2020\) | Retrieve Data from MacLeish Field Station |
| magick | (Ooms 2021\) | Advanced Graphics and Image\-Processing in R |
| mapproj | (McIlroy et al. 2020\) | Map Projections |
| maps | (Brownrigg 2018\) | Draw Geographical Maps |
| mclust | (Fraley, Raftery, and Scrucca 2020\) | Gaussian Mixture Modelling for Model\-Based Clustering, Classification, and Density Estimation |
| mdsr | (Benjamin S. Baumer, Horton, and Kaplan 2021\) | Complement to ‘Modern Data Science with R’ |
| modelr | (Hadley Wickham 2020b) | Modelling Functions that Work with the Pipe |
| mosaic | (Pruim, Kaplan, and Horton 2021a) | Project MOSAIC Statistics and Mathematics Teaching Utilities |
| mosaicData | (Pruim, Kaplan, and Horton 2021b) | Project MOSAIC Data Sets |
| NeuralNetTools | (Beck 2018\) | Visualization and Analysis Tools for Neural Networks |
| NHANES | (Pruim 2015\) | Data from the US National Health and Nutrition Examination Study |
| nycflights13 | (Hadley Wickham 2021c) | Flights that Departed NYC in 2013 |
| parsnip | (Kuhn and Vaughan 2021a) | A Common API to Modeling and Analysis Functions |
| partykit | (Hothorn and Zeileis 2021\) | A Toolkit for Recursive Partytioning |
| patchwork | (Pedersen 2020a) | The Composer of Plots |
| plotly | (Sievert et al. 2021\) | Create Interactive Web Graphics via ‘plotly.js’ |
| purrr | (Henry and Wickham 2020\) | Functional Programming Tools |
| randomForest | (Breiman et al. 2018\) | Breiman and Cutler’s Random Forests for Classification and Regression |
| RColorBrewer | (Neuwirth 2014\) | ColorBrewer Palettes |
| Rcpp | (Eddelbuettel et al. 2021\) | Seamless R and C\+\+ Integration |
| readr | (Hadley Wickham and Hester 2021\) | Read Rectangular Text Data |
| readxl | (Hadley Wickham and Bryan 2019\) | Read Excel Files |
| remotes | (Hester et al. 2021\) | R Package Installation from Remote Repositories, Including ‘GitHub’ |
| renv | (Ushey 2021\) | Project Environments |
| reticulate | (Ushey, Allaire, and Tang 2021\) | Interface to ‘Python’ |
| rgdal | (R. Bivand, Keitt, and Rowlingson 2021\) | Bindings for the ‘Geospatial’ Data Abstraction Library |
| rlang | (Henry and Wickham 2021\) | Functions for Base Types and Core R and ‘Tidyverse’ Features |
| rmarkdown | (J. Allaire et al. 2021\) | Dynamic Documents for R |
| RMySQL | (Ooms et al. 2021\) | Database Interface and ‘MySQL’ Driver for R |
| rpart | (Therneau and Atkinson 2019\) | Recursive Partitioning and Regression Trees |
| RSQLite | (Müller et al. 2021\) | ‘SQLite’ Interface for R |
| rvest | (Hadley Wickham 2021d) | Easily Harvest (Scrape) Web Pages |
| scales | (Hadley Wickham and Seidel 2020\) | Scale Functions for Visualization |
| sessioninfo | (Csárdi et al. 2018\) | R Session Information |
| sf | (Pebesma 2021\) | Simple Features for R |
| shiny | (Chang et al. 2021\) | Web Application Framework for R |
| sp | (Pebesma and Bivand 2021\) | Classes and Methods for Spatial Data |
| sparklyr | (Luraschi et al. 2021\) | R Interface to Apache Spark |
| stopwords | (Benoit, Muhr, and Watanabe 2021\) | Multilingual Stopword Lists |
| stringr | (Hadley Wickham 2019d) | Simple, Consistent Wrappers for Common String Operations |
| styler | (Müller and Walthert 2021\) | Non\-Invasive Pretty Printing of R Code |
| testthat | (Hadley Wickham 2021e) | Unit Testing for R |
| textdata | (Hvitfeldt 2020\) | Download and Load Various Text Datasets |
| tidycensus | (Walker and Herman 2021\) | Load US Census Boundary and Attribute Data as ‘tidyverse’ and ‘sf’\-Ready Data Frames |
| tidygeocoder | (Cambon et al. 2021\) | Geocoding Made Easy |
| tidygraph | (Pedersen 2020b) | A Tidy API for Graph Manipulation |
| tidymodels | (Kuhn and Wickham 2021\) | Easily Install and Load the ‘Tidymodels’ Packages |
| tidyr | (Hadley Wickham 2021f) | Tidy Messy Data |
| tidytext | (Robinson and Silge 2021\) | Text Mining using ‘dplyr,’ ‘ggplot2,’ and Other Tidy Tools |
| tidyverse | (Hadley Wickham 2021g) | Easily Install and Load the ‘Tidyverse’ |
| tigris | (Walker 2021\) | Load Census TIGER/Line Shapefiles |
| tm | (Feinerer and Hornik 2020\) | Text Mining Package |
| units | (Pebesma et al. 2021\) | Measurement Units for R Vectors |
| usethis | (Hadley Wickham and Bryan 2021\) | Automate Package and Project Setup |
| viridis | (Garnier 2021a) | Colorblind\-Friendly Color Maps for R |
| viridisLite | (Garnier 2021b) | Colorblind\-Friendly Color Maps (Lite Version) |
| webshot | (Chang 2019\) | Take Screenshots of Web Pages |
| wordcloud | (Fellows 2018\) | Word Clouds |
| wru | (Khanna and Imai 2021\) | Who are You? Bayesian Prediction of Racial Category Using Surname and Geolocation |
| xaringanthemer | (Aden\-Buie 2021\) | Custom ‘xaringan’ CSS Themes |
| xfun | (Yihui Xie 2021c) | Supporting Functions for Packages Maintained by ‘Yihui Xie’ |
| xkcd | (Torres\-Manzanera 2018\) | Plotting ggplot2 Graphics in an XKCD Style |
| yardstick | (Kuhn and Vaughan 2021b) | Tidy Characterizations of Model Performance |
Table A.2: List of GitHub packages used in this book.
| Package | GitHub User | Citation | Title |
| --- | --- | --- | --- |
| etude | dtkaplan | (Kaplan 2021\) | Utilities for Handling Textbook Exercises with Knitr |
| fec12 | baumer\-lab | (Tapal, Gahwagy, and Ryan 2021\) | Data Package for 2012 Federal Elections |
| openrouteservice | GIScience | (Oleś 2021\) | Openrouteservice API Client |
| streamgraph | hrbrmstr | (Rudis 2019\) | Build Streamgraph Visualizations |
A.3 Further resources
---------------------
More information on the [**mdsr**](http://www.github.com/mdsr-book/mdsr) package can be found at [http://www.github.com/mdsr\-book/mdsr](http://www.github.com/mdsr-book/mdsr).
| Data Science |
mdsr-book.github.io | https://mdsr-book.github.io/mdsr2e/ch-R.html |
B Introduction to `R` and RStudio
=================================
This chapter provides a (brief) introduction to **R** and **RStudio**.
The **R** language is a free, open\-source software environment for statistical computing and graphics (Ihaka and Gentleman 1996; R Core Team 2020\).
**RStudio** is
an open\-source integrated development environment (IDE) for **R** that adds many features and productivity tools for **R** (RStudio 2020\).
This chapter includes a short history, installation information, a sample session, background on fundamental structures and actions, information about help and documentation, and other important topics.
The **R** Foundation for Statistical Computing
holds and administers the copyright of the **R** software and documentation. **R** is available under the terms of the Free Software Foundation’s GNU General Public License in source code form.
**RStudio** facilitates use of **R** by integrating **R** help and documentation, providing a workspace browser and data viewer, and supporting syntax highlighting, code completion, and
smart indentation.
Support for reproducible analysis is made available with the **knitr** package and **R** Markdown (see Appendix [D](ch-reproduce.html#ch:reproduce)).
It facilitates the creation of dynamic [*web applications*](https://en.wikipedia.org/w/index.php?search=web%20applications) using Shiny (see Chapter [14\.4](ch-vizIII.html#sec:shiny)).
It also provides support for multiple projects as well as an interface to source code control systems such as GitHub.
It has become the default interface for many **R** users, and is our recommended environment for analysis.
**RStudio** is available as a client (standalone) for Windows, Mac OS X, and Linux. There is also a server version.
Commercial products and support are available in addition to the open\-source offerings (see <http://www.rstudio.com/ide> for details).
The first versions of **R** were written by [Ross Ihaka](https://en.wikipedia.org/w/index.php?search=Ross%20Ihaka) and [Robert Gentleman](https://en.wikipedia.org/w/index.php?search=Robert%20Gentleman) at the [*University of Auckland*](https://en.wikipedia.org/w/index.php?search=University%20of%20Auckland), New Zealand, while current development is coordinated by the **R** Development Core Team,
a group of international volunteers.
The **R** language is quite similar to the S language, a flexible and extensible statistical environment originally developed in the 1980s at
AT\&T Bell Labs (now Alcatel–Lucent).
B.1 Installation
----------------
New users are encouraged to download and install **R** from the
Comprehensive **R** Archive Network (CRAN, [http://www.r\-project.org](http://www.r-project.org)) and install **RStudio**
from <http://www.rstudio.com/download>.
The sample session in the appendix of the *Introduction to R* documentation,
also available from CRAN, is recommended reading.
The home page for the **R** project, located at [http://r\-project.org](http://r-project.org), is the best starting place for information about the software.
It includes links to CRAN, which features pre\-compiled binaries as well as source code for **R**, add\-on packages, documentation (including manuals,
frequently asked questions, and the **R** newsletter) as well as general background information.
Mirrored CRAN sites with identical copies of these files exist all around the world.
Updates to **R** and packages are regularly posted on CRAN.
### B.1\.1 RStudio
**RStudio** for Mac OS X, Windows, or Linux can be downloaded
from <https://rstudio.com/products/rstudio>.
**RStudio** requires **R** to be installed on the local machine.
A server version
(accessible from Web browsers) is also available for download.
Documentation of the advanced features is available on the **RStudio** website.
B.2 Learning **R**
------------------
The **R** environment features extensive online documentation, though it can sometimes be challenging to comprehend. Each command has an associated help file that describes
usage, lists arguments, provides details of actions, gives references, lists other related functions, and includes examples of its use. The help system is invoked using either the `?` or `help()` commands.
```
?function
help(function)
```
where `function` is the name of the function of interest. (Alternatively, the `Help` tab in **RStudio** can be used to access the help system.)
Some commands (e.g., `if`) are reserved, so `?if` will not generate the desired documentation.
Running `?"if"` will work (see also `?Reserved` and `?Control`). Other reserved words include `else`, `repeat`, `while`, `function`, `for`, `in`, `next`, `break`, `TRUE`, `FALSE`, `NULL`, `Inf`, `NaN`, and `NA`.
The `RSiteSearch()` function will search for key words or phrases in many places (including the search engine at [http://search.r\-project.org](http://search.r-project.org)).
The [RSeek.org](http://rseek.org) site can also be helpful in finding more information and examples.
Examples of many functions are available using the `example()` function.
```
example(mean)
```
Other useful resources are `help.start()`, which provides a set of online manuals, and `help.search()`, which can be used to look up entries by description. The `apropos()` command returns any functions in the current search list that match a given pattern (which facilitates searching for a function based on what it does, as opposed to its name).
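For instance, the following illustrative calls search the installed documentation by name and by topic; the exact results will depend on which packages are installed.

```
apropos("quantile") # objects on the search list whose names match "quantile"
help.search("logistic regression") # search help page titles and descriptions
```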
Other resources for help available from CRAN include the **R** help mailing list.
The [StackOverflow site](http://stackoverflow.com/questions/tagged/r) for **R** provides a series of questions and answers for common questions that are tagged as being related to **R**.
New users are also encouraged to read the **R** FAQ (frequently asked questions) list.
**RStudio** provides a [curated guide](http://www.rstudio.com/resources/training/online-learning) to resources for learning **R** and its extensions.
B.3 Fundamental structures and objects
--------------------------------------
Here we provide a brief introduction to **R** data structures.
### B.3\.1 Objects and vectors
Almost everything in **R** is an object, which may be initially confusing to a new user.
An object is simply something stored in **R**’s memory. Common objects include vectors, matrices, arrays, factors, data frames (akin to data sets in other systems), lists, and functions.
The basic variable structure is a vector.
Vectors (and other objects) are created using the `<-` or `=` assignment operators (which assign the evaluated expression on the right\-hand side of the operator to the object name on the left\-hand side).
```
x <- c(5, 7, 9, 13, -4, 8) # preferred
x = c(5, 7, 9, 13, -4, 8) # equivalent
```
The above code creates a vector of length 6 using the `c()` function to concatenate scalars.
The `=` operator is used in other contexts for the specification of arguments to functions. Other assignment operators exist, as well as the `assign()` function (see `help("<-")` for more information).
The `exists()` function conveys whether an object exists in the workspace, and the `rm()` command removes it.
In **RStudio**, the “Environment” tab shows the names (and values) of all objects that exist in the current workspace.
Since vector operations are so fundamental in **R**, it is important to be able to access (or index) elements within these vectors.
Many different ways of
indexing vectors are available. Here, we introduce several of these using the `x` as created above. The command `x[2]` returns the second element of `x` (the scalar 7\), and `x[c(2, 4)]` returns the vector \\((7, 13\)\\). The expressions `x[c(TRUE, TRUE, TRUE, TRUE, TRUE, FALSE)]`, `x[1:5]` and `x[-6]` all return a vector consisting of the first 5 elements in `x` (the last specifies all elements except the 6th).
```
x[2]
```
```
[1] 7
```
```
x[c(2, 4)]
```
```
[1] 7 13
```
```
x[c(TRUE, TRUE, TRUE, TRUE, TRUE, FALSE)]
```
```
[1] 5 7 9 13 -4
```
```
x[1:5]
```
```
[1] 5 7 9 13 -4
```
```
x[-6]
```
```
[1] 5 7 9 13 -4
```
Vectors are [*recycled*](https://en.wikipedia.org/w/index.php?search=recycled) if needed; for example, when comparing each of the elements of a vector to a scalar.
```
x > 8
```
```
[1] FALSE FALSE TRUE TRUE FALSE FALSE
```
The above expression demonstrates the use of comparison operators (see `?Comparison`).
Only the third and fourth elements of `x` are greater than 8\.
The function returns a logical value of either `TRUE` or `FALSE` (see `?Logic`).
A count of elements meeting the condition can be generated using the `sum()` function.
Other comparison operators include `==` (equal), `>=` (greater than or equal), `<=` (less than or equal), and `!=` (not equal).
Care needs to be taken in the comparison using `==` if noninteger values are present (see `all.equal()`).
```
sum(x > 8)
```
```
[1] 2
```
### B.3\.2 Operators
There are many operators defined in **R** to carry out a variety of tasks.
Many of these were demonstrated in the sample session (assignment,
arithmetic) and previous examples (comparison).
Arithmetic operations include `+`,
`-`, `*`, `/`, `^` (exponentiation), `%%` (modulus), and `%/%` (integer division). More information about operators can be found using the help system (e.g., `?"+"`). Background information on
other operators and precedence rules can be found using `help(Syntax)`.
Boolean operations (OR, AND, NOT, and XOR) are supported using the `|`, `||`, `&`, `!` operators and the `xor()` function.
The `|` is an “or” operator that operates on each element of a vector,
while the `||` is another “or” operator that stops evaluation the first time that the result is true (see `?Logic`).
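A few short examples using the vector `x` defined above illustrate these operators.

```
7 %% 3 # modulus: remainder of 7 divided by 3
7 %/% 3 # integer division
x > 0 & x < 10 # element-wise AND: a logical vector of length 6
(length(x) > 3) || (sum(x) > 100) # scalar OR: stops after the first TRUE
```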
### B.3\.3 Lists
Lists in **R** are very general objects that can contain other objects of arbitrary types. List members can be named, or referenced using numeric indices (using the `[[` operator).
```
newlist <- list(first = "hello", second = 42, Bob = TRUE)
is.list(newlist)
```
```
[1] TRUE
```
```
newlist
```
```
$first
[1] "hello"
$second
[1] 42
$Bob
[1] TRUE
```
```
newlist[[2]]
```
```
[1] 42
```
```
newlist$Bob
```
```
[1] TRUE
```
The `unlist()` function flattens (makes a vector out of) the elements in a list (see also `relist()`).
Note that unlisted objects are coerced to a common type (in this case `character`).
```
unlisted <- unlist(newlist)
unlisted
```
```
first second Bob
"hello" "42" "TRUE"
```
### B.3\.4 Matrices
Matrices are like two\-dimensional vectors: rectangular objects where all entries have the same type.
We can create a \\(2 \\times 3\\) matrix, display it, and test for its type.
```
A <- matrix(x, 2, 3)
A
```
```
[,1] [,2] [,3]
[1,] 5 9 -4
[2,] 7 13 8
```
```
is.matrix(A) # is A a matrix?
```
```
[1] TRUE
```
```
is.vector(A)
```
```
[1] FALSE
```
```
is.matrix(x)
```
```
[1] FALSE
```
Note that comments are supported within **R** (any input given after a `#` character is ignored).
Indexing for matrices is done in a similar fashion as for vectors, albeit with a second dimension (denoted by a comma).
```
A[2, 3]
```
```
[1] 8
```
```
A[, 1]
```
```
[1] 5 7
```
```
A[1, ]
```
```
[1] 5 9 -4
```
### B.3\.5 Dataframes and tibbles
Data sets are often stored in a `data.frame`, which is a special type of `list` that is more general than a `matrix`.
This rectangular object, similar to a data table in other systems, can be thought of as a two\-dimensional array with columns of vectors of the same length, but of possibly different types (as opposed to a matrix, which consists of vectors of the *same* type; or a list, whose elements needn’t be of the same length).
The function `read_csv()` in the **readr** package returns a `data.frame` object.
A simple
`data.frame`
can be created using the `data.frame()` command.
Variables can be accessed using the `$` operator, as shown below (see also `help(Extract)`).
In addition, operations can be performed by column (e.g., calculation of sample statistics).
We can check to see if an object is a `data.frame` with `is.data.frame()`.
```
y <- rep(11, length(x))
y
```
```
[1] 11 11 11 11 11 11
```
```
ds <- data.frame(x, y)
ds
```
```
x y
1 5 11
2 7 11
3 9 11
4 13 11
5 -4 11
6 8 11
```
```
ds$x[3]
```
```
[1] 9
```
```
is.data.frame(ds)
```
```
[1] TRUE
```
Tibbles are a form of simple data frames (a modern interpretation) that are described as “lazy and surly” (<https://tibble.tidyverse.org>). They support multiple data technologies (e.g., SQL databases), make more explicit their assumptions, and have an enhanced print method (so that output doesn’t scroll so much).
Many packages in the **tidyverse** create tibbles by default.
```
tbl <- as_tibble(ds)
is.data.frame(tbl)
```
```
[1] TRUE
```
```
is_tibble(ds)
```
```
[1] FALSE
```
```
is_tibble(tbl)
```
```
[1] TRUE
```
The use of `data.frame()` differs from the use of `cbind()`, which yields a `matrix` object (unless it is given data frames as inputs).
```
newmat <- cbind(x, y)
newmat
```
```
x y
[1,] 5 11
[2,] 7 11
[3,] 9 11
[4,] 13 11
[5,] -4 11
[6,] 8 11
```
```
is.data.frame(newmat)
```
```
[1] FALSE
```
```
is.matrix(newmat)
```
```
[1] TRUE
```
Data frames are created from matrices using `as.data.frame()`, while matrices
are constructed from data frames using `as.matrix()`.
Although we strongly discourage its use, data frames can be attached to the workspace using the `attach()` command.
The Tidyverse **R** Style guide (<https://style.tidyverse.org>) provides similar advice.
Name conflicts are a common problem with `attach()` (see `conflicts()`, which reports on objects that exist with the same name in two or more places on the search path).
The `search()` function lists attached packages and objects.
To avoid cluttering and confusing the name\-space, the command `detach()` should be used once a data frame or package is no longer needed.
A number of **R** functions include a `data` argument to specify a data frame as a local environment.
For functions without a `data` option, the `with()` and `within()` commands can be used to simplify reference to an object within a data frame without attaching.
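For example, `with()` evaluates an expression using the columns of a data frame directly, without attaching it (here using the `ds` data frame created above).

```
with(ds, mean(x)) # equivalent to mean(ds$x), without attach()
```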
### B.3\.6 Attributes and classes
Many objects have a set of associated attributes (such as names of variables, dimensions, or classes) that can be displayed or sometimes changed.
For example, we can find the dimension of the matrix defined earlier.
```
attributes(A)
```
```
$dim
[1] 2 3
```
Other types of objects within **R** include `list`s (ordered objects that are not necessarily rectangular), regression models (objects of class `lm`), and formulae (e.g., `y ~ x1 + x2`). **R** supports [*object\-oriented programming*](https://en.wikipedia.org/w/index.php?search=object-oriented%20programming) (see `help(UseMethod)`).
As a result, objects in **R** have an associated [*class*](https://en.wikipedia.org/w/index.php?search=class) attribute, which
changes the default behavior for some operations on that object.
Many functions (called [*generics*](https://en.wikipedia.org/w/index.php?search=generics)) have special capabilities when applied to objects of a particular class.
For example, when `summary()` is applied to an `lm` object, the `summary.lm()` function is called.
Conversely, `summary.aov()` is called when an `aov` object is given as argument.
These class\-specific implementations of generic functions are called [*methods*](https://en.wikipedia.org/w/index.php?search=methods).
The `class()` function returns the classes to which an object belongs, while the `methods()` function displays all of the classes supported by a generic function.
```
head(methods(summary))
```
```
[1] "summary,ANY-method" "summary,DBIObject-method"
[3] "summary,MySQLConnection-method" "summary,MySQLDriver-method"
[5] "summary,MySQLResult-method" "summary.aov"
```
Objects in **R** can belong to multiple classes, although those classes need not be nested. As noted above, generic functions are [*dispatched*](https://en.wikipedia.org/w/index.php?search=dispatched) according the class attribute of each object.
Thus, in the example below we create the `tbl` object, which belongs to multiple classes.
When the `print()` function is called on `tbl`, **R** looks for a method called `print.tbl_df()`.
If no such method is found, **R** looks for a method called `print.tbl()`.
If no such method is found, **R** looks for a method called `print.data.frame()`. This process continues until a suitable method is found. If there is none, then `print.default()` is called.
```
tbl <- as_tibble(ds)
class(tbl)
```
```
[1] "tbl_df" "tbl" "data.frame"
```
```
print(tbl)
```
```
# A tibble: 6 × 2
x y
<dbl> <dbl>
1 5 11
2 7 11
3 9 11
4 13 11
5 -4 11
6 8 11
```
```
print.data.frame(tbl)
```
```
x y
1 5 11
2 7 11
3 9 11
4 13 11
5 -4 11
6 8 11
```
```
print.default(tbl)
```
```
$x
[1] 5 7 9 13 -4 8
$y
[1] 11 11 11 11 11 11
attr(,"class")
[1] "tbl_df" "tbl" "data.frame"
```
There are a number of functions that assist with learning about an object in **R**. The `attributes()` command displays the attributes associated with an object. The `typeof()` function provides information about the underlying data structure of objects (e.g., logical, integer, double, complex, character, and list).
The `str()` function displays the structure of an object, and the `mode()` function displays its storage mode. For data frames, the `glimpse()` function provides a useful summary of each variable.
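As a quick illustration, a few of these functions applied to objects created earlier in this appendix:

```
str(ds) # compact display of the structure of the data frame
typeof(ds) # underlying storage type (a data frame is stored as a list)
mode(A) # storage mode of the matrix defined above
```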
A few quick notes on specific types of objects are worth relating here:
* A vector is a one\-dimensional array of items of the same data type. There are [six basic data types](https://cran.r-project.org/doc/manuals/r-release/R-lang.html#Basic-types) that a vector can contain: `logical`, `character`, `integer`, `double`, `complex`, and `raw`. Vectors have a `length()` but not a `dim()`. Vectors can have—but needn’t have—`names()`.
* A `factor` is a special type of vector for categorical data. A factor has `levels()`. We change the reference level of a factor with `relevel()` (see the short sketch after this list). Factors are stored internally as integers that correspond to the id’s of the factor levels.
Factors can be problematic and their use is discouraged since they can complicate some aspects of data wrangling. A number of **R** developers have encouraged the use of the `stringsAsFactors = FALSE` option.
* A `matrix` is a two\-dimensional array of items of the same data type. A matrix has a `length()` that is equal to `nrow()` times `ncol()`, or the product of `dim()`.
* A `data.frame` is a `list` of vectors of the same length. This is like a matrix, except that columns can be of different data types. Data frames always have `names()` and often have `row.names()`.
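A short factor sketch (referenced in the list above); the eye-color values are made up for illustration.

```
eyes <- factor(c("blue", "brown", "brown", "green"))
levels(eyes) # the distinct categories
as.integer(eyes) # the underlying integer codes
relevel(eyes, ref = "brown") # change the reference level
```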
Do not confuse a `factor` with a `character` vector.
Note that data sets typically have class `data.frame` but are of type `list`. This is because, as noted above, **R** stores data frames as special types of lists—a list of several vectors having the same length, but possibly having different types.
```
class(mtcars)
```
```
[1] "data.frame"
```
```
typeof(mtcars)
```
```
[1] "list"
```
If you ever get confused when working with data frames and matrices, remember that a `data.frame` is a `list` (that can accommodate multiple types of objects), whereas a `matrix` is more like a `vector` (in that it can only support one type of object).
### B.3\.7 Options
The `options()` function in **R** can be used to change various default behaviors. For example, the `digits` argument controls the number of digits to display in output.
The current options are returned when `options()` is called, to allow them to be restored. The command `help(options)` lists all of the settable options.
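For example, the following illustrative snippet changes and then restores the `digits` option.

```
old <- options(digits = 3) # options() returns the previous settings
pi
options(old) # restore the previous value of digits
pi
```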
### B.3\.8 Functions
Fundamental actions within **R** are carried out by calling [*functions*](https://en.wikipedia.org/w/index.php?search=functions) (either built\-in or user defined—see Appendix [C](ch-function.html#ch:function) for guidance on the latter).
Multiple [*arguments*](https://en.wikipedia.org/w/index.php?search=arguments) may be given, separated by commas.
The function carries out operations using the provided arguments
and returns values (an object such as a vector or list) that are displayed (by default) or which can be saved by assignment to an object.
It’s a good idea to name arguments to functions.
This practice minimizes errors assigning unnamed arguments to options and makes code more readable.
As an example, the `quantile()` function takes a numeric vector and returns the minimum, 25th percentile,
median, 75th percentile, and maximum of the values in that vector.
However, if an optional vector of quantiles is given, those quantiles are calculated instead.
```
vals <- rnorm(1000) # generate 1000 standard normal random variables
quantile(vals)
```
```
0% 25% 50% 75% 100%
-3.520 -0.675 0.012 0.737 3.352
```
```
quantile(vals, c(.025, .975))
```
```
2.5% 97.5%
-2.00 1.98
```
```
# Return values can be saved for later use.
res <- quantile(vals, c(.025, .975))
res[1]
```
```
2.5%
-2
```
Arguments (options) are available for most functions.
The documentation specifies the default action if named arguments are not specified.
If not named, the arguments are provided to the function in the order specified in the function call.
For the `quantile()` function, there is a `type` argument that allows specification of one of nine algorithms for calculating quantiles.
```
res <- quantile(vals, probs = c(.025, .975), type = 3)
res
```
```
2.5% 97.5%
-2.02 1.98
```
Some functions allow a variable number of arguments.
An example is the
`paste()` function.
The calling sequence is described in the documentation as follows.
```
paste(..., sep = " ", collapse = NULL)
```
To override the default behavior of a space being added between
elements output by `paste()`, the user can specify a different
value for `sep`.
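A couple of short examples illustrate how `sep` and `collapse` change the behavior.

```
paste("fec", c(12, 16), sep = "") # returns a character vector of length 2
paste(c("a", "b", "c"), collapse = " + ") # returns a single string
```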
B.4 Add\-ons: Packages
----------------------
### B.4\.1 Introduction to packages
Additional functionality in **R** is added through packages, which consist of
functions, data sets, examples, vignettes, and help files that can be downloaded from CRAN.
The function `install.packages()` can be used to download and install packages.
Alternatively, **RStudio** provides an easy\-to\-use `Packages` tab to install and load packages.
Throughout the book, we assume that the **tidyverse** and **mdsr** packages are loaded.
In many cases, additional add\-on packages (see Appendix [A](ch-mdsr.html#ch:mdsr)) need to be installed prior to running the examples in this book.
Packages that are not on CRAN can be installed using the `install_github()` function in the **remotes** package.
```
install.packages("mdsr") # CRAN version
remotes::install_github("mdsr-book/mdsr") # development version
```
The `library()` function will load an installed package.
For example, to install and load Frank Harrell’s **Hmisc** package, two commands are needed:
```
install.packages("Hmisc")
library(Hmisc)
```
If a package is not installed, running the `library()` command will yield an
error.
Here we try to load the **xaringanthemer** package (which has not been installed):
```
> library(xaringanthemer)
Error in library(xaringanthemer) : there is no package called 'xaringanthemer'
```
To rectify the problem, we install the package from CRAN.
```
> install.packages("xaringanthemer")
trying URL 'https://cloud.r-project.org/src/contrib/xaringanthemer_0.3.0.tar.gz'
Content type 'application/x-gzip' length 1362643 bytes (1.3 MB)
==================================================
downloaded 1.3 Mb
```
```
library(xaringanthemer)
```
The `require()` function will test whether a package is available—this will load the library if it is installed, and generate a warning message if it is not (as opposed to `library()`, which will return an error).
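This behavior makes `require()` convenient for conditional installation, as in the following common idiom (not specific to this book).

```
# install xaringanthemer only if it is not already available
if (!require("xaringanthemer")) {
  install.packages("xaringanthemer")
  library(xaringanthemer)
}
```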
The names of all variables within a given data set (or more generally for sub\-objects within an object) are provided by the `names()` command.
The names of all objects defined within an **R** session can be generated using the `objects()` and `ls()` commands, which return a vector of character strings.
**RStudio** includes an `Environment` tab that lists all the objects in the current environment.
The `print()` and `summary()` functions return the object or summaries of that object, respectively.
Running `print(object)` at the command line is equivalent to just entering
the name of the object, i.e., `object`.
### B.4\.2 Packages and name conflicts
Different package authors may choose the same name for functions that
exist within base **R** (or within other packages). This will cause the other
function or object to be [*masked*](https://en.wikipedia.org/w/index.php?search=masked). This can sometimes lead to confusion, when the expected version of a function is not the one that is called.
The `find()` function can be used to determine where in the environment (workspace) a given object can be found.
```
find("mean")
```
```
[1] "package:base"
```
Sometimes it is desirable to remove a package from the workspace.
For example, a package might define a function with the same name as an existing function.
Packages can be detached using the syntax `detach(package:PKGNAME)`,
where `PKGNAME` is the name of the package.
Objects with the same name that appear in multiple places in the environment can be accessed using the `location::objectname` syntax.
As an example, to access the `mean()` function from the **base** package, the user would specify `base::mean()` instead of `mean()`.
It is sometimes preferable to reference a function or object in this way rather than loading the package.
As an example where this might be useful, there are functions in both the
**base** and **Hmisc** packages called `units()`. The `find()`
command displays both (in the order in which they would be accessed).
```
library(Hmisc)
find("units")
```
```
[1] "package:Hmisc" "package:base"
```
When the **Hmisc** package is loaded, the `units()` function from the **base** package is masked and would not be used by default.
To specify that the version of the function from the **base** package should be used,
prefix the function with the package name followed by two colons: `base::units()`.
The `conflicts()` function
reports on objects that exist with the same name in two or more places on the search path.
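Putting these pieces together, a short sketch of inspecting and resolving the `units()` conflict might look like the following (assuming that **Hmisc** has been installed):
```
library(Hmisc) # attaching Hmisc masks base::units()
conflicts() # report objects masked somewhere on the search path
base::units # refer to the base version explicitly
detach(package:Hmisc) # remove Hmisc from the search path when no longer needed
find("units") # now reports only "package:base"
```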
Running the command `library(help = "PKGNAME")`
will display information about an installed package.
Alternatively, the `Packages` tab in **RStudio** can be used to list, install, and update packages.
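For example, to display the description and index of help topics for the **mdsr** package:
```
library(help = "mdsr")
```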
The `session_info()` function from the **sessioninfo** package provides improved reporting of version information about **R**, as well as details of the loaded packages.
```
sessioninfo::session_info()
```
```
─ Session info ───────────────────────────────────────────────────────────
setting value
version R version 4.1.0 (2021-05-18)
os Ubuntu 20.04.2 LTS
system x86_64, linux-gnu
ui X11
language (EN)
collate en_US.UTF-8
ctype en_US.UTF-8
tz America/New_York
date 2021-07-28
─ Packages ───────────────────────────────────────────────────────────────
package * version date lib source
assertthat 0.2.1 2019-03-21 [1] CRAN (R 4.1.0)
backports 1.2.1 2020-12-09 [1] CRAN (R 4.1.0)
base64enc 0.1-3 2015-07-28 [1] CRAN (R 4.1.0)
bookdown 0.22 2021-04-22 [1] CRAN (R 4.1.0)
broom 0.7.8 2021-06-24 [1] CRAN (R 4.1.0)
bslib 0.2.5.1 2021-05-18 [1] CRAN (R 4.1.0)
cellranger 1.1.0 2016-07-27 [1] CRAN (R 4.1.0)
checkmate 2.0.0 2020-02-06 [1] CRAN (R 4.1.0)
cli 3.0.1 2021-07-17 [1] CRAN (R 4.1.0)
cluster 2.1.2 2021-04-17 [4] CRAN (R 4.0.5)
colorspace 2.0-2 2021-06-24 [1] CRAN (R 4.1.0)
crayon 1.4.1 2021-02-08 [1] CRAN (R 4.1.0)
data.table 1.14.0 2021-02-21 [1] CRAN (R 4.1.0)
DBI * 1.1.1 2021-01-15 [1] CRAN (R 4.1.0)
dbplyr 2.1.1 2021-04-06 [1] CRAN (R 4.1.0)
digest 0.6.27 2020-10-24 [1] CRAN (R 4.1.0)
dplyr * 1.0.7 2021-06-18 [1] CRAN (R 4.1.0)
ellipsis 0.3.2 2021-04-29 [1] CRAN (R 4.1.0)
evaluate 0.14 2019-05-28 [1] CRAN (R 4.1.0)
fansi 0.5.0 2021-05-25 [1] CRAN (R 4.1.0)
forcats * 0.5.1 2021-01-27 [1] CRAN (R 4.1.0)
foreign 0.8-81 2020-12-22 [4] CRAN (R 4.0.3)
Formula * 1.2-4 2020-10-16 [1] CRAN (R 4.1.0)
fs 1.5.0 2020-07-31 [1] CRAN (R 4.1.0)
generics 0.1.0 2020-10-31 [1] CRAN (R 4.1.0)
ggplot2 * 3.3.5 2021-06-25 [1] CRAN (R 4.1.0)
glue 1.4.2 2020-08-27 [1] CRAN (R 4.1.0)
gridExtra 2.3 2017-09-09 [1] CRAN (R 4.1.0)
gtable 0.3.0 2019-03-25 [1] CRAN (R 4.1.0)
haven 2.4.1 2021-04-23 [1] CRAN (R 4.1.0)
Hmisc 4.5-0 2021-02-28 [1] CRAN (R 4.1.0)
hms 1.1.0 2021-05-17 [1] CRAN (R 4.1.0)
htmlTable 2.2.1 2021-05-18 [1] CRAN (R 4.1.0)
htmltools 0.5.1.1 2021-01-22 [1] CRAN (R 4.1.0)
htmlwidgets 1.5.3 2020-12-10 [1] CRAN (R 4.1.0)
httr 1.4.2 2020-07-20 [1] CRAN (R 4.1.0)
jpeg 0.1-9 2021-07-24 [1] CRAN (R 4.1.0)
jquerylib 0.1.4 2021-04-26 [1] CRAN (R 4.1.0)
jsonlite 1.7.2 2020-12-09 [1] CRAN (R 4.1.0)
knitr 1.33 2021-04-24 [1] CRAN (R 4.1.0)
lattice * 0.20-44 2021-05-02 [4] CRAN (R 4.1.0)
latticeExtra 0.6-29 2019-12-19 [1] CRAN (R 4.1.0)
lifecycle 1.0.0 2021-02-15 [1] CRAN (R 4.1.0)
lubridate 1.7.10 2021-02-26 [1] CRAN (R 4.1.0)
magrittr 2.0.1 2020-11-17 [1] CRAN (R 4.1.0)
Matrix 1.3-4 2021-06-01 [4] CRAN (R 4.1.0)
mdsr * 0.2.5 2021-03-29 [1] CRAN (R 4.1.0)
modelr 0.1.8 2020-05-19 [1] CRAN (R 4.1.0)
mosaicData * 0.20.2 2021-01-16 [1] CRAN (R 4.1.0)
munsell 0.5.0 2018-06-12 [1] CRAN (R 4.1.0)
nnet 7.3-16 2021-05-03 [4] CRAN (R 4.0.5)
pillar 1.6.1 2021-05-16 [1] CRAN (R 4.1.0)
pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.1.0)
png 0.1-7 2013-12-03 [1] CRAN (R 4.1.0)
purrr * 0.3.4 2020-04-17 [1] CRAN (R 4.1.0)
R6 2.5.0 2020-10-28 [1] CRAN (R 4.1.0)
RColorBrewer 1.1-2 2014-12-07 [1] CRAN (R 4.1.0)
Rcpp 1.0.7 2021-07-07 [1] CRAN (R 4.1.0)
readr * 2.0.0 2021-07-20 [1] CRAN (R 4.1.0)
readxl 1.3.1 2019-03-13 [1] CRAN (R 4.1.0)
repr 1.1.3 2021-01-21 [1] CRAN (R 4.1.0)
reprex 2.0.0 2021-04-02 [1] CRAN (R 4.1.0)
rlang 0.4.11 2021-04-30 [1] CRAN (R 4.1.0)
rmarkdown 2.9 2021-06-15 [1] CRAN (R 4.1.0)
RMySQL 0.10.22 2021-06-22 [1] CRAN (R 4.1.0)
rpart 4.1-15 2019-04-12 [4] CRAN (R 4.0.0)
rstudioapi 0.13 2020-11-12 [1] CRAN (R 4.1.0)
rvest 1.0.1 2021-07-26 [1] CRAN (R 4.1.0)
sass 0.4.0 2021-05-12 [1] CRAN (R 4.1.0)
scales 1.1.1 2020-05-11 [1] CRAN (R 4.1.0)
sessioninfo 1.1.1 2018-11-05 [1] CRAN (R 4.1.0)
skimr 2.1.3 2021-03-07 [1] CRAN (R 4.1.0)
stringi 1.7.3 2021-07-16 [1] CRAN (R 4.1.0)
stringr * 1.4.0 2019-02-10 [1] CRAN (R 4.1.0)
survival * 3.2-11 2021-04-26 [4] CRAN (R 4.0.5)
tibble * 3.1.3 2021-07-23 [1] CRAN (R 4.1.0)
tidyr * 1.1.3 2021-03-03 [1] CRAN (R 4.1.0)
tidyselect 1.1.1 2021-04-30 [1] CRAN (R 4.1.0)
tidyverse * 1.3.1 2021-04-15 [1] CRAN (R 4.1.0)
tzdb 0.1.2 2021-07-20 [1] CRAN (R 4.1.0)
utf8 1.2.2 2021-07-24 [1] CRAN (R 4.1.0)
vctrs 0.3.8 2021-04-29 [1] CRAN (R 4.1.0)
withr 2.4.2 2021-04-18 [1] CRAN (R 4.1.0)
xaringanthemer * 0.4.0 2021-06-24 [1] CRAN (R 4.1.0)
xfun 0.24 2021-06-15 [1] CRAN (R 4.1.0)
xml2 1.3.2 2020-04-23 [1] CRAN (R 4.1.0)
yaml 2.2.1 2020-02-01 [1] CRAN (R 4.1.0)
[1] /home/bbaumer/R/x86_64-pc-linux-gnu-library/4.1
[2] /usr/local/lib/R/site-library
[3] /usr/lib/R/site-library
[4] /usr/lib/R/library
```
The `update.packages()` function should be run periodically to ensure that packages are up\-to\-date.
As of December 2020, there were more than 16,800 packages available from CRAN.
This represents a tremendous investment of time and code by many developers (Fox 2009\).
While each of these has met a minimal standard for inclusion, it is important to keep in mind that packages in **R** are created by individuals or small groups, and not endorsed by the **R** core group. As a result, they do not necessarily undergo the same level of testing and quality assurance that the core **R** system does.
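Returning to `update.packages()`, a minimal call to bring an installation up to date might look like this (the `ask = FALSE` argument suppresses the per\-package prompts):
```
update.packages(ask = FALSE) # update all outdated packages from their repositories
```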
### B.4\.3 CRAN task views
The [“Task Views” on CRAN](http://cran.r-project.org/web/views) are
a very useful resource for finding packages. These are curated
listings of relevant packages within a particular application area
(such as multivariate statistics, psychometrics, or survival analysis).
Table [B.1](ch-R.html#tab:cran) displays the task views available as of July 2021\.
Table B.1: A complete list of CRAN task views.
| Task View | Subject |
| --- | --- |
| [Bayesian](https://cran.r-project.org/web/views/Bayesian.html) | Bayesian Inference |
| [ChemPhys](https://cran.r-project.org/web/views/ChemPhys.html) | Chemometrics and Computational Physics |
| [ClinicalTrials](https://cran.r-project.org/web/views/ClinicalTrials.html) | Clinical Trial Design, Monitoring, and Analysis |
| [Cluster](https://cran.r-project.org/web/views/Cluster.html) | Cluster Analysis and Finite Mixture Models |
| [Databases](https://cran.r-project.org/web/views/Databases.html) | Databases with R |
| [DifferentialEquations](https://cran.r-project.org/web/views/DifferentialEquations.html) | Differential Equations |
| [Distributions](https://cran.r-project.org/web/views/Distributions.html) | Probability Distributions |
| [Econometrics](https://cran.r-project.org/web/views/Econometrics.html) | Econometrics |
| [Environmetrics](https://cran.r-project.org/web/views/Environmetrics.html) | Analysis of Ecological and Environmental Data |
| [ExperimentalDesign](https://cran.r-project.org/web/views/ExperimentalDesign.html) | Design of Experiments (DoE) and Analysis of Experimental Data |
| [ExtremeValue](https://cran.r-project.org/web/views/ExtremeValue.html) | Extreme Value Analysis |
| [Finance](https://cran.r-project.org/web/views/Finance.html) | Empirical Finance |
| [FunctionalData](https://cran.r-project.org/web/views/FunctionalData.html) | Functional Data Analysis |
| [Genetics](https://cran.r-project.org/web/views/Genetics.html) | Statistical Genetics |
| [gR](https://cran.r-project.org/web/views/gR.html) | gRaphical Models in R |
| [Graphics](https://cran.r-project.org/web/views/Graphics.html) | Graphic Displays and Dynamic Graphics and Graphic Devices and Visualization |
| [HighPerformanceComputing](https://cran.r-project.org/web/views/HighPerformanceComputing.html) | High\-Performance and Parallel Computing with R |
| [Hydrology](https://cran.r-project.org/web/views/Hydrology.html) | Hydrological Data and Modeling |
| [MachineLearning](https://cran.r-project.org/web/views/MachineLearning.html) | Machine Learning and Statistical Learning |
| [MedicalImaging](https://cran.r-project.org/web/views/MedicalImaging.html) | Medical Image Analysis |
| [MetaAnalysis](https://cran.r-project.org/web/views/MetaAnalysis.html) | Meta\-Analysis |
| [MissingData](https://cran.r-project.org/web/views/MissingData.html) | Missing Data |
| [ModelDeployment](https://cran.r-project.org/web/views/ModelDeployment.html) | Model Deployment with R |
| [Multivariate](https://cran.r-project.org/web/views/Multivariate.html) | Multivariate Statistics |
| [NaturalLanguageProcessing](https://cran.r-project.org/web/views/NaturalLanguageProcessing.html) | Natural Language Processing |
| [NumericalMathematics](https://cran.r-project.org/web/views/NumericalMathematics.html) | Numerical Mathematics |
| [OfficialStatistics](https://cran.r-project.org/web/views/OfficialStatistics.html) | Official Statistics and Survey Methodology |
| [Optimization](https://cran.r-project.org/web/views/Optimization.html) | Optimization and Mathematical Programming |
| [Pharmacokinetics](https://cran.r-project.org/web/views/Pharmacokinetics.html) | Analysis of Pharmacokinetic Data |
| [Phylogenetics](https://cran.r-project.org/web/views/Phylogenetics.html) | Phylogenetics, Especially Comparative Methods |
| [Psychometrics](https://cran.r-project.org/web/views/Psychometrics.html) | Psychometric Models and Methods |
| [ReproducibleResearch](https://cran.r-project.org/web/views/ReproducibleResearch.html) | Reproducible Research |
| [Robust](https://cran.r-project.org/web/views/Robust.html) | Robust Statistical Methods |
| [SocialSciences](https://cran.r-project.org/web/views/SocialSciences.html) | Statistics for the Social Sciences |
| [Spatial](https://cran.r-project.org/web/views/Spatial.html) | Analysis of Spatial Data |
| [SpatioTemporal](https://cran.r-project.org/web/views/SpatioTemporal.html) | Handling and Analyzing Spatio\-Temporal Data |
| [Survival](https://cran.r-project.org/web/views/Survival.html) | Survival Analysis |
| [TeachingStatistics](https://cran.r-project.org/web/views/TeachingStatistics.html) | Teaching Statistics |
| [TimeSeries](https://cran.r-project.org/web/views/TimeSeries.html) | Time Series Analysis |
| [Tracking](https://cran.r-project.org/web/views/Tracking.html) | Processing and Analysis of Tracking Data |
| [WebTechnologies](https://cran.r-project.org/web/views/WebTechnologies.html) | Web Technologies and Services |
B.5 Further resources
---------------------
[*Advanced R*](https://adv-r.hadley.nz) is an excellent source for learning more about how **R** works (H. Wickham 2019\).
Extensive resources and documentation about **R** can be found at the Comprehensive R Archive Network (CRAN).
The **forcats** package, included in the **tidyverse**, is designed to facilitate data wrangling with factors.
More information regarding tibbles can be found at <https://tibble.tidyverse.org>.
JupyterLab and [JupyterHub](https://jupyter.org/hub) are alternative environments that support analysis via sophisticated notebooks for multiple languages including Julia, Python, and **R**.
B.6 Exercises
-------------
**Problem 1 (Easy)**: The following code chunk throws an error.
```
mtcars %>%
select(mpg, cyl)
```
```
mpg cyl
Mazda RX4 21.0 6
Mazda RX4 Wag 21.0 6
Datsun 710 22.8 4
Hornet 4 Drive 21.4 6
Hornet Sportabout 18.7 8
Valiant 18.1 6
Duster 360 14.3 8
Merc 240D 24.4 4
Merc 230 22.8 4
Merc 280 19.2 6
Merc 280C 17.8 6
Merc 450SE 16.4 8
Merc 450SL 17.3 8
Merc 450SLC 15.2 8
Cadillac Fleetwood 10.4 8
Lincoln Continental 10.4 8
Chrysler Imperial 14.7 8
Fiat 128 32.4 4
Honda Civic 30.4 4
Toyota Corolla 33.9 4
Toyota Corona 21.5 4
Dodge Challenger 15.5 8
AMC Javelin 15.2 8
Camaro Z28 13.3 8
Pontiac Firebird 19.2 8
Fiat X1-9 27.3 4
Porsche 914-2 26.0 4
Lotus Europa 30.4 4
Ford Pantera L 15.8 8
Ferrari Dino 19.7 6
Maserati Bora 15.0 8
Volvo 142E 21.4 4
```
What is the problem?
**Problem 2 (Easy)**: Which of these kinds of names should be wrapped with quotation marks when used in R?
* function name
* file name
* the name of an argument in a named argument
* object name
**Problem 3 (Easy)**: A user has typed the following commands into the RStudio console.
```
obj1 <- 2:10
obj2 <- c(2, 5)
obj3 <- c(TRUE, FALSE)
obj4 <- 42
```
What values are returned by the following commands?
```
obj1 * 10
obj1[2:4]
obj1[-3]
obj1 + obj2
obj1 * obj3
obj1 + obj4
obj2 + obj3
sum(obj2)
sum(obj3)
```
**Problem 4 (Easy)**: A user has typed the following commands into the RStudio console:
```
mylist <- list(x1 = "sally", x2 = 42, x3 = FALSE, x4 = 1:5)
```
What values do each of the following commands return?
```
is.list(mylist)
names(mylist)
length(mylist)
mylist[[2]]
mylist[["x1"]]
mylist$x2
length(mylist[["x4"]])
class(mylist)
typeof(mylist)
class(mylist[[4]])
typeof(mylist[[3]])
```
**Problem 5 (Easy)**: What’s wrong with this statement?
```
help(NHANES, package <- "NHANES")
```
**Problem 6 (Easy)**: Consult the documentation for `CPS85` in the `mosaicData` package to determine the meaning of CPS.
**Problem 7 (Easy)**: The following code chunk throws an error. Why?
```
library(tidyverse)
mtcars %>%
filter(cylinders == 4)
```
```
Error: Problem with `filter()` input `..1`.
ℹ Input `..1` is `cylinders == 4`.
x object 'cylinders' not found
```
What is the problem?
**Problem 8 (Easy)**: The `date` function returns an indication of the current time and date. What arguments does `date` take? What kind of object is the result from `date`? What kind of object is the result from `Sys.time`?
**Problem 9 (Easy)**: A user has typed the following commands into the RStudio console.
```
a <- c(10, 15)
b <- c(TRUE, FALSE)
c <- c("happy", "sad")
```
What do each of the following commands return? Describe the class of the object as well as its value.
```
data.frame(a, b, c)
cbind(a, b)
rbind(a, b)
cbind(a, b, c)
list(a, b, c)[[2]]
```
**Problem 10 (Easy)**: For each of the following assignment statements, describe the error (or note why it does not generate an error).
```
result1 <- sqrt 10
result2 <-- "Hello to you!"
3result <- "Hello to you"
result4 <- "Hello to you
result5 <- date()
```
**Problem 11 (Easy)**: The following code chunk throws an error.
```
library(tidyverse)
mtcars %>%
filter(cyl = 4)
```
```
Error: Problem with `filter()` input `..1`.
x Input `..1` is named.
ℹ This usually means that you've used `=` instead of `==`.
ℹ Did you mean `cyl == 4`?
```
The error suggests that you need to use `==` inside of `filter()`. Why?
**Problem 12 (Medium)**: The following code undertakes some data analysis using the HELP (Health Evaluation and Linkage to Primary Care) trial.
```
library(mosaic)
ds <-
read.csv("http://nhorton.people.amherst.edu/r2/datasets/helpmiss.csv")
summarise(group_by(
select(filter(mutate(ds,
sex = ifelse(female == 1, "F", "M")
), !is.na(pcs)), age, pcs, sex),
sex
), meanage = mean(age), meanpcs = mean(pcs), n = n())
```
Describe in words what computations are being done.
Using the pipe notation, translate this code into a more readable version.
**Problem 13 (Medium)**: The following concepts should have some meaning to you: package, function, command, argument, assignment, object, object name, data frame, named argument, quoted character string.
Construct an example of R commands
that make use of at least four of these. Label which part of your example R command corresponds to each.
B.7 Supplementary exercises
---------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/appR.html\#datavizI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/appR.html#datavizI-online-exercises)
[3] /usr/lib/R/site-library
[4] /usr/lib/R/library
```
The `update.packages()` function should be run periodically to ensure that packages are up\-to\-date.
As of December 2020, there were more than 16,800 packages available from CRAN.
This represents a tremendous investment of time and code by many developers (Fox 2009\).
While each of these has met a minimal standard for inclusion, it is important to keep in mind that packages in **R** are created by individuals or small groups, and not endorsed by the **R** core group. As a result, they do not necessarily undergo the same level of testing and quality assurance that the core **R** system does.
### B.4\.3 CRAN task views
The [“Task Views” on CRAN](http://cran.r-project.org/web/views) are
a very useful resource for finding packages. These are curated
listings of relevant packages within a particular application area
(such as multivariate statistics, psychometrics, or survival analysis).
Table [B.1](ch-R.html#tab:cran) displays the task views available as of July 2021\.
Table B.1: A complete list of CRAN task views.
| Task View | Subject |
| --- | --- |
| [Bayesian](https://cran.r-project.org/web/views/Bayesian.html) | Bayesian Inference |
| [ChemPhys](https://cran.r-project.org/web/views/ChemPhys.html) | Chemometrics and Computational Physics |
| [ClinicalTrials](https://cran.r-project.org/web/views/ClinicalTrials.html) | Clinical Trial Design, Monitoring, and Analysis |
| [Cluster](https://cran.r-project.org/web/views/Cluster.html) | Cluster Analysis and Finite Mixture Models |
| [Databases](https://cran.r-project.org/web/views/Databases.html) | Databases with R |
| [DifferentialEquations](https://cran.r-project.org/web/views/DifferentialEquations.html) | Differential Equations |
| [Distributions](https://cran.r-project.org/web/views/Distributions.html) | Probability Distributions |
| [Econometrics](https://cran.r-project.org/web/views/Econometrics.html) | Econometrics |
| [Environmetrics](https://cran.r-project.org/web/views/Environmetrics.html) | Analysis of Ecological and Environmental Data |
| [ExperimentalDesign](https://cran.r-project.org/web/views/ExperimentalDesign.html) | Design of Experiments (DoE) and Analysis of Experimental Data |
| [ExtremeValue](https://cran.r-project.org/web/views/ExtremeValue.html) | Extreme Value Analysis |
| [Finance](https://cran.r-project.org/web/views/Finance.html) | Empirical Finance |
| [FunctionalData](https://cran.r-project.org/web/views/FunctionalData.html) | Functional Data Analysis |
| [Genetics](https://cran.r-project.org/web/views/Genetics.html) | Statistical Genetics |
| [gR](https://cran.r-project.org/web/views/gR.html) | gRaphical Models in R |
| [Graphics](https://cran.r-project.org/web/views/Graphics.html) | Graphic Displays and Dynamic Graphics and Graphic Devices and Visualization |
| [HighPerformanceComputing](https://cran.r-project.org/web/views/HighPerformanceComputing.html) | High\-Performance and Parallel Computing with R |
| [Hydrology](https://cran.r-project.org/web/views/Hydrology.html) | Hydrological Data and Modeling |
| [MachineLearning](https://cran.r-project.org/web/views/MachineLearning.html) | Machine Learning and Statistical Learning |
| [MedicalImaging](https://cran.r-project.org/web/views/MedicalImaging.html) | Medical Image Analysis |
| [MetaAnalysis](https://cran.r-project.org/web/views/MetaAnalysis.html) | Meta\-Analysis |
| [MissingData](https://cran.r-project.org/web/views/MissingData.html) | Missing Data |
| [ModelDeployment](https://cran.r-project.org/web/views/ModelDeployment.html) | Model Deployment with R |
| [Multivariate](https://cran.r-project.org/web/views/Multivariate.html) | Multivariate Statistics |
| [NaturalLanguageProcessing](https://cran.r-project.org/web/views/NaturalLanguageProcessing.html) | Natural Language Processing |
| [NumericalMathematics](https://cran.r-project.org/web/views/NumericalMathematics.html) | Numerical Mathematics |
| [OfficialStatistics](https://cran.r-project.org/web/views/OfficialStatistics.html) | Official Statistics and Survey Methodology |
| [Optimization](https://cran.r-project.org/web/views/Optimization.html) | Optimization and Mathematical Programming |
| [Pharmacokinetics](https://cran.r-project.org/web/views/Pharmacokinetics.html) | Analysis of Pharmacokinetic Data |
| [Phylogenetics](https://cran.r-project.org/web/views/Phylogenetics.html) | Phylogenetics, Especially Comparative Methods |
| [Psychometrics](https://cran.r-project.org/web/views/Psychometrics.html) | Psychometric Models and Methods |
| [ReproducibleResearch](https://cran.r-project.org/web/views/ReproducibleResearch.html) | Reproducible Research |
| [Robust](https://cran.r-project.org/web/views/Robust.html) | Robust Statistical Methods |
| [SocialSciences](https://cran.r-project.org/web/views/SocialSciences.html) | Statistics for the Social Sciences |
| [Spatial](https://cran.r-project.org/web/views/Spatial.html) | Analysis of Spatial Data |
| [SpatioTemporal](https://cran.r-project.org/web/views/SpatioTemporal.html) | Handling and Analyzing Spatio\-Temporal Data |
| [Survival](https://cran.r-project.org/web/views/Survival.html) | Survival Analysis |
| [TeachingStatistics](https://cran.r-project.org/web/views/TeachingStatistics.html) | Teaching Statistics |
| [TimeSeries](https://cran.r-project.org/web/views/TimeSeries.html) | Time Series Analysis |
| [Tracking](https://cran.r-project.org/web/views/Tracking.html) | Processing and Analysis of Tracking Data |
| [WebTechnologies](https://cran.r-project.org/web/views/WebTechnologies.html) | Web Technologies and Services |
B.5 Further resources
---------------------
[*Advanced R*](https://adv-r.hadley.nz) is an excellent source for learning more about how **R** works (H. Wickham 2019\).
Extensive resources and documentation about **R** can be found at the Comprehensive R Archive Network (CRAN).
The **forcats** package, included in the **tidyverse**, is designed to facilitate data wrangling with factors.
More information regarding tibbles can be found at <https://tibble.tidyverse.org>.
JupyterLab and [JupyterHub](https://jupyter.org/hub) are alternative environments that support analysis via sophisticated notebooks for multiple languages including Julia, Python, and **R**.
B.6 Exercises
-------------
**Problem 1 (Easy)**: The following code chunk throws an error.
```
mtcars %>%
select(mpg, cyl)
```
```
mpg cyl
Mazda RX4 21.0 6
Mazda RX4 Wag 21.0 6
Datsun 710 22.8 4
Hornet 4 Drive 21.4 6
Hornet Sportabout 18.7 8
Valiant 18.1 6
Duster 360 14.3 8
Merc 240D 24.4 4
Merc 230 22.8 4
Merc 280 19.2 6
Merc 280C 17.8 6
Merc 450SE 16.4 8
Merc 450SL 17.3 8
Merc 450SLC 15.2 8
Cadillac Fleetwood 10.4 8
Lincoln Continental 10.4 8
Chrysler Imperial 14.7 8
Fiat 128 32.4 4
Honda Civic 30.4 4
Toyota Corolla 33.9 4
Toyota Corona 21.5 4
Dodge Challenger 15.5 8
AMC Javelin 15.2 8
Camaro Z28 13.3 8
Pontiac Firebird 19.2 8
Fiat X1-9 27.3 4
Porsche 914-2 26.0 4
Lotus Europa 30.4 4
Ford Pantera L 15.8 8
Ferrari Dino 19.7 6
Maserati Bora 15.0 8
Volvo 142E 21.4 4
```
What is the problem?
**Problem 2 (Easy)**: Which of these kinds of names should be wrapped with quotation marks when used in R?
* function name
* file name
* the name of an argument in a named argument
* object name
**Problem 3 (Easy)**: A user has typed the following commands into the RStudio console.
```
obj1 <- 2:10
obj2 <- c(2, 5)
obj3 <- c(TRUE, FALSE)
obj4 <- 42
```
What values are returned by the following commands?
```
obj1 * 10
obj1[2:4]
obj1[-3]
obj1 + obj2
obj1 * obj3
obj1 + obj4
obj2 + obj3
sum(obj2)
sum(obj3)
```
**Problem 4 (Easy)**: A user has typed the following commands into the RStudio console:
```
mylist <- list(x1 = "sally", x2 = 42, x3 = FALSE, x4 = 1:5)
```
What values do each of the following commands return?
```
is.list(mylist)
names(mylist)
length(mylist)
mylist[[2]]
mylist[["x1"]]
mylist$x2
length(mylist[["x4"]])
class(mylist)
typeof(mylist)
class(mylist[[4]])
typeof(mylist[[3]])
```
**Problem 5 (Easy)**: What’s wrong with this statement?
```
help(NHANES, package <- "NHANES")
```
**Problem 6 (Easy)**: Consult the documentation for `CPS85` in the `mosaicData` package to determine the meaning of CPS.
**Problem 7 (Easy)**: The following code chunk throws an error. Why?
```
library(tidyverse)
mtcars %>%
filter(cylinders == 4)
```
```
Error: Problem with `filter()` input `..1`.
ℹ Input `..1` is `cylinders == 4`.
x object 'cylinders' not found
```
What is the problem?
**Problem 8 (Easy)**: The `date` function returns an indication of the current time and date. What arguments does `date` take? What kind of object is the result from `date`? What kind of object is the result from `Sys.time`?
**Problem 9 (Easy)**: A user has typed the following commands into the RStudio console.
```
a <- c(10, 15)
b <- c(TRUE, FALSE)
c <- c("happy", "sad")
```
What do each of the following commands return? Describe the class of the object as well as its value.
```
data.frame(a, b, c)
cbind(a, b)
rbind(a, b)
cbind(a, b, c)
list(a, b, c)[[2]]
```
**Problem 10 (Easy)**: For each of the following assignment statements, describe the error (or note why it does not generate an error).
```
result1 <- sqrt 10
result2 <-- "Hello to you!"
3result <- "Hello to you"
result4 <- "Hello to you
result5 <- date()
```
**Problem 11 (Easy)**: The following code chunk throws an error.
```
library(tidyverse)
mtcars %>%
filter(cyl = 4)
```
```
Error: Problem with `filter()` input `..1`.
x Input `..1` is named.
ℹ This usually means that you've used `=` instead of `==`.
ℹ Did you mean `cyl == 4`?
```
The error suggests that you need to use `==` inside of `filter()`. Why?
**Problem 12 (Medium)**: The following code undertakes some data analysis using the HELP (Health Evaluation and Linkage to Primary Care) trial.
```
library(mosaic)
ds <-
read.csv("http://nhorton.people.amherst.edu/r2/datasets/helpmiss.csv")
summarise(group_by(
select(filter(mutate(ds,
sex = ifelse(female == 1, "F", "M")
), !is.na(pcs)), age, pcs, sex),
sex
), meanage = mean(age), meanpcs = mean(pcs), n = n())
```
Describe in words what computations are being done.
Using the pipe notation, translate this code into a more readable version.
**Problem 13 (Medium)**: The following concepts should have some meaning to you: package, function, command, argument, assignment, object, object name, data frame, named argument, quoted character string.
Construct an example of R commands
that make use of at least four of these. Label which part of your example R command corresponds to each.
B.7 Supplementary exercises
---------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/appR.html\#datavizI\-online\-exercises](https://mdsr-book.github.io/mdsr2e/appR.html#datavizI-online-exercises)
C Algorithmic thinking
======================
C.1 Introduction
----------------
Algorithmic thinking can be defined as a set of abilities that
are related to constructing and understanding algorithms (Futschek 2006\):
1. the ability to analyze a given problem
2. the ability to precisely specify a problem
3. the ability to find the basic actions that are adequate to solve a problem
4. the ability to construct a correct algorithm to a given problem using basic
actions
5. the ability to think about all possible special and normal cases of a problem
6. the ability to improve the efficiency of an algorithm
These important capacities are a necessary but not sufficient component of “computational thinking” and data science.
It is critical that data scientists have the skills to break problems down and code solutions
in a flexible and powerful computing environment using [*functions*](https://en.wikipedia.org/w/index.php?search=functions).
We focus on the use of **R** for this task (although other environments such as Python have many adherents and virtues).
In this appendix, we presume a basic background in **R** to the level of Appendix [B](ch-R.html#ch:R).
C.2 Simple example
------------------
We begin with an example that creates a simple function to complete a statistical task (calculate a confidence interval for an estimate).
In **R**, a new [*function*](https://en.wikipedia.org/w/index.php?search=function) is defined by the syntax shown below, using the keyword `function`.
This creates a new object in **R** called `new_function()` in the workspace that takes two arguments (`argument1` and `argument2`).
The body is made up of a series of commands (or expressions), typically separated by line breaks and enclosed in curly braces.
```
library(tidyverse)
library(mdsr)
new_function <- function(argument1, argument2) {
R expression
another R expression
}
```
Here, we create a function to calculate the estimated confidence interval (CI) for a mean,
using the formula \\(\\bar{X} \\pm t^\* s / \\sqrt{n}\\), where \\(t^\*\\) is the appropriate t\-value
for that particular confidence level.
As an example, for a 95% interval with 50 degrees
of freedom (equivalent to \\(n\=51\\) observations) the appropriate value of \\(t^\*\\) can be calculated using the `cdist()` function from the **mosaic** package.
This computes the quantiles of the t\-distribution between which 95% of the distribution lies.
A graphical illustration is shown in Figure [C.1](ch-function.html#fig:xqt).
```
library(mosaic)
mosaic::cdist(dist = "t", p = 0.95, df = 50)
```
```
[1] -2.01 2.01
```
```
mosaic::xqt(p = c(0.025, 0.975), df = 50)
```
Figure C.1: Illustration of the location of the critical value for a 95% confidence interval for a mean. The critical value of 2\.01 corresponds to the location in the t\-distribution with 50 degrees of freedom, for which 2\.5% of the distribution lies above it.
```
[1] -2.01 2.01
```
We see that the value is slightly larger than 2\.
Note that since by construction our confidence interval will be centered around the mean, we want the critical value that corresponds to having 95% of the distribution in the middle.
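Equivalently, the critical value can be computed directly with the quantile function `qt()` (which the `ci_calc()` function defined below also uses):
```
qt(0.975, df = 50) # upper 2.5% cutoff of the t-distribution with 50 df, about 2.01
```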
We will write a function to compute a t\-based confidence interval for a mean from scratch.
We’ll call this function `ci_calc()`, and it will take a numeric vector `x` as its first argument, and an optional second argument `alpha`, which will have a default value of `0.95`.
```
# calculate a t confidence interval for a mean
ci_calc <- function(x, alpha = 0.95) {
samp_size <- length(x)
t_star <- qt(1 - ((1 - alpha)/2), df = samp_size - 1)
my_mean <- mean(x)
my_sd <- sd(x)
se <- my_sd/sqrt(samp_size)
me <- t_star * se
return(
list(
ci_vals = c(my_mean - me, my_mean + me),
alpha = alpha
)
)
}
```
Here the appropriate quantile of the t\-distribution is calculated using the `qt()` function, and the appropriate confidence interval is calculated and returned as a list.
In this example, we explicitly `return()` a `list` of values.
If no return statement is provided, the result of the last expression
evaluation is returned by default.
The tidyverse style guide (see Section [6\.3](ch-dataII.html#sec:naming)) encourages that default, but we prefer to make the return explicit.
The function has been stored in the object `ci_calc()`.
Once created, it
can be used like any other built\-in function.
For example, the expression below will print the CI and confidence level for the object `x1` (a set of 100 random normal variables with mean 0 and standard deviation 1\).
```
x1 <- rnorm(100, mean = 0, sd = 1)
ci_calc(x1)
```
```
$ci_vals
[1] -0.0867 0.2933
$alpha
[1] 0.95
```
The order of arguments in **R** matters if arguments to a function are not named.
When a function is called, the arguments are assumed to correspond to the order of the arguments as the function is defined.
To see that order, check the documentation, use the `args()` function, or look at the code of the function itself.
```
?ci_calc # won't work because we haven't written any documentation
args(ci_calc)
ci_calc
```
Consider creating an **R** package for commonly used functions that you develop so that
they can be more easily documented, tested, and reused.
Since we provided only one unnamed argument (`x1`), **R** passed the value `x1` to the argument `x` of `ci_calc()`.
Since we did not specify a value for the `alpha` argument, the default value of `0.95` was used.
User\-defined functions nest just as pre\-existing functions do.
The expression below will return the CI and report that the confidence limit is `0.9` for 100 normal random variates.
```
ci_calc(rnorm(100), 0.9)
```
To change the confidence level, we need only change the `alpha` option by specifying it as a [*named argument*](https://en.wikipedia.org/w/index.php?search=named%20argument).
```
ci_calc(x1, alpha = 0.90)
```
```
$ci_vals
[1] -0.0557 0.2623
$alpha
[1] 0.9
```
The output is equivalent to running the command `ci_calc(x1, 0.90)` with two unnamed arguments, where the arguments are matched in order.
Perhaps less intuitive but equivalent would be the following call.
```
ci_calc(alpha = 0.90, x = x1)
```
```
$ci_vals
[1] -0.0557 0.2623
$alpha
[1] 0.9
```
The key take\-home message is that the order of arguments is not important *if all of the arguments are named*.
Using the pipe operator introduced in Chapter [4](ch-dataI.html#ch:dataI) can avoid nesting.
```
rnorm(100, mean = 0, sd = 1) %>%
ci_calc(alpha = 0.9)
```
```
$ci_vals
[1] -0.0175 0.2741
$alpha
[1] 0.9
```
The **testthat** package can help to improve your functions by writing testing routines to check that the function does what you expect it to.
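As a minimal sketch of such a test (the test name and expectations here are illustrative assumptions, not from the book):
```
library(testthat)
test_that("ci_calc() returns a sensible interval", {
  res <- ci_calc(1:10)
  expect_type(res, "list") # the function returns a list
  expect_length(res$ci_vals, 2) # with a lower and an upper limit
  expect_lt(res$ci_vals[1], res$ci_vals[2]) # lower limit is below upper limit
  expect_equal(res$alpha, 0.95) # the default confidence level is carried along
})
```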
C.3 Extended example: Law of large numbers
------------------------------------------
The [*law of large numbers*](https://en.wikipedia.org/w/index.php?search=law%20of%20large%20numbers) concerns the convergence of the arithmetic average of a sample to the expected value of a random variable, as the sample size increases.
What this means is that with a sufficiently large unbiased sample, we can be pretty confident about the true mean.
This is an important result in statistics, described in Section [9\.2\.1](ch-foundations.html#sec:lln).
The convergence (or lack thereof, for certain distributions) can easily be visualized.
We define a function to calculate the running average for a given vector, allowing for variates from many distributions to be generated.
```
runave <- function(n, gendist, ...) {
x <- gendist(n, ...)
avex <- numeric(n)
for (k in 1:n) {
avex[k] <- mean(x[1:k])
}
return(tibble(x, avex, n = 1:length(avex)))
}
```
The `runave()` function takes at a minimum two arguments: a sample size `n` and a function (see [B.3\.8](ch-R.html#sec:func)) denoted by `gendist` that is used to generate samples from a distribution.
Note that there are more efficient ways to write this function using vector operations (see for example the `cummean()` function).
Other options for the function can be specified, using the `...` (dots) syntax.
This syntax allows additional options to be provided to functions that might be called downstream.
For example, the dots are used to specify the degrees of freedom for the samples generated for the t\-distribution in the next code block.
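As noted above, the explicit loop can be avoided with vectorized operations. Here is a minimal sketch (the name `runave_vec()` is ours, not from the book; `dplyr::cummean()` would work equally well in place of the `cumsum()` arithmetic):
```
runave_vec <- function(n, gendist, ...) {
  x <- gendist(n, ...)
  # cumulative sum divided by the index gives the running mean without a loop
  tibble(x, avex = cumsum(x) / seq_along(x), n = seq_along(x))
}
```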
The Cauchy distribution is symmetric and has heavy tails.
It is useful to know that because the expectation of a Cauchy random variable is undefined (Romano and Siegel 1986\), the sample average does not converge to the center (see related discussion in Section [9\.2\.1](ch-foundations.html#sec:lln)).
The variance of a Cauchy random variable is also infinite (does not exist).
Such a distribution arises when ratios are calculated.
Conversely, a t\-distribution with more than 1 degree of freedom (a distribution with less of a heavy tail) does converge to the center.
For comparison, the two distributions are displayed in Figure [C.2](ch-function.html#fig:cauchyt).
```
mosaic::plotDist(
"t",
params = list(df = 4),
xlim = c(-5, 5),
lty = 2,
lwd = 3
)
mosaic::plotDist("cauchy", xlim = c(-10, 10), lwd = 3, add = TRUE)
```
Figure C.2: Cauchy distribution (solid line) and t\-distribution with 4 degrees of freedom (dashed line).
To make sure we can replicate our results for this simulation, we first set a fixed seed (see Section [13\.6\.3](ch-simulation.html#sec:seed)).
Next, we generate some data, using our new `runave()` function.
```
nvals <- 1000
set.seed(1984)
sims <- bind_rows(
runave(nvals, rt, 4),
runave(nvals, rcauchy)
) %>%
mutate(dist = rep(c("t4", "cauchy"), each = nvals))
```
In this example, the value `4` is provided to the `rt()` function using the `...` mechanism. This is used to specify the `df` argument to `rt()`.
The results are plotted in Figure [C.3](ch-function.html#fig:t4).
While the running average of the t\-distribution converges to the true mean of zero, the running average of the Cauchy distribution does not.
```
ggplot(
data = sims,
aes(x = n, y = avex, color = dist)
) +
geom_hline(yintercept = 0, color = "black", linetype = 2) +
geom_line() +
geom_point() +
labs(color = "Distribution", y = "running mean", x = "sample size") +
  xlim(c(0, 600))
```
Figure C.3: Running average for t\-distribution with 4 degrees of freedom and a Cauchy random variable (equivalent to a t\-distribution with 1 degree of freedom). Note that while the former converges, the latter does not.
C.4 Non\-standard evaluation
----------------------------
When evaluating expressions, **R** searches for objects in an [*environment*](https://en.wikipedia.org/w/index.php?search=environment). The most general environment is the [*global environment*](https://en.wikipedia.org/w/index.php?search=global%20environment), the contents of which are displayed in the environment tab in **RStudio** or through the `ls()` command. When you try to access an object that cannot be found in the global environment, you get an error.
We will use a subset of the `NHANES` data frame from the **NHANES** package to illustrate a few of these subtleties. This data frame has a variety of data types.
```
library(NHANES)
nhanes_small <- NHANES %>%
select(ID, SurveyYr, Gender, Age, AgeMonths, Race1, Poverty)
glimpse(nhanes_small)
```
```
Rows: 10,000
Columns: 7
$ ID <int> 51624, 51624, 51624, 51625, 51630, 51638, 51646, 51647, …
$ SurveyYr <fct> 2009_10, 2009_10, 2009_10, 2009_10, 2009_10, 2009_10, 20…
$ Gender <fct> male, male, male, male, female, male, male, female, fema…
$ Age <int> 34, 34, 34, 4, 49, 9, 8, 45, 45, 45, 66, 58, 54, 10, 58,…
$ AgeMonths <int> 409, 409, 409, 49, 596, 115, 101, 541, 541, 541, 795, 70…
$ Race1 <fct> White, White, White, Other, White, White, White, White, …
$ Poverty <dbl> 1.36, 1.36, 1.36, 1.07, 1.91, 1.84, 2.33, 5.00, 5.00, 5.…
```
Consider the differences between trying to access the `ID` variable each of the three ways shown below. In the first case, we are simply creating a character vector of length one that contains the single string `ID`. The second command causes **R** to search the global environment for an object called `ID`—which does not exist. In the third command, we correctly access the `ID` variable within the `nhanes_small` data frame, which *is* accessible in the global environment. These are different examples of how **R** uses [*scoping*](https://en.wikipedia.org/w/index.php?search=scoping) to identify objects.
```
"ID" # string variable
```
```
[1] "ID"
```
```
ID # generates an error
```
```
Error in eval(expr, envir, enclos): object 'ID' not found
```
```
nhanes_small %>%
pull(ID) %>% # access within a data frame
summary()
```
```
Min. 1st Qu. Median Mean 3rd Qu. Max.
51624 56904 62160 61945 67039 71915
```
How might this be relevant?
Notice that several of the variables in `nhanes_small` are factors. We might want to convert each of them to type `character`. Typically, we would do this using the `mutate()` command that we introduced in Chapter [4](ch-dataI.html#ch:dataI).
```
nhanes_small %>%
mutate(SurveyYr = as.character(SurveyYr)) %>%
select(ID, SurveyYr) %>%
glimpse()
```
```
Rows: 10,000
Columns: 2
$ ID <int> 51624, 51624, 51624, 51625, 51630, 51638, 51646, 51647, 5…
$ SurveyYr <chr> "2009_10", "2009_10", "2009_10", "2009_10", "2009_10", "2…
```
Note however, that in this construction we have to know the name of the variable we wish to convert (i.e., `SurveyYr`) and list it explicitly. This is unfortunate if the goal is to automate
our data wrangling (see Chapter [7](ch-iteration.html#ch:iteration)).
If we tried instead to set the name of the column (i.e., `SurveyYr`) to a variable (i.e., `varname`) and use that variable to change the names, it would not work as intended.
In this case, rather than changing the data type of `SurveyYr`, we have created a new variable called `varname` in which every entry is the character string `"SurveyYr"`.
```
varname <- "SurveyYr"
nhanes_small %>%
mutate(varname = as.character(varname)) %>%
select(ID, SurveyYr, varname) %>%
glimpse()
```
```
Rows: 10,000
Columns: 3
$ ID <int> 51624, 51624, 51624, 51625, 51630, 51638, 51646, 51647, 5…
$ SurveyYr <fct> 2009_10, 2009_10, 2009_10, 2009_10, 2009_10, 2009_10, 200…
$ varname <chr> "SurveyYr", "SurveyYr", "SurveyYr", "SurveyYr", "SurveyYr…
```
This behavior is a consequence of a feature of the **R** language called [*non\-standard evaluation*](https://en.wikipedia.org/w/index.php?search=non-standard%20evaluation) (NSE). The **rlang** package provides a principled way to work with expressions in the **tidyverse** and is used extensively in the **dplyr** package.
The **dplyr** functions use a form of non\-standard evaluation called [*tidy evaluation*](https://en.wikipedia.org/w/index.php?search=tidy%20evaluation).
Tidy evaluation allows **R** to locate `SurveyYr`, even though there is no object called `SurveyYr` in the global environment. Here, `mutate()` knows to look for `SurveyYr` within the `nhanes_small` data frame.
In this case, we can solve our problem using the `across()` and `where()` adverbs and `mutate()`.
Namely, we can use `is.factor()` in conjunction with `across()` to find all of the variables that are factors and convert them to `character` using `as.character()`.
```
nhanes_small %>%
mutate(across(where(is.factor), as.character))
```
```
# A tibble: 10,000 × 7
ID SurveyYr Gender Age AgeMonths Race1 Poverty
<int> <chr> <chr> <int> <int> <chr> <dbl>
1 51624 2009_10 male 34 409 White 1.36
2 51624 2009_10 male 34 409 White 1.36
3 51624 2009_10 male 34 409 White 1.36
4 51625 2009_10 male 4 49 Other 1.07
5 51630 2009_10 female 49 596 White 1.91
6 51638 2009_10 male 9 115 White 1.84
7 51646 2009_10 male 8 101 White 2.33
8 51647 2009_10 female 45 541 White 5
9 51647 2009_10 female 45 541 White 5
10 51647 2009_10 female 45 541 White 5
# … with 9,990 more rows
```
When you come to a problem that involves programming with **tidyverse** functions that you aren’t sure how to solve, consider these approaches:
1. Use `across()` and/or `where()`: This approach is outlined above and in Section [7\.2](ch-iteration.html#sec:scoped). If this suffices, it will usually be the cleanest solution.
2. Pass the dots: Sometimes, you can avoid having to hard\-code variable names by passing the dots through your function to another **tidyverse** function. For example, you might allow your function to take an arbitrary list of arguments that are passed directly to `select()` or `filter()`.
3. Use tidy evaluation: A full discussion of this is beyond the scope of this book, but we provide pointers to more material in Section [C.6](ch-function.html#sec:algorithmic-further); a brief sketch follows the base **R** example below.
4. Use base **R** syntax: If you are comfortable working in this dialect, the code can be relatively simple, but it has two obstacles for **tidyverse** learners: 1\) the use of selectors like `[[` may be jarring and break pipelines; and 2\) the quotation marks around variable names can get cumbersome. We show a simple example below.
#### A base **R** implementation
In base **R**, you can do this:
```
var_to_char_base <- function(data, varname) {
data[[varname]] <- as.character(data[[varname]])
data
}
var_to_char_base(nhanes_small, "SurveyYr")
```
```
# A tibble: 10,000 × 7
ID SurveyYr Gender Age AgeMonths Race1 Poverty
<int> <chr> <fct> <int> <int> <fct> <dbl>
1 51624 2009_10 male 34 409 White 1.36
2 51624 2009_10 male 34 409 White 1.36
3 51624 2009_10 male 34 409 White 1.36
4 51625 2009_10 male 4 49 Other 1.07
5 51630 2009_10 female 49 596 White 1.91
6 51638 2009_10 male 9 115 White 1.84
7 51646 2009_10 male 8 101 White 2.33
8 51647 2009_10 female 45 541 White 5
9 51647 2009_10 female 45 541 White 5
10 51647 2009_10 female 45 541 White 5
# … with 9,990 more rows
```
Note that the variable names have quotes when they are put into the function.
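For comparison, here is a minimal sketch of the tidy evaluation approach mentioned in item 3 above. The helper name `var_to_char_tidy()` is ours; it takes the column name as a string, uses the `.data` pronoun to look the column up inside the data frame, and uses `!!` with `:=` to name the result (this assumes a reasonably recent version of **rlang** and **dplyr**):
```
var_to_char_tidy <- function(data, varname) {
  data %>%
    mutate(!!varname := as.character(.data[[varname]]))
}
var_to_char_tidy(nhanes_small, "SurveyYr")
```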
C.5 Debugging and defensive coding
----------------------------------
It can be challenging to identify issues and problems with code that is not working.
Both **R** and **RStudio** include support for debugging functions and code.
Calling the `browser()` function
in the body of a function will cause execution to stop and set up an **R** interpreter.
Once at the browser prompt, the analyst can enter commands such as `c` to continue execution, `f` to finish execution of the current function, `n` to evaluate the next statement (without stepping into function calls), `s` to evaluate the next statement (stepping into function calls), `Q` to exit the browser, or `help` to print this list of commands.
Anything else entered at the browser prompt is interpreted as an **R** expression to be evaluated (the function `ls()` lists available objects).
Calls to the browser can be set using the `debug()` or `debugonce()` functions (and turned off using the `undebug()` function).
**RStudio** includes a debugging mode that is displayed when `debug()` is called.
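A minimal sketch (assuming the `ci_calc()` function and the `x1` vector defined earlier in this appendix are still present in the session):
```
debugonce(ci_calc) # the next call to ci_calc() will drop into the browser
ci_calc(x1) # execution pauses at the first expression of the function body
```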
Adopting [*defensive coding*](https://en.wikipedia.org/w/index.php?search=defensive%20coding) techniques is always recommended: They tend to identify problems early and minimize errors.
The `try()` function can be used to evaluate an
expression while allowing for error recovery.
The `stop()` function can be used to stop evaluation of the current
expression and execute an error action (typically displaying an error message).
More flexible testing is available in the **assertthat** package.
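For instance, a small sketch of `try()` for error recovery (the call to `log("a")` is just a stand\-in for a step that might fail):
```
res <- try(log("a"), silent = TRUE) # log() of a character string throws an error
if (inherits(res, "try-error")) {
  message("the computation failed; falling back to NA")
  res <- NA
}
```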
Let’s revisit the `ci_calc()` function we defined to calculate a confidence interval. How might we make this more robust?
We can begin by confirming that the calling arguments are sensible.
```
library(assertthat)
# calculate a t confidence interval for a mean
ci_calc <- function(x, alpha = 0.95) {
if (length(x) < 2) {
stop("Need to provide a vector of length at least 2.\n")
}
if (alpha < 0 | alpha > 1) {
stop("alpha must be between 0 and 1.\n")
}
assert_that(is.numeric(x))
samp_size <- length(x)
t_star <- qt(1 - ((1 - alpha)/2), df = samp_size - 1)
my_mean <- mean(x)
my_sd <- sd(x)
se <- my_sd / sqrt(samp_size)
me <- t_star * se
return(list(ci_vals = c(my_mean - me, my_mean + me),
alpha = alpha))
}
ci_calc(1) # will generate error
```
```
Error in ci_calc(1): Need to provide a vector of length at least 2.
```
```
ci_calc(1:3, alpha = -1) # will generate error
```
```
Error in ci_calc(1:3, alpha = -1): alpha must be between 0 and 1.
```
```
ci_calc(c("hello", "goodbye")) # will generate error
```
```
Error: x is not a numeric or integer vector
```
C.6 Further resources
---------------------
More examples of functions can be found in Chapter [13](ch-simulation.html#ch:simulation).
The American Statistical Association’s *Guidelines for Undergraduate Programs in Statistics* (American Statistical Association Undergraduate Guidelines Workgroup 2014\) stress the importance of algorithmic thinking (see also Deborah Nolan and Temple Lang (2010\)).
Rizzo (2019\) and H. Wickham (2019\) provide
useful reviews of statistical computing.
A variety of online resources are available to describe how to create **R** packages and to deploy them on GitHub (see for example <http://kbroman.org/pkg_primer>).
Hadley Wickham (2015\) is a comprehensive and accessible guide to writing **R** packages.
The **testthat** package is helpful in
structuring more extensive unit tests for functions.
The **dplyr** package documentation includes a vignette detailing its use of the [**lazyeval** package](https://cran.r-project.org/web/packages/lazyeval/vignettes/lazyeval.html) for performing [non\-standard evaluation](https://cran.r-project.org/web/packages/dplyr/vignettes/nse.html). Henry (2020\) explains the [most recent developments surrounding tidy evaluation](https://rstudio.com/resources/rstudioconf-2020/interactivity-and-programming-in-the-tidyverse/).
H. Wickham (2019\) includes [a fuller discussion of passing the dots](https://adv-r.hadley.nz/functions.html#fun-dot-dot-dot).
C.7 Exercises
-------------
**Problem 1 (Easy)**: Consider the following function definition, and subsequent call of that function.
```
library(tidyverse)
summarize_species <- function(pattern = "Human") {
x <- starwars %>%
filter(species == pattern) %>%
summarize(
num_people = n(),
avg_height = mean(height, na.rm = TRUE)
)
}
summarize_species("Wookiee")
x
```
```
Error in eval(expr, envir, enclos): object 'x' not found
```
What causes this error?
**Problem 2 (Medium)**: Write a function called `count_name` that, when given a name as an argument, returns the total number of births by year from the `babynames` data frame in the `babynames` package that match that name. The function should return one row per year that matches (and generate an error message if there are no matches).
Run the function once with the argument `Ezekiel` and once with `Ezze`.
**Problem 3 (Medium)**:
1. Write a function called `count_na` that, when given a vector as an argument, will count the number of `NA's` in that vector.
Count the number of missing values in the `SEXRISK` variable in the `HELPfull` data frame in the `mosaicData` package.
2. Apply `count_na` to the columns of the `Teams` data frame from the `Lahman` package.
How many of the columns have missing data?
**Problem 4 (Medium)**: Write a function called `map_negative` that takes as arguments a data frame and the
name of a variable and returns that data frame with the negative values of the variable
replaced by zeroes. Apply this function to the `cyl` variable in the `mtcars` data set.
**Problem 5 (Medium)**: Write a function called `grab_name` that, when given a name and a year as an argument, returns the rows from the `babynames` data frame in the `babynames` package that match that name for that year (and returns an error if that name and year combination does not match any rows).
Run the function once with the arguments `Ezekiel` and `1883` and once with `Ezekiel` and `1983`.
**Problem 6 (Medium)**: Write a function called `prop_cancel` that takes as arguments a month number
and destination airport and returns the proportion of flights missing arrival delay for each day to that destination.
Apply this function to the `nycflights13` package for February and Atlanta airport `ATL` and again with an invalid month number.
**Problem 7 (Medium)**: Write a function called `cum_min()` that, when given a vector as an
argument, returns the cumulative minimum of that vector. Compare the result of your function
to the built\-in `cummin()` function for the vector `c(4, 7, 9, -2, 12)`.
**Problem 8 (Hard)**: Benford’s law concerns the frequency distribution of leading digits from numerical data.
Write a function that takes a vector of numbers and returns the empirical distribution
of the first digit. Apply this function to data from the `corporate.payment` data set in the `benford.analysis` package.
C.8 Supplementary exercises
---------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-algorithmic.html\#algorithmic\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-algorithmic.html#algorithmic-online-exercises)
**Problem 1 (Easy)**: Consider the following function definition, and subsequent call of that function.
```
library(tidyverse)
summarize_species <- function(data, pattern = "Human") {
data %>%
filter(species == pattern) %>%
summarize(
num_people = n(),
avg_height = mean(height, na.rm = TRUE)
)
}
summarize_species("Wookiee", starwars)
```
```
Error in UseMethod("filter"): no applicable method for 'filter' applied to an object of class "character"
```
How could we modify the call to the function to make this work?
**Problem 2 (Easy)**: Consider the following function definition, and subsequent call of that function.
```
library(tidyverse)
summarize_species <- function(data, pattern = "Human") {
data %>%
filter(species == pattern) %>%
summarize(
num_people = n(),
avg_height = mean(height, na.rm = TRUE)
)
}
summarize_species("Wookiee")
```
What errors are there in this function definition and call?
**Problem 3 (Medium)**:
1. Write a function called `group_consecutive()` that takes a vector of integers as input and returns a character string where the integers are sorted and consecutive values are listed with a `-` and other values separated by commas.
2. Use the function and the `Lahman` package to concisely list the seasons in which Jim Bouton (author of *Ball Four*) played major league baseball.
---
C.1 Introduction
----------------
Algorithmic thinking can be defined as a set of abilities that
are related to constructing and understanding algorithms (Futschek 2006\):
1. the ability to analyze a given problem
2. the ability to precisely specify a problem
3. the ability to find the basic actions that are adequate to solve a problem
4. the ability to construct a correct algorithm to a given problem using basic
actions
5. the ability to think about all possible special and normal cases of a problem
6. the ability to improve the efficiency of an algorithm
These important capacities are a necessary but not sufficient component of “computational thinking” and data science.
It is critical that data scientists have the skills to break problems down and code solutions
in a flexible and powerful computing environment using [*functions*](https://en.wikipedia.org/w/index.php?search=functions).
We focus on the use of **R** for this task (although other environments such as Python have many adherents and virtues).
In this appendix, we presume a basic background in **R** to the level of Appendix [B](ch-R.html#ch:R).
C.2 Simple example
------------------
We begin with an example that creates a simple function to complete a statistical task (calculate a confidence interval for an estimate).
In **R**, a new [*function*](https://en.wikipedia.org/w/index.php?search=function) is defined by the syntax shown below, using the keyword `function`.
This creates a new object in **R** called `new_function()` in the workspace that takes two arguments (`argument1` and `argument2`).
The body is made up of a series of commands (or expressions), typically separated by line breaks and enclosed in curly braces.
```
library(tidyverse)
library(mdsr)
new_function <- function(argument1, argument2) {
R expression
another R expression
}
```
Here, we create a function to calculate the estimated confidence interval (CI) for a mean,
using the formula \\(\\bar{X} \\pm t^\* s / \\sqrt{n}\\), where \\(t^\*\\) is the appropriate t\-value
for that particular confidence level.
As an example, for a 95% interval with 50 degrees
of freedom (equivalent to \\(n\=51\\) observations) the appropriate value of \\(t^\*\\) can be calculated using the `cdist()` function from the **mosaic** package.
This computes the quantiles of the t\-distribution between which 95% of the distribution lies.
A graphical illustration is shown in Figure [C.1](ch-function.html#fig:xqt).
```
library(mosaic)
mosaic::cdist(dist = "t", p = 0.95, df = 50)
```
```
[1] -2.01 2.01
```
```
mosaic::xqt(p = c(0.025, 0.975), df = 50)
```
Figure C.1: Illustration of the location of the critical value for a 95% confidence interval for a mean. The critical value of 2\.01 corresponds to the location in the t\-distribution with 50 degrees of freedom, for which 2\.5% of the distribution lies above it.
```
[1] -2.01 2.01
```
We see that the value is slightly larger than 2\.
Note that since by construction our confidence interval will be centered around the mean, we want the critical value that corresponds to having 95% of the distribution in the middle.
We will write a function to compute a t\-based confidence interval for a mean from scratch.
We’ll call this function `ci_calc()`, and it will take a numeric vector `x` as its first argument, and an optional second argument `alpha`, which will have a default value of `0.95`.
```
# calculate a t confidence interval for a mean
ci_calc <- function(x, alpha = 0.95) {
samp_size <- length(x)
t_star <- qt(1 - ((1 - alpha)/2), df = samp_size - 1)
my_mean <- mean(x)
my_sd <- sd(x)
se <- my_sd/sqrt(samp_size)
me <- t_star * se
return(
list(
ci_vals = c(my_mean - me, my_mean + me),
alpha = alpha
)
)
}
```
Here the appropriate quantile of the t\-distribution is calculated using the `qt()` function, and the appropriate confidence interval is calculated and returned as a list.
In this example, we explicitly `return()` a `list` of values.
If no return statement is provided, the result of the last expression
evaluation is returned by default.
The tidyverse style guide (see Section [6\.3](ch-dataII.html#sec:naming)) encourages that default, but we prefer to make the return explicit.
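For instance, a toy sketch (not from the text; the name `add_one()` is ours) shows the implicit-return behavior that the style guide prefers:
```
# implicit return: the value of the last evaluated expression is returned
add_one <- function(x) {
  x + 1
}
add_one(2) # returns 3
```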
The function has been stored in the object `ci_calc()`.
Once created, it
can be used like any other built\-in function.
For example, the expression below will print the CI and confidence level for the object `x1` (a set of 100 random normal variables with mean 0 and standard deviation 1\).
```
x1 <- rnorm(100, mean = 0, sd = 1)
ci_calc(x1)
```
```
$ci_vals
[1] -0.0867 0.2933
$alpha
[1] 0.95
```
The order of arguments in **R** matters if arguments to a function are not named.
When a function is called, the arguments are assumed to correspond to the order of the arguments as the function is defined.
To see that order, check the documentation, use the `args()` function, or look at the code of the function itself.
```
?ci_calc # won't work because we haven't written any documentation
args(ci_calc)
ci_calc
```
Consider creating an **R** package for commonly used functions that you develop so that
they can be more easily documented, tested, and reused.
Since we provided only one unnamed argument (`x1`), **R** passed the value `x1` to the argument `x` of `ci_calc()`.
Since we did not specify a value for the `alpha` argument, the default value of `0.95` was used.
User\-defined functions nest just as pre\-existing functions do.
The expression below will return the CI and report that the confidence level is `0.9` for 100 normal random variates.
```
ci_calc(rnorm(100), 0.9)
```
To change the confidence level, we need only change the `alpha` option by specifying it as a [*named argument*](https://en.wikipedia.org/w/index.php?search=named%20argument).
```
ci_calc(x1, alpha = 0.90)
```
```
$ci_vals
[1] -0.0557 0.2623
$alpha
[1] 0.9
```
The output is equivalent to running the command `ci_calc(x1, 0.90)` with two unnamed arguments, where the arguments are matched in order.
Perhaps less intuitive but equivalent would be the following call.
```
ci_calc(alpha = 0.90, x = x1)
```
```
$ci_vals
[1] -0.0557 0.2623
$alpha
[1] 0.9
```
The key take\-home message is that the order of arguments is not important *if all of the arguments are named*.
Using the pipe operator introduced in Chapter [4](ch-dataI.html#ch:dataI) can avoid nesting.
```
rnorm(100, mean = 0, sd = 1) %>%
ci_calc(alpha = 0.9)
```
```
$ci_vals
[1] -0.0175 0.2741
$alpha
[1] 0.9
```
The **testthat** package can help to improve your functions by writing testing routines to check that the function does what you expect it to.
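For instance, a minimal sketch of a unit test (assuming the `ci_calc()` definition above; the specific expectations are ours) might look like this:
```
library(testthat)
test_that("ci_calc() returns a sensible interval", {
  res <- ci_calc(1:10)
  expect_equal(res$alpha, 0.95)               # default confidence level
  expect_length(res$ci_vals, 2)               # lower and upper limits
  expect_equal(mean(res$ci_vals), mean(1:10)) # interval is centered at the sample mean
})
```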
C.3 Extended example: Law of large numbers
------------------------------------------
The [*law of large numbers*](https://en.wikipedia.org/w/index.php?search=law%20of%20large%20numbers) concerns the convergence of the arithmetic average of a sample to the expected value of a random variable, as the sample size increases.
What this means is that with a sufficiently large unbiased sample, we can be pretty confident about the true mean.
This is an important result in statistics, described in Section [9\.2\.1](ch-foundations.html#sec:lln).
The convergence (or lack thereof, for certain distributions) can easily be visualized.
We define a function to calculate the running average for a given vector, allowing for variates from many distributions to be generated.
```
runave <- function(n, gendist, ...) {
x <- gendist(n, ...)
avex <- numeric(n)
for (k in 1:n) {
avex[k] <- mean(x[1:k])
}
return(tibble(x, avex, n = 1:length(avex)))
}
```
The `runave()` function takes at a minimum two arguments: a sample size `n` and a function (see [B.3\.8](ch-R.html#sec:func)), denoted by `gendist`, that is used to generate samples from a distribution.
Note that there are more efficient ways to write this function using vector operations (see for example the `cummean()` function).
Other options for the function can be specified, using the `...` (dots) syntax.
This syntax allows additional options to be provided to functions that might be called downstream.
For example, the dots are used to specify the degrees of freedom for the samples generated for the t\-distribution in the next code block.
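As noted above, `cummean()` avoids the explicit loop; a vectorized sketch (ours, not from the text; the name `runave_vec()` is hypothetical) might be:
```
# same interface as runave(), but the running mean is computed in one call
runave_vec <- function(n, gendist, ...) {
  x <- gendist(n, ...)
  tibble(x = x, avex = cummean(x), n = 1:n)
}
```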
The Cauchy distribution is symmetric and has heavy tails.
It is useful to know that because the expectation of a Cauchy random variable is undefined (Romano and Siegel 1986\), the sample average does not converge to the center (see related discussion in Section [9\.2\.1](ch-foundations.html#sec:lln)).
The variance of a Cauchy random variable is also infinite (does not exist).
Such a distribution arises when ratios are calculated.
Conversely, a t\-distribution with more than 1 degree of freedom (a distribution with less of a heavy tail) does converge to the center.
For comparison, the two distributions are displayed in Figure [C.2](ch-function.html#fig:cauchyt).
```
mosaic::plotDist(
"t",
params = list(df = 4),
xlim = c(-5, 5),
lty = 2,
lwd = 3
)
mosaic::plotDist("cauchy", xlim = c(-10, 10), lwd = 3, add = TRUE)
```
Figure C.2: Cauchy distribution (solid line) and t\-distribution with 4 degrees of freedom (dashed line).
To make sure we can replicate our results for this simulation, we first set a fixed seed (see Section [13\.6\.3](ch-simulation.html#sec:seed)).
Next, we generate some data, using our new `runave()` function.
```
nvals <- 1000
set.seed(1984)
sims <- bind_rows(
runave(nvals, rt, 4),
runave(nvals, rcauchy)
) %>%
mutate(dist = rep(c("t4", "cauchy"), each = nvals))
```
In this example, the value `4` is provided to the `rt()` function using the `...` mechanism. This is used to specify the `df` argument to `rt()`.
The results are plotted in Figure [C.3](ch-function.html#fig:t4).
While the running average of the t\-distribution converges to the true mean of zero, the running average of the Cauchy distribution does not.
```
ggplot(
data = sims,
aes(x = n, y = avex, color = dist)
) +
geom_hline(yintercept = 0, color = "black", linetype = 2) +
geom_line() +
geom_point() +
labs(color = "Distribution", y = "running mean", x = "sample size") +
  xlim(c(0, 600))
```
Figure C.3: Running average for t\-distribution with 4 degrees of freedom and a Cauchy random variable (equivalent to a t\-distribution with 1 degree of freedom). Note that while the former converges, the latter does not.
C.4 Non\-standard evaluation
----------------------------
When evaluating expressions, **R** searches for objects in an [*environment*](https://en.wikipedia.org/w/index.php?search=environment). The most general environment is the [*global environment*](https://en.wikipedia.org/w/index.php?search=global%20environment), the contents of which are displayed in the environment tab in **RStudio** or through the `ls()` command. When you try to access an object that cannot be found in the global environment, you get an error.
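For example (a small illustration of scoping that is not in the text; `exists()` is a base **R** function we introduce here, and the object name `x` is arbitrary):
```
x <- 5
ls()                    # "x" now appears among the objects in the global environment
exists("x")             # TRUE
exists("undefined_obj") # FALSE: not found in the global environment or its parents
rm(x)
```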
We will use a subset of the `NHANES` data frame from the **NHANES** package to illustrate a few of these subtleties. This data frame has a variety of data types.
```
library(NHANES)
nhanes_small <- NHANES %>%
select(ID, SurveyYr, Gender, Age, AgeMonths, Race1, Poverty)
glimpse(nhanes_small)
```
```
Rows: 10,000
Columns: 7
$ ID <int> 51624, 51624, 51624, 51625, 51630, 51638, 51646, 51647, …
$ SurveyYr <fct> 2009_10, 2009_10, 2009_10, 2009_10, 2009_10, 2009_10, 20…
$ Gender <fct> male, male, male, male, female, male, male, female, fema…
$ Age <int> 34, 34, 34, 4, 49, 9, 8, 45, 45, 45, 66, 58, 54, 10, 58,…
$ AgeMonths <int> 409, 409, 409, 49, 596, 115, 101, 541, 541, 541, 795, 70…
$ Race1 <fct> White, White, White, Other, White, White, White, White, …
$ Poverty <dbl> 1.36, 1.36, 1.36, 1.07, 1.91, 1.84, 2.33, 5.00, 5.00, 5.…
```
Consider the differences between trying to access the `ID` variable each of the three ways shown below. In the first case, we are simply creating a character vector of length one that contains the single string `ID`. The second command causes **R** to search the global environment for an object called `ID`—which does not exist. In the third command, we correctly access the `ID` variable within the `nhanes_small` data frame, which *is* accessible in the global environment. These are different examples of how **R** uses [*scoping*](https://en.wikipedia.org/w/index.php?search=scoping) to identify objects.
```
"ID" # string variable
```
```
[1] "ID"
```
```
ID # generates an error
```
```
Error in eval(expr, envir, enclos): object 'ID' not found
```
```
nhanes_small %>%
pull(ID) %>% # access within a data frame
summary()
```
```
Min. 1st Qu. Median Mean 3rd Qu. Max.
51624 56904 62160 61945 67039 71915
```
How might this be relevant?
Notice that several of the variables in `nhanes_small` are factors. We might want to convert each of them to type `character`. Typically, we would do this using the `mutate()` command that we introduced in Chapter [4](ch-dataI.html#ch:dataI).
```
nhanes_small %>%
mutate(SurveyYr = as.character(SurveyYr)) %>%
select(ID, SurveyYr) %>%
glimpse()
```
```
Rows: 10,000
Columns: 2
$ ID <int> 51624, 51624, 51624, 51625, 51630, 51638, 51646, 51647, 5…
$ SurveyYr <chr> "2009_10", "2009_10", "2009_10", "2009_10", "2009_10", "2…
```
Note however, that in this construction we have to know the name of the variable we wish to convert (i.e., `SurveyYr`) and list it explicitly. This is unfortunate if the goal is to automate
our data wrangling (see Chapter [7](ch-iteration.html#ch:iteration)).
If we tried instead to set the name of the column (i.e., `SurveyYr`) to a variable (i.e., `varname`) and use that variable to change the names, it would not work as intended.
In this case, rather than changing the data type of `SurveyYr`, we have created a new variable called `varname` that is a character vector in which every value is the string `"SurveyYr"`.
```
varname <- "SurveyYr"
nhanes_small %>%
mutate(varname = as.character(varname)) %>%
select(ID, SurveyYr, varname) %>%
glimpse()
```
```
Rows: 10,000
Columns: 3
$ ID <int> 51624, 51624, 51624, 51625, 51630, 51638, 51646, 51647, 5…
$ SurveyYr <fct> 2009_10, 2009_10, 2009_10, 2009_10, 2009_10, 2009_10, 200…
$ varname <chr> "SurveyYr", "SurveyYr", "SurveyYr", "SurveyYr", "SurveyYr…
```
This behavior is a consequence of a feature of the **R** language called [*non\-standard evaluation*](https://en.wikipedia.org/w/index.php?search=non-standard%20evaluation) (NSE). The **rlang** package provides a principled way to work with expressions in the **tidyverse** and is used extensively in the **dplyr** package.
The **dplyr** functions use a form of non\-standard evaluation called [*tidy evaluation*](https://en.wikipedia.org/w/index.php?search=tidy%20evaluation).
Tidy evaluation allows **R** to locate `SurveyYr`, even though there is no object called `SurveyYr` in the global environment. Here, `mutate()` knows to look for `SurveyYr` within the `nhanes_small` data frame.
In this case, we can solve our problem using the `across()` and `where()` adverbs and `mutate()`.
Namely, we can use `is.factor()` in conjunction with `across()` to find all of the variables that are factors and convert them to `character` using `as.character()`.
```
nhanes_small %>%
mutate(across(where(is.factor), as.character))
```
```
# A tibble: 10,000 × 7
ID SurveyYr Gender Age AgeMonths Race1 Poverty
<int> <chr> <chr> <int> <int> <chr> <dbl>
1 51624 2009_10 male 34 409 White 1.36
2 51624 2009_10 male 34 409 White 1.36
3 51624 2009_10 male 34 409 White 1.36
4 51625 2009_10 male 4 49 Other 1.07
5 51630 2009_10 female 49 596 White 1.91
6 51638 2009_10 male 9 115 White 1.84
7 51646 2009_10 male 8 101 White 2.33
8 51647 2009_10 female 45 541 White 5
9 51647 2009_10 female 45 541 White 5
10 51647 2009_10 female 45 541 White 5
# … with 9,990 more rows
```
When you come to a problem that involves programming with **tidyverse** functions that you aren’t sure how to solve, consider these approaches:
1. Use `across()` and/or `where()`: This approach is outlined above and in Section [7\.2](ch-iteration.html#sec:scoped). If this suffices, it will usually be the cleanest solution.
2. Pass the dots: Sometimes, you can avoid having to hard\-code variable names by passing the dots through your function to another **tidyverse** function. For example, you might allow your function to take an arbitrary list of arguments that are passed directly to `select()` or `filter()`.
3. Use tidy evaluation: A full discussion of this is beyond the scope of this book, but we provide pointers to more material in Section [C.6](ch-function.html#sec:algorithmic-further).
4. Use base **R** syntax: If you are comfortable working in this dialect, the code can be relatively simple, but it has two obstacles for **tidyverse** learners: 1\) the use of selectors like `[[` may be jarring and break pipelines; and 2\) the quotation marks around variable names can get cumbersome. We show a simple example below.
#### A base **R** implementation
In base **R**, you can do this:
```
var_to_char_base <- function(data, varname) {
data[[varname]] <- as.character(data[[varname]])
data
}
var_to_char_base(nhanes_small, "SurveyYr")
```
```
# A tibble: 10,000 × 7
ID SurveyYr Gender Age AgeMonths Race1 Poverty
<int> <chr> <fct> <int> <int> <fct> <dbl>
1 51624 2009_10 male 34 409 White 1.36
2 51624 2009_10 male 34 409 White 1.36
3 51624 2009_10 male 34 409 White 1.36
4 51625 2009_10 male 4 49 Other 1.07
5 51630 2009_10 female 49 596 White 1.91
6 51638 2009_10 male 9 115 White 1.84
7 51646 2009_10 male 8 101 White 2.33
8 51647 2009_10 female 45 541 White 5
9 51647 2009_10 female 45 541 White 5
10 51647 2009_10 female 45 541 White 5
# … with 9,990 more rows
```
Note that the variable names have quotes when they are put into the function.
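For comparison, a tidy evaluation version (approach 3 above) might use the embrace operator; this sketch, including the name `var_to_char_tidy()`, is ours and not from the text:
```
var_to_char_tidy <- function(data, var) {
  data %>%
    # "{{ var }}" uses glue-style name injection, so the supplied column is overwritten
    mutate("{{ var }}" := as.character({{ var }}))
}
var_to_char_tidy(nhanes_small, SurveyYr)
```
Here the column name is passed without quotes, in keeping with the usual **tidyverse** interface.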
C.5 Debugging and defensive coding
----------------------------------
It can be challenging to identify issues and problems with code that is not working.
Both **R** and **RStudio** include support for debugging functions and code.
Calling the `browser()` function
in the body of a function will cause execution to stop and set up an **R** interpreter.
Once at the browser prompt, the analyst
can enter commands (such as `c` to continue execution, `f` to finish execution of the current function, `n` to evaluate the next statement without stepping into function calls, `s` to evaluate the next statement while stepping into function calls, `Q` to exit the browser, or `help` to print this list of commands).
Other commands entered
at the browser are interpreted as **R** expressions to be evaluated (the function `ls()` lists available objects).
Calls to the browser can be set using the `debug()` or `debugonce()` functions (and turned off using the `undebug()` function).
**RStudio** includes a debugging mode that is displayed when `debug()` is called.
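As a minimal sketch (ours, not from the text), a call to `browser()` might be inserted like this:
```
# toy example: pause inside the function to inspect intermediate objects
buggy_summary <- function(x) {
  total <- sum(x)
  browser() # execution stops here; try ls(), n, c, or Q at the browser prompt
  total / length(x)
}
# buggy_summary(c(1, 2, NA)) # run interactively to enter the browser
```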
Adopting [*defensive coding*](https://en.wikipedia.org/w/index.php?search=defensive%20coding) techniques is always recommended: They tend to identify problems early and minimize errors.
The `try()` function can be used to evaluate an
expression while allowing for error recovery.
The `stop()` function can be used to stop evaluation of the current
expression and execute an error action (typically displaying an error message).
More flexible testing is available in the **assertthat** package.
Let’s revisit the `ci_calc()` function we defined to calculate a confidence interval. How might we make this more robust?
We can begin by confirming that the calling arguments are sensible.
```
library(assertthat)
# calculate a t confidence interval for a mean
ci_calc <- function(x, alpha = 0.95) {
if (length(x) < 2) {
stop("Need to provide a vector of length at least 2.\n")
}
if (alpha < 0 | alpha > 1) {
stop("alpha must be between 0 and 1.\n")
}
assert_that(is.numeric(x))
samp_size <- length(x)
t_star <- qt(1 - ((1 - alpha)/2), df = samp_size - 1)
my_mean <- mean(x)
my_sd <- sd(x)
se <- my_sd / sqrt(samp_size)
me <- t_star * se
return(list(ci_vals = c(my_mean - me, my_mean + me),
alpha = alpha))
}
ci_calc(1) # will generate error
```
```
Error in ci_calc(1): Need to provide a vector of length at least 2.
```
```
ci_calc(1:3, alpha = -1) # will generate error
```
```
Error in ci_calc(1:3, alpha = -1): alpha must be between 0 and 1.
```
```
ci_calc(c("hello", "goodbye")) # will generate error
```
```
Error: x is not a numeric or integer vector
```
C.6 Further resources
---------------------
More examples of functions can be found in Chapter [13](ch-simulation.html#ch:simulation).
The American Statistical Association’s *Guidelines for Undergraduate Programs in Statistics* (American Statistical Association Undergraduate Guidelines Workgroup 2014\) stress the importance of algorithmic thinking (see also Deborah Nolan and Temple Lang (2010\)).
Rizzo (2019\) and H. Wickham (2019\) provide
useful reviews of statistical computing.
A variety of online resources are available to describe how to create **R** packages and to deploy them on GitHub (see for example <http://kbroman.org/pkg_primer>).
Hadley Wickham (2015\) is a comprehensive and accessible guide to writing **R** packages.
The **testthat** package is helpful in
structuring more extensive unit tests for functions.
The **dplyr** package documentation includes a vignette detailing its use of the [**lazyeval** package](https://cran.r-project.org/web/packages/lazyeval/vignettes/lazyeval.html) for performing [non\-standard evaluation](https://cran.r-project.org/web/packages/dplyr/vignettes/nse.html). Henry (2020\) explains the [most recent developments surrounding tidy evaluation](https://rstudio.com/resources/rstudioconf-2020/interactivity-and-programming-in-the-tidyverse/).
H. Wickham (2019\) includes [a fuller discussion of passing the dots](https://adv-r.hadley.nz/functions.html#fun-dot-dot-dot).
C.7 Exercises
-------------
**Problem 1 (Easy)**: Consider the following function definition, and subsequent call of that function.
```
library(tidyverse)
summarize_species <- function(pattern = "Human") {
x <- starwars %>%
filter(species == pattern) %>%
summarize(
num_people = n(),
avg_height = mean(height, na.rm = TRUE)
)
}
summarize_species("Wookiee")
x
```
```
Error in eval(expr, envir, enclos): object 'x' not found
```
What causes this error?
**Problem 2 (Medium)**: Write a function called `count_name` that, when given a name as an argument, returns the total number of births by year from the `babynames` data frame in the `babynames` package that match that name. The function should return one row per year that matches (and generate an error message if there are no matches).
Run the function once with the argument `Ezekiel` and once with `Ezze`.
**Problem 3 (Medium)**:
1. Write a function called `count_na` that, when given a vector as an argument, will count the number of `NA's` in that vector.
Count the number of missing values in the `SEXRISK` variable in the `HELPfull` data frame in the `mosaicData` package.
2. Apply `count_na` to the columns of the `Teams` data frame from the `Lahman` package.
How many of the columns have missing data?
**Problem 4 (Medium)**: Write a function called `map_negative` that takes as arguments a data frame and the
name of a variable and returns that data frame with the negative values of the variable
replaced by zeroes. Apply this function to the `cyl` variable in the `mtcars` data set.
**Problem 5 (Medium)**: Write a function called `grab_name` that, when given a name and a year as an argument, returns the rows from the `babynames` data frame in the `babynames` package that match that name for that year (and returns an error if that name and year combination does not match any rows).
Run the function once with the arguments `Ezekiel` and `1883` and once with `Ezekiel` and `1983`.
**Problem 6 (Medium)**: Write a function called `prop_cancel` that takes as arguments a month number
and destination airport and returns the proportion of flights missing arrival delay for each day to that destination.
Apply this function to the `nycflights13` package for February and Atlanta airport `ATL` and again with an invalid month number.
**Problem 7 (Medium)**: Write a function called `cum_min()` that, when given a vector as an
argument, returns the cumulative minimum of that vector. Compare the result of your function
to the built\-in `cummin()` function for the vector `c(4, 7, 9, -2, 12)`.
**Problem 8 (Hard)**: Benford’s law concerns the frequency distribution of leading digits from numerical data.
Write a function that takes a vector of numbers and returns the empirical distribution
of the first digit. Apply this function to data from the `corporate.payment` data set in the `benford.analysis` package.
C.8 Supplementary exercises
---------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-algorithmic.html\#algorithmic\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-algorithmic.html#algorithmic-online-exercises)
**Problem 1 (Easy)**: Consider the following function definition, and subsequent call of that function.
```
library(tidyverse)
summarize_species <- function(data, pattern = "Human") {
data %>%
filter(species == pattern) %>%
summarize(
num_people = n(),
avg_height = mean(height, na.rm = TRUE)
)
}
summarize_species("Wookiee", starwars)
```
```
Error in UseMethod("filter"): no applicable method for 'filter' applied to an object of class "character"
```
How could we modify the call to the function to make this work?
**Problem 2 (Easy)**: Consider the following function definition, and subsequent call of that function.
```
library(tidyverse)
summarize_species <- function(data, pattern = "Human") {
data %>%
filter(species == pattern) %>%
summarize(
num_people = n(),
avg_height = mean(height, na.rm = TRUE)
)
}
summarize_species("Wookiee")
```
What errors are there in this function definition and call?
**Problem 3 (Medium)**:
1. Write a function called `group_consecutive()` that takes a vector of integers as input and returns a character string where the integers are sorted and consecutive values are listed with a `-` and other values separated by commas.
2. Use the function and the `Lahman` package to concisely list which seasons where Jim Bouton (author of *Ball Four*) played major league baseball.
---
| Data Science |
mdsr-book.github.io | https://mdsr-book.github.io/mdsr2e/ch-reproduce.html |
D Reproducible analysis and workflow
====================================
The notion that scientific findings can be confirmed repeatedly through replication is fundamental to the centuries\-old paradigm of science. The underlying logic is that if you have identified a truth about the world, that truth should persist upon further investigation by other observers.
In the physical sciences, there are two challenges in replicating a study: *replicating* the experiment itself, and *reproducing* the subsequent data analysis that led to the conclusion.
More concisely, replicability means that different people get the same results with *different* data. Reproducibility means that the same person (or different people) get the same results with the *same* data.
It is easy to imagine why replicating a physical experiment might be difficult, and not being physical scientists ourselves, we won’t tackle those issues here. On the other hand, the latter challenge of reproducing the data analysis is most certainly our domain. It seems like a much lower hurdle to clear—isn’t this just a matter of following a few steps? Upon review, for a variety of reasons many scientists are in fact tripping over even this low hurdle.
To further explicate the distinction between [*replicability*](https://en.wikipedia.org/w/index.php?search=replicability) and [*reproducibility*](https://en.wikipedia.org/w/index.php?search=reproducibility), recall that scientists are legendary keepers of lab notebooks. These notebooks are intended to contain all of the information needed to carry out the study again (i.e., replicate): reagents and other supplies, equipment, experimental material, etc. Modern software tools enable scientists to carry this same ethos to data analysis: Everything needed to repeat the analysis (i.e., reproduce) should be recorded in one place.
Even better, modern software tools allow the analysis to be repeated at the push of a button. This provides a proof that the analysis being documented is in fact exactly the same as the analysis that was performed. Moreover, this capability is a boon to those generating the analysis. It enables them to draft and redraft the analysis until they get it exactly right. Even better, when the analysis is written appropriately, it’s straightforward to apply the analysis to new data. Spreadsheet software, despite its popularity, is not suitable for this. Spreadsheet software references specific rows and columns of data, and so the analysis commands themselves need to be updated to conform to new data.
The [*replication crisis*](https://en.wikipedia.org/w/index.php?search=replication%20crisis) is a very real problem for modern science. More than 15 years ago, Ioannidis (2005\) argued that “most published research findings are false.” More recently, the journal *Nature* ran a series of editorials bemoaning the lack of replicability in published research (Editorial 2013\). It now appears that even among peer\-reviewed, published scientific articles, many of the findings—which are supported by experimental and statistical evidence—do not hold up under the scrutiny of replication. That is, when other researchers try to do the same study, they don’t reliably reach the same conclusions.
Some of the issues leading to irreproducibility are hard to understand, let alone solve. Much of the blame involves multiplicity and the “garden of forking paths” introduced in Chapter [9](ch-foundations.html#ch:foundations).
While we touch upon issues related to null hypothesis testing in Chapter [9](ch-foundations.html#ch:foundations), the focus of this chapter is on modern workflows for [*reproducible analysis*](https://en.wikipedia.org/w/index.php?search=reproducible%20analysis), since the ability to regenerate a set of results at a later point in time is a necessary but not sufficient condition for reproducible results.
The National Academies report on undergraduate data science included workflow and reproducibility as an important part of [*data acumen*](https://en.wikipedia.org/w/index.php?search=data%20acumen) (National Academies of Science, Engineering, and Medicine 2018\).
They described key components including workflows and workflow systems, reproducible analysis, documentation and code standards, version control systems, and collaboration.
Reproducible workflows consist of three components: a fully scriptable statistical programming environment (such as **R** or Python), reproducible analysis (first described as literate programming), and version control (commonly implemented using GitHub).
D.1 Scriptable statistical computing
------------------------------------
In order for data analysis to be reproducible, all of the steps taken in the analysis have to be recorded in a linear fashion. Scriptable applications like Python, **R**, SAS, and Stata do this by default.
Even when graphical user interfaces to these programs are used, they add the automatically\-generated code to the history so that it too can be recorded.
Thus, the full series of commands that make up the data analysis can be recorded, reviewed, and transmitted. Contrast this with the behavior of spreadsheet applications like [*Microsoft Excel*](https://en.wikipedia.org/w/index.php?search=Microsoft%20Excel) and [*Google Sheets*](https://en.wikipedia.org/w/index.php?search=Google%20Sheets), where it is not always possible to fully retrace one’s steps.
D.2 Reproducible analysis with **R** Markdown
---------------------------------------------
The concept of [*literate programming*](https://en.wikipedia.org/w/index.php?search=literate%20programming) was introduced by Knuth decades ago (Knuth 1992\). His advice was:
> “Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.”
Central to this prescription is the idea that the relevant documentation for the code—which is understandable not just to the programmer, but to other human beings as well—occurs alongside the code itself. In data analysis, this is manifest as the need to have three kinds of things in one document: the code, the results of that code, and the written analysis. We belong to a growing group of people who find the **rmarkdown** (J. J. Allaire et al. 2014\) and **knitr** packages (Y. Xie 2014\) to be an environment that is ideally suited to support a reproducible analysis workflow (B. S. Baumer et al. 2014\).
The **rmarkdown** and **knitr** packages use a source file and output file paradigm. This approach is common in programming but is fundamentally different than a “what\-you\-see\-is\-what\-you\-get” editor like [*Microsoft Word*](https://en.wikipedia.org/w/index.php?search=Microsoft%20Word) or [*Google Docs*](https://en.wikipedia.org/w/index.php?search=Google%20Docs). Code is typed into the source document, which is then rendered into an output format that is readable by anyone. The principles of literate programming stipulate that the source file should *also* be
readable by anyone.
We favor the simple document markup language **R** Markdown (J. Allaire et al. 2020\) for most applications. An **R** Markdown source file can be rendered (by **knitr**, leveraging [`pandoc`](http://johnmcfarlane.net/pandoc)) into PDF, HTML, and Microsoft Word formats. The resulting document will contain the **R** code, the results of that code, and the analyst’s written analysis.
Markdown is well\-integrated with **RStudio**, and both *LaTeX* and Markdown source files can be rendered via a single\-click mechanism.
More details can be found in Y. Xie (2014\) and Gandrud (2014\) as well as the CRAN reproducible research task view (Kuhn 2020\).
See also (<http://yihui.name/knitr>).
As an example of how these systems work, we demonstrate a document written in the Markdown format
using data from the
`SwimRecords` data frame. Within **RStudio**, a new template **R** Markdown file can be generated by selecting `R Markdown` from the `New File` option on the `File` menu. This generates the dialog box displayed in Figure
[D.1](ch-reproduce.html#fig:rmarkdowndialog). The default output format is HTML, but other options (PDF or Microsoft Word) are available.
The Markdown templates included with some packages (e.g., **mosaic**) are useful to set up more appropriate defaults for figure and font size. These can be accessed using the “From Template” option when opening a new Markdown file.
Figure D.1: Generating a new R Markdown file in RStudio.
```
---
title: "Sample R Markdown example"
author: "Sample User"
date: "November 8, 2020"
output:
html_document: default
fig_height: 2.8
fig_width: 5
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
library(tidyverse)
library(mdsr)
library(mosaic)
```
## R Markdown
This is an R Markdown document. Markdown is a simple formatting syntax for
authoring HTML, PDF, and MS Word documents. For more details on using R
Markdown see http://rmarkdown.rstudio.com.
When you click the **Knit** button a document will be generated that
includes both content as well as the output of any embedded R code chunks
within the document. You can embed an R code chunk like this:
```{r display}
glimpse(SwimRecords)
```
## Including Plots
You can also embed plots, for example:
```{r scatplot, echo=FALSE, message = FALSE}
ggplot(
data = SwimRecords,
aes(x = year, y = time, color = sex)
) +
geom_point() +
geom_smooth(method = loess, se = FALSE) +
theme(legend.position = "right") +
labs(title = "100m Swimming Records over time")
```
There are n=`r nrow(SwimRecords)` rows in the Swim records dataset.
Note that the `echo = FALSE` option was added to the code chunk to
prevent printing of the R code that generated the plot.
```
Figure D.2: Sample Markdown input file.
Figure [D.2](ch-reproduce.html#fig:mark1) displays a modified version of the default **R** Markdown input file.
The file is given a title (`Sample R Markdown example`) with output format set by default to HTML.
Simple markup (such as bolding) is added through use of the `**` characters before and after the word `Knit`.
Blocks of code are begun using a line with three back quotes followed by `{r}`, and are closed with a line containing three back quotes.
The formatted output can be generated and displayed by clicking the `Knit HTML` button in RStudio, or by using the commands in the following code block, which can also be used when running **R** without the benefit of **RStudio**.
```
library(rmarkdown)
render("filename.Rmd") # creates filename.html
browseURL("filename.html")
```
The `render()` function extracts the **R** commands from a specially formatted **R** Markdown
input file (`filename.Rmd`), evaluates them, and integrates the resulting output, including text and graphics, into an output file (`filename.html`).
A screenshot of the results of performing these steps on the `.Rmd` file displayed in Figure [D.2](ch-reproduce.html#fig:mark1) is displayed in Figure [D.3](ch-reproduce.html#fig:rmark).
`render()` uses the value of the `output:` option to determine what format to generate. If the `.Rmd` file specified `output: word_document`, then a Microsoft Word document
would be created.
Figure D.3: Formatted output from R Markdown example.
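The output format can also be overridden at render time without editing the YAML header (a small sketch, using the placeholder file name from above):
```
library(rmarkdown)
# override the YAML default and produce a Microsoft Word document instead
render("filename.Rmd", output_format = "word_document")
```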
Alternatively, a PDF or Microsoft Word document can be generated in **RStudio** by selecting `New` from the `R Markdown` menu, then clicking on the PDF or Word options.
**RStudio** also supports the creation of **R** Presentations using a variant of the **R** Markdown syntax. Instructions and an example can be found by opening a new `R presentations` document in **RStudio**.
D.3 Projects and version control
--------------------------------
[Projects](https://support.rstudio.com/hc/en-us/articles/200526207-Using-Projects) are a useful feature of **RStudio**. A project provides a separate workspace. Selecting a project also reorients your **RStudio** environment to a specified directory, in the process reorienting the Files tab, the working directory, etc. Once you start working on multiple projects, being able to switch back and forth becomes very helpful.
Given that data science has been called a “team sport,” the ability to
track changes to files and discuss issues in a collaborative manner is an important prerequisite to reproducible analysis.
Projects can be tied to a version control system, such as [*Subversion*](https://en.wikipedia.org/w/index.php?search=Subversion) or [*GitHub*](https://en.wikipedia.org/w/index.php?search=GitHub). These systems help you and your collaborators keep track of changes to files, so that you can go back in time to review changes to previous pieces of code, compare versions, and retrieve older versions as needed.
While critical for collaboration with others, source code version control systems are also useful for individual projects because they document changes and maintain version histories. In such a setting, the collaboration is with your future self!
[*GitHub*](https://en.wikipedia.org/w/index.php?search=GitHub) is a cloud\-based implementation of Git that is tightly integrated into **RStudio**.
It works efficiently, without cluttering your workspace with duplicate copies of old files or compressed archives.
**RStudio** users can collaborate on projects hosted on GitHub without having to use the command line. This has proven to be an effective way of ensuring a consistent, reproducible workflow, even for beginners. This book was written collaboratively through a private repository on GitHub, just as the [**mdsr**](http://github.com/mdsr-book/mdsr) package is maintained in a public repository.
Finally, random number seeds (see section [13\.6\.3](ch-simulation.html#sec:seed)) are an important part of a reproducible workflow.
It is a good idea to set a seed to allow others (or yourself) to be able to reproduce a particular analysis that has a stochastic component to it.
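For example (a trivial sketch; the seed value is arbitrary):
```
set.seed(2021) # any fixed integer will do
rnorm(3)       # these draws are the same on every run that uses this seed
```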
D.4 Further resources
---------------------
[Project TIER](https://www.haverford.edu/project-tier/) is an organization at [*Haverford College*](https://en.wikipedia.org/w/index.php?search=Haverford%20College) that has developed a protocol (Ball and Medeiros 2012\) for reproducible research. Their efforts originated in the social sciences using [*Stata*](https://en.wikipedia.org/w/index.php?search=Stata) but have since expanded to include **R**.
**R** Markdown is under active development. For the latest features see the
**R** Markdown authoring guide at (<http://rmarkdown.rstudio.com>).
The **RStudio** cheat sheet serves as a useful reference.
Broman and Woo (2018\) describe best practices for organizing data in spreadsheets.
GitHub can be challenging to learn but is now the default in many data science research
settings.
[Jenny Bryan](https://en.wikipedia.org/w/index.php?search=Jenny%20Bryan)’s resources on [*Happy Git and GitHub for the useR*](http://happygitwithr.com)
are particularly relevant for new data scientists beginning to use GitHub (Bryan, the STAT 545 TAs, and Hester 2018\).
Another challenge for reproducible analysis is ever\-changing versions of **R** and other **R** packages.
The **packrat** and **renv**
packages help ensure that projects can maintain a particular version of **R** and set of packages.
The **reproducible** package provides a set of tools to enhance reproducibility.
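For example, a typical **renv** workflow might look like the following sketch (based on that package’s documentation rather than on this text):
```
# initialize a project-local library and lockfile for the current project
renv::init()
# record the exact package versions currently in use in renv.lock
renv::snapshot()
# later, or on another machine, reinstall the recorded versions
renv::restore()
```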
D.5 Exercises
-------------
**Problem 1 (Easy)**: Insert a chunk in an R Markdown document that generates an error.
Set the options so that the file renders even though there is an error.
(Note: Some errors are easier to diagnose if you can execute specific R statements during rendering and leave more evidence behind for forensic examination.)
**Problem 2 (Easy)**: Why does the `mosaic` package plain R Markdown template include the code chunk option `message=FALSE` when the `mosaic` package is loaded?
**Problem 3 (Easy)**: Consider an R Markdown file that includes the following code chunks.
What will be output when this file is rendered?
```
```{r}
x <- 1:5
```
```
```
```{r}
x <- x + 1
```
```
```
```{r}
x
```
```
**Problem 4 (Easy)**: Consider an R Markdown file that includes the following code chunks.
What will be output when the file is rendered?
```
```{r echo = FALSE}
x <- 1:5
```
```
```
```{r echo = FALSE}
x <- x + 1
```
```
```
```{r include = FALSE}
x
```
```
**Problem 5 (Easy)**: Consider an R Markdown file that includes the following code chunks.
What will be output when this file is rendered?
```
```{r echo = FALSE}
x <- 1:5
```
```
```
```{r echo = FALSE}
x <- x + 1
```
```
```
```{r echo = FALSE}
x
```
```
**Problem 6 (Easy)**: Consider an R Markdown file that includes the following code chunks. What will be output when the file is rendered?
```
```{r echo = FALSE}
x <- 1:5
```
```
```
```{r echo = FALSE, eval = FALSE}
x <- x + 1
```
```
```
```{r echo = FALSE}
x
```
```
**Problem 7 (Easy)**: Describe in words what the following excerpt from an R Markdown file will display when rendered.
```
$\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 \cdot x + \epsilon$
```
**Problem 8 (Easy)**: Create an RMarkdown file that uses an inline call to R to display the value of an object that you have created previously in that file.
**Problem 9 (Easy)**: Describe the implications of changing `warning=TRUE` to `warning=FALSE` in the following code chunk.
```
sqrt(-1)
```
```
Warning in sqrt(-1): NaNs produced
```
```
[1] NaN
```
**Problem 10 (Easy)**: Describe how the `fig.width` and `fig.height` chunk options can be used to control the size of graphical figures.
Generate two versions of a figure with different options.
**Problem 11 (Medium)**: The `knitr` package allows the analyst to display nicely formatted
tables and results when outputting to pdf files.
Use the following code chunk as an example to create a similar display using your own data.
```
library(mdsr)
library(mosaicData)
mod <- broom::tidy(lm(cesd ~ mcs + sex, data = HELPrct))
knitr::kable(
mod,
digits = c(0, 2, 2, 2, 4),
caption = "Regression model from HELP clinical trial.",
longtable = TRUE
)
```
Table D.1: Regression model from HELP clinical trial.
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | 55\.79 | 1\.31 | 42\.62 | 0\.0000 |
| mcs | \-0\.65 | 0\.03 | \-19\.48 | 0\.0000 |
| sexmale | \-2\.95 | 1\.01 | \-2\.91 | 0\.0038 |
**Problem 12 (Medium)**: Explain what the following code chunks will display and why this might be useful for technical reports from a data science project.
```
```{r chunk1, eval = TRUE, include = FALSE}
x <- 15
cat("assigning value to x.\n")
```
```
```
```{r chunk2, eval = TRUE, include = FALSE}
x <- x + 3
cat("updating value of x.\n")
```
```
```
```{r chunk3, eval = FALSE, include = TRUE}
cat("x =", x, "\n")
```
```
```
```{r chunk1, eval = FALSE, include = TRUE}
```
```
```
```{r chunk2, eval = FALSE, include = TRUE}
```
```
D.6 Supplementary exercises
---------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-reproducible.html\#reproducible\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-reproducible.html#reproducible-online-exercises)
No exercises found
---
D.1 Scriptable statistical computing
------------------------------------
In order for data analysis to be reproducible, all of the steps taken in the analysis have to be recorded in a linear fashion. Scriptable applications like Python, **R**, SAS, and Stata do this by default.
Even when graphical user interfaces to these programs are used, they add the automatically\-generated code to the history so that it too can be recorded.
Thus, the full series of commands that make up the data analysis can be recorded, reviewed, and transmitted. Contrast this with the behavior of spreadsheet applications like [*Microsoft Excel*](https://en.wikipedia.org/w/index.php?search=Microsoft%20Excel) and [*Google Sheets*](https://en.wikipedia.org/w/index.php?search=Google%20Sheets), where it is not always possible to fully retrace one’s steps.
D.2 Reproducible analysis with **R** Markdown
---------------------------------------------
The concept of [*literate programming*](https://en.wikipedia.org/w/index.php?search=literate%20programming) was introduced by Knuth decades ago (Knuth 1992\). His advice was:
> “Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.”
Central to this prescription is the idea that the relevant documentation for the code—which is understandable not just to the programmer, but to other human beings as well—occurs alongside the code itself. In data analysis, this is manifest as the need to have three kinds of things in one document: the code, the results of that code, and the written analysis. We belong to a growing group of people who find the **rmarkdown** (J. J. Allaire et al. 2014\) and **knitr** packages (Y. Xie 2014\) to be an environment that is ideally suited to support a reproducible analysis workflow (B. S. Baumer et al. 2014\).
The **rmarkdown** and **knitr** packages use a source file and output file paradigm. This approach is common in programming but is fundamentally different than a “what\-you\-see\-is\-what\-you\-get” editor like [*Microsoft Word*](https://en.wikipedia.org/w/index.php?search=Microsoft%20Word) or [*Google Docs*](https://en.wikipedia.org/w/index.php?search=Google%20Docs). Code is typed into the source document, which is then rendered into an output format that is readable by anyone. The principles of literate programming stipulate that the source file should *also* be
readable by anyone.
We favor the simple document markup language **R** Markdown (J. Allaire et al. 2020\) for most applications. An **R** Markdown source file can be rendered (by **knitr**, leveraging [`pandoc`](http://johnmcfarlane.net/pandoc)) into PDF, HTML, and Microsoft Word formats. The resulting document will contain the **R** code, the results of that code, and the analyst’s written analysis.
Markdown is well\-integrated with **RStudio**, and both *LaTeX* and Markdown source files can be rendered via a single\-click mechanism.
More details can be found in Y. Xie (2014\) and Gandrud (2014\) as well as the CRAN reproducible research task view (Kuhn 2020\).
See also (<http://yihui.name/knitr>).
As an example of how these systems work, we demonstrate a document written in the Markdown format
using data from the
`SwimRecords` data frame. Within **RStudio**, a new template **R** Markdown file can be generated by selecting `R Markdown` from the `New File` option on the `File` menu. This generates the dialog box displayed in Figure
[D.1](ch-reproduce.html#fig:rmarkdowndialog). The default output format is HTML, but other options (PDF or Microsoft Word) are available.
The Markdown templates included with some packages (e.g., **mosaic**) are useful to set up more appropriate defaults for figure and font size. These can be accessed using the “From Template” option when opening a new Markdown file.
Figure D.1: Generating a new R Markdown file in RStudio.
```
---
title: "Sample R Markdown example"
author: "Sample User"
date: "November 8, 2020"
output:
html_document: default
fig_height: 2.8
fig_width: 5
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
library(tidyverse)
library(mdsr)
library(mosaic)
```
## R Markdown
This is an R Markdown document. Markdown is a simple formatting syntax for
authoring HTML, PDF, and MS Word documents. For more details on using R
Markdown see http://rmarkdown.rstudio.com.
When you click the **Knit** button a document will be generated that
includes both content as well as the output of any embedded R code chunks
within the document. You can embed an R code chunk like this:
```{r display}
glimpse(SwimRecords)
```
## Including Plots
You can also embed plots, for example:
```{r scatplot, echo=FALSE, message = FALSE}
ggplot(
data = SwimRecords,
aes(x = year, y = time, color = sex)
) +
geom_point() +
geom_smooth(method = loess, se = FALSE) +
theme(legend.position = "right") +
labs(title = "100m Swimming Records over time")
```
There are n=`r nrow(SwimRecords)` rows in the Swim records dataset.
Note that the `echo = FALSE` option was added to the code chunk to
prevent printing of the R code that generated the plot.
```
Figure D.2: Sample Markdown input file.
Figure [D.2](ch-reproduce.html#fig:mark1) displays a modified version of the default **R** Markdown input file.
The file is given a title (`Sample R Markdown example`) with output format set by default to HTML.
Simple markup (such as bolding) is added through use of the `**` characters before and after the word `Help`.
Blocks of code are begun using a ````{r}` line and closed
with a ````` line (three back quotes in both cases).
The formatted output can be generated and displayed by clicking the `Knit HTML` button in RStudio, or by using the commands in the following code block, which can also be used when running **R** without the benefit of **RStudio**.
```
library(rmarkdown)
render("filename.Rmd") # creates filename.html
browseURL("filename.html")
```
The `render()` function extracts the **R** commands from a specially formatted **R** Markdown
input file (`filename.Rmd`), evaluates them, and integrates the resulting output, including text and graphics, into an output file (`filename.html`).
A screenshot of the results of performing these steps on the `.Rmd` file displayed in Figure [D.2](ch-reproduce.html#fig:mark1) is displayed in Figure [D.3](ch-reproduce.html#fig:rmark).
`render()` uses the value of the `output:` option to determine what format to generate. If the `.Rmd` file specified `output: word_document`, then a Microsoft Word document
would be created.
Figure D.3: Formatted output from R Markdown example.
Alternatively, a PDF or Microsoft Word document can be generated in **RStudio** by selecting `New` from the `R Markdown` menu, then clicking on the PDF or Word options.
**RStudio** also supports the creation of **R** Presentations using a variant of the **R** Markdown syntax. Instructions and an example can be found by opening a new `R presentations` document in **RStudio**.
D.3 Projects and version control
--------------------------------
[Projects](https://support.rstudio.com/hc/en-us/articles/200526207-Using-Projects) are a useful feature of **RStudio**. A project provides a separate workspace. Selecting a project also reorients your **RStudio** environment to a specified directory, in the process reorienting the Files tab, the working directory, etc. Once you start working on multiple projects, being able to switch back and forth becomes very helpful.
Given that data science has been called a “team sport,” the ability to
track changes to files and discuss issues in a collaborative manner is an important prerequisite to reproducible analysis.
Projects can be tied to a version control system, such as [*Subversion*](https://en.wikipedia.org/w/index.php?search=Subversion) or [*GitHub*](https://en.wikipedia.org/w/index.php?search=GitHub). These systems help you and your collaborators keep track of changes to files, so that you can go back in time to review changes to previous pieces of code, compare versions, and retrieve older versions as needed.
While critical for collaboration with others, source code version control systems are also useful for individual projects because they document changes and maintain version histories. In such a setting, the collaboration is with your future self!
[*GitHub*](https://en.wikipedia.org/w/index.php?search=GitHub) is a cloud\-based implementation of Git that is tightly integrated into **RStudio**.
It works efficiently, without cluttering your workspace with duplicate copies of old files or compressed archives.
**RStudio** users can collaborate on projects hosted on GitHub without having to use the command line. This has proven to be an effective way of ensuring a consistent, reproducible workflow, even for beginners. This book was written collaboratively through a private repository on GitHub, just as the [**mdsr**](http://github.com/mdsr-book/mdsr) package is maintained in a public repository.
Finally, random number seeds (see section [13\.6\.3](ch-simulation.html#sec:seed)) are an important part of a reproducible workflow.
It is a good idea to set a seed to allow others (or yourself) to be able to reproduce a particular analysis that has a stochastic component to it.
D.4 Further resources
---------------------
[Project TIER](https://www.haverford.edu/project-tier/) is an organization at [*Haverford College*](https://en.wikipedia.org/w/index.php?search=Haverford%20College) that has developed a protocol (Ball and Medeiros 2012\) for reproducible research. Their efforts originated in the social sciences using [*Stata*](https://en.wikipedia.org/w/index.php?search=Stata) but have since expanded to include **R**.
**R** Markdown is under active development. For the latest features see the
**R** Markdown authoring guide at (<http://rmarkdown.rstudio.com>).
The **RStudio** cheat sheet serves as a useful reference.
Broman and Woo (2018\) describe best practices for organizing data in spreadsheets.
GitHub can be challenging to learn but is now the default in many data science research
settings.
[Jenny Bryan](https://en.wikipedia.org/w/index.php?search=Jenny%20Bryan)’s resources on [*Happy Git and GitHub for the useR*](http://happygitwithr.com)
are particularly relevant for new data scientists beginning to use GitHub (Bryan, the STAT 545 TAs, and Hester 2018\).
Another challenge for reproducible analysis is ever\-changing versions of **R** and other **R** packages.
The **packrat** and **renv**
packages help ensure that projects can maintain a particular version of **R** and set of packages.
The **reproducible** package provides a set of tools to enhance reproducibility.
D.5 Exercises
-------------
**Problem 1 (Easy)**: Insert a chunk in an R Markdown document that generates an error.
Set the options so that the file renders even though there is an error.
(Note: Some errors are easier to diagnose if you can execute specific R statements during rendering and leave more evidence behind for forensic examination.)
**Problem 2 (Easy)**: Why does the `mosaic` package plain R Markdown template include the code chunk option `message=FALSE` when the `mosaic` package is loaded?
**Problem 3 (Easy)**: Consider an R Markdown file that includes the following code chunks.
What will be output when this file is rendered?
```
```{r}
x <- 1:5
```
```
```
```{r}
x <- x + 1
```
```
```
```{r}
x
```
```
**Problem 4 (Easy)**: Consider an R Markdown file that includes the following code chunks.
What will be output when the file is rendered?
```
```{r echo = FALSE}
x <- 1:5
```
```
```
```{r echo = FALSE}
x <- x + 1
```
```
```
```{r include = FALSE}
x
```
```
**Problem 5 (Easy)**: Consider an R Markdown file that includes the following code chunks.
What will be output when this file is rendered?
```
```{r echo = FALSE}
x <- 1:5
```
```
```
```{r echo = FALSE}
x <- x + 1
```
```
```
```{r echo = FALSE}
x
```
```
**Problem 6 (Easy)**: Consider an R Markdown file that includes the following code chunks. What will be output when the file is rendered?
```
```{r echo = FALSE}
x <- 1:5
```
```
```
```{r echo = FALSE, eval = FALSE}
x <- x + 1
```
```
```
```{r echo = FALSE}
x
```
```
**Problem 7 (Easy)**: Describe in words what the following excerpt from an R Markdown file will display when rendered.
```
$\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 \cdot x + \epsilon$
```
**Problem 8 (Easy)**: Create an R Markdown file that uses an inline call to R to display the value of an object that you have created previously in that file.
**Problem 9 (Easy)**: Describe the implications of changing `warning=TRUE` to `warning=FALSE` in the following code chunk.
```
sqrt(-1)
```
```
Warning in sqrt(-1): NaNs produced
```
```
[1] NaN
```
**Problem 10 (Easy)**: Describe how the `fig.width` and `fig.height` chunk options can be used to control the size of graphical figures.
Generate two versions of a figure with different options.
**Problem 11 (Medium)**: The `knitr` package allows the analyst to display nicely formatted
tables and results when outputting to pdf files.
Use the following code chunk as an example to create a similar display using your own data.
```
library(mdsr)
library(mosaicData)
mod <- broom::tidy(lm(cesd ~ mcs + sex, data = HELPrct))
knitr::kable(
mod,
digits = c(0, 2, 2, 2, 4),
caption = "Regression model from HELP clinical trial.",
longtable = TRUE
)
```
Table D.1: Regression model from HELP clinical trial.
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | 55\.79 | 1\.31 | 42\.62 | 0\.0000 |
| mcs | \-0\.65 | 0\.03 | \-19\.48 | 0\.0000 |
| sexmale | \-2\.95 | 1\.01 | \-2\.91 | 0\.0038 |
**Problem 12 (Medium)**: Explain what the following code chunks will display and why this might be useful for technical reports from a data science project.
```
```{r chunk1, eval = TRUE, include = FALSE}
x <- 15
cat("assigning value to x.\n")
```
```
```
```{r chunk2, eval = TRUE, include = FALSE}
x <- x + 3
cat("updating value of x.\n")
```
```
```
```{r chunk3, eval = FALSE, include = TRUE}
cat("x =", x, "\n")
```
```
```
```{r chunk1, eval = FALSE, include = TRUE}
```
```
```
```{r chunk2, eval = FALSE, include = TRUE}
```
```
D.6 Supplementary exercises
---------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-reproducible.html\#reproducible\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-reproducible.html#reproducible-online-exercises)
---
E Regression modeling
=====================
Regression analysis is a powerful and flexible framework that allows an analyst to model an outcome (the [*response variable*](https://en.wikipedia.org/w/index.php?search=response%20variable)) as a function of one or more [*explanatory variables*](https://en.wikipedia.org/w/index.php?search=explanatory%20variables) (or predictors). Regression forms the basis of many important statistical models described in Chapters [9](ch-foundations.html#ch:foundations) and [11](ch-learningI.html#ch:learningI). This appendix provides a brief review of linear and logistic regression models, beginning with a single predictor, then extending to multiple predictors.
E.1 Simple linear regression
----------------------------
Linear regression can help us understand how values of a quantitative (numerical) outcome (or response) are associated with values of a quantitative explanatory (or predictor) variable.
This technique is often applied in two ways: to generate predicted values or to make inferences regarding associations in the dataset.
In some disciplines the outcome is called the dependent variable and the predictor the independent variable. We avoid such usage since the words dependent and independent have many meanings in statistics.
A simple linear regression model for an outcome \\(y\\) as a function of a predictor \\(x\\) takes the form:
\\\[
y\_i \= \\beta\_0 \+ \\beta\_1 x\_i \+ \\epsilon\_i \\,, \\text{ for } i\=1,\\ldots,n \\,,
\\]
where \\(n\\) represents the number of observations (rows) in the data set.
For this model, \\(\\beta\_0\\) is the population parameter corresponding to the [*intercept*](https://en.wikipedia.org/w/index.php?search=intercept) (i.e., the predicted value when \\(x\=0\\)) and \\(\\beta\_1\\) is the true (population) [*slope*](https://en.wikipedia.org/w/index.php?search=slope) coefficient (i.e., the predicted increase in \\(y\\) for a unit increase in \\(x\\)). The \\(\\epsilon\_i\\)’s are the [*errors*](https://en.wikipedia.org/w/index.php?search=errors) (these are assumed to be random noise with mean 0\).
We almost never know the true values of the population parameters \\(\\beta\_0\\) and \\(\\beta\_1\\), but we estimate them using data from our sample. The `lm()` function finds the “best” coefficients \\(\\hat{\\beta}\_0\\) and \\(\\hat{\\beta}\_1\\) where
the [*fitted values*](https://en.wikipedia.org/w/index.php?search=fitted%20values) (or expected values) are given by
\\(\\hat{y}\_i \= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_i\\).
What is left over is captured by the [*residuals*](https://en.wikipedia.org/w/index.php?search=residuals) (\\(\\hat{\\epsilon}\_i \= y\_i \- \\hat{y}\_i\\)).
The model almost never fits perfectly—if it did there would be no need for a model.
The best\-fitting
regression line is usually determined by a [*least squares*](https://en.wikipedia.org/w/index.php?search=least%20squares) criterion that minimizes the sum of the squared residuals (\\(\\hat{\\epsilon}\_i^2\\)).
The least squares regression line (defined by the values of \\(\\hat{\\beta\_0}\\) and \\(\\hat{\\beta}\_1\\)) is unique.
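For instance, here is a minimal sketch of how `lm()` returns these quantities, using the built\-in `mtcars` data (the variable choices are arbitrary and unrelated to the example that follows):

```
# a small self-contained illustration; the model choice here is arbitrary
fit <- lm(mpg ~ wt, data = mtcars)
coef(fit)             # estimated intercept and slope
head(fitted(fit))     # fitted values (y-hat)
head(residuals(fit))  # residuals (y minus y-hat)
```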
### E.1\.1 Motivating example: Modeling usage of a rail trail
The [*Pioneer Valley*](https://en.wikipedia.org/w/index.php?search=Pioneer%20Valley) Planning Commission (PVPC) collected data north of Chestnut Street in [*Florence, Massachusetts*](https://en.wikipedia.org/w/index.php?search=Florence,%20Massachusetts) for a 90\-day period.
Data collectors set up a laser sensor that recorded when a rail\-trail user passed the data collection station.
The data are available in the `RailTrail` data set in the **mosaicData** package.
```
library(tidyverse)
library(mdsr)
library(mosaic)
glimpse(RailTrail)
```
```
Rows: 90
Columns: 11
$ hightemp <int> 83, 73, 74, 95, 44, 69, 66, 66, 80, 79, 78, 65, 41, 59,…
$ lowtemp <int> 50, 49, 52, 61, 52, 54, 39, 38, 55, 45, 55, 48, 49, 35,…
$ avgtemp <dbl> 66.5, 61.0, 63.0, 78.0, 48.0, 61.5, 52.5, 52.0, 67.5, 6…
$ spring <int> 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1…
$ summer <int> 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0…
$ fall <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0…
$ cloudcover <dbl> 7.6, 6.3, 7.5, 2.6, 10.0, 6.6, 2.4, 0.0, 3.8, 4.1, 8.5,…
$ precip <dbl> 0.00, 0.29, 0.32, 0.00, 0.14, 0.02, 0.00, 0.00, 0.00, 0…
$ volume <int> 501, 419, 397, 385, 200, 375, 417, 629, 533, 547, 432, …
$ weekday <lgl> TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE, FALSE…
$ dayType <chr> "weekday", "weekday", "weekday", "weekend", "weekday", …
```
The PVPC wants to understand the relationship between daily ridership (i.e., the number of riders and walkers who use the bike path on any given day) and a collection of explanatory variables, including the temperature, rainfall, cloud cover, and day of the week.
In a simple linear regression model, there
is a single quantitative explanatory variable.
It seems reasonable that the high temperature for the day (`hightemp`, measured in degrees Fahrenheit) might be related to ridership, so we will explore that first.
Figure [E.1](ch-regression.html#fig:railtrail) shows a scatterplot between ridership (`volume`) and high temperature (`hightemp`), with the simple linear regression line overlaid.
The fitted coefficients are calculated through a call to the `lm()` function.
We will use functions from the **broom** package to display model results in a tidy fashion.
```
mod <- lm(volume ~ hightemp, data = RailTrail)
library(broom)
tidy(mod)
```
```
# A tibble: 2 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -17.1 59.4 -0.288 0.774
2 hightemp 5.70 0.848 6.72 0.00000000171
```
```
ggplot(RailTrail, aes(x = hightemp, y = volume)) +
geom_point() +
geom_smooth(method = "lm", se = FALSE)
```
Figure E.1: Scatterplot of number of trail crossings as a function of highest daily temperature (in degrees Fahrenheit).
The first coefficient is \\(\\hat{\\beta}\_0\\), the estimated \\(y\\)\-intercept. The interpretation is that if the high temperature was 0 degrees Fahrenheit, then the estimated ridership would be about \\(\-17\\) riders. This is doubly non\-sensical in this context, since it is impossible to have a negative number of riders and this represents a substantial extrapolation to far colder temperatures than are present in the data set (recall the [*Challenger*](https://en.wikipedia.org/w/index.php?search=Challenger) discussion from Chapter [2](ch-vizI.html#ch:vizI)). It turns out that the monitoring equipment didn’t work when it got too cold, so values for those days are unavailable.
In this case, it is not appropriate to simply multiply
the average number of users on the observed days by the number of days in a year, since cold days that are likely to have fewer trail users are excluded due to instrumentation issues.
Such missing data can lead to selection bias.
The second coefficient (the slope) is usually more interesting. This coefficient (\\(\\hat{\\beta}\_1\\)) is interpreted as the predicted increase in trail users for each additional degree in temperature. We expect to see about 5\.7 additional riders use the rail trail on a day that is 1 degree warmer than another day.
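One way to see this interpretation concretely is to compare predictions for two days whose high temperatures differ by 1 degree (the particular values 70 and 71 are arbitrary):

```
predict(mod, newdata = data.frame(hightemp = c(70, 71)))
# the two predictions differ by the slope, about 5.7 riders
```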
### E.1\.2 Model visualization
Figure [E.1](ch-regression.html#fig:railtrail) allows us to visualize our model in the data space. How does our model compare to a null model? That is, how do we know that our model is useful?
```
library(broom)
mod_avg <- RailTrail %>%
lm(volume ~ 1, data = .) %>%
augment(RailTrail)
mod_temp <- RailTrail %>%
lm(volume ~ hightemp, data = .) %>%
augment(RailTrail)
mod_data <- bind_rows(mod_avg, mod_temp) %>%
mutate(model = rep(c("null", "slr"), each = nrow(RailTrail)))
```
In Figure [E.2](ch-regression.html#fig:regplot1), we compare the least squares regression line (right) with the null model that simply returns the average for every input (left). That is, on the left, the high temperature of the day is ignored: the model simply predicts the average ridership every day, regardless of the temperature. On the right, the model takes the high temperature into account, and accordingly makes a different prediction for each input value.
```
ggplot(data = mod_data, aes(x = hightemp, y = volume)) +
geom_smooth(
data = filter(mod_data, model == "null"),
method = "lm", se = FALSE, formula = y ~ 1,
color = "dodgerblue", size = 0.5
) +
geom_smooth(
data = filter(mod_data, model == "slr"),
method = "lm", se = FALSE, formula = y ~ x,
color = "dodgerblue", size = 0.5
) +
geom_segment(
aes(xend = hightemp, yend = .fitted),
arrow = arrow(length = unit(0.1, "cm")),
size = 0.5, color = "darkgray"
) +
geom_point(color = "dodgerblue") +
facet_wrap(~model)
```
Figure E.2: At left, the model based on the overall average high temperature. At right, the simple linear regression model.
Obviously, the regression model works better than the null model (that forces the slope to be zero), since it is more flexible. But how much better?
### E.1\.3 Measuring the strength of fit
The correlation coefficient, \\(r\\), is used to
quantify the strength of the linear relationship between two variables. We can quantify the proportion of variation in the response variable (\\(y\\)) that is explained by the model in a similar fashion. This quantity is called the [*coefficient of determination*](https://en.wikipedia.org/w/index.php?search=coefficient%20of%20determination) and is denoted \\(R^2\\). It is a common measure of goodness\-of\-fit for regression models.
Like any proportion, \\(R^2\\) is always between 0 and 1\. For simple linear regression (one explanatory variable), \\(R^2 \= r^2\\). The definition of \\(R^2\\) is given by the following expression.
\\\[
\\begin{aligned}
R^2 \&\= 1 \- \\frac{SSE}{SST} \= \\frac{SSM}{SST} \\\\
\&\= 1 \- \\frac{\\sum\_{i\=1}^n (y\_i \- \\hat{y}\_i)^2}{\\sum\_{i\=1}^n (y\_i \- \\bar{y})^2} \\\\
\&\= 1 \- \\frac{SSE}{(n\-1\) Var(y)} \\, ,
\\end{aligned}
\\]
Here, \\(\\hat{y}\\) is the predicted value, \\(Var(y)\\) is the observed variance, \\(SSE\\) is the sum of the squared residuals, \\(SSM\\) is the sum of the squares attributed to the model, and \\(SST\\) is the total sum of the squares.
We can calculate these values for the rail trail example.
```
n <- nrow(RailTrail)
SST <- var(pull(RailTrail, volume)) * (n - 1)
SSE <- var(residuals(mod)) * (n - 1)
1 - SSE / SST
```
```
[1] 0.339
```
```
glance(mod)
```
```
# A tibble: 1 × 12
r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.339 0.332 104. 45.2 1.71e-9 1 -545. 1096. 1103.
# … with 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>
```
In Figure [E.2](ch-regression.html#fig:regplot1), the null model on the left has an \\(R^2\\) of 0, because \\(\\hat{y}\_i \= \\bar{y}\\) for all \\(i\\), and so \\(SSE \= SST\\). On the other hand, the \\(R^2\\) of the regression model on the right is 0\.339\. We say that the regression model based on the daily high temperature explains about 34% of the variation in daily ridership.
### E.1\.4 Categorical explanatory variables
Suppose that instead of using temperature as our explanatory variable for ridership on the rail trail, we only considered whether it was a weekday or not a weekday (e.g., weekend or holiday). The indicator variable `weekday` is [*binary*](https://en.wikipedia.org/w/index.php?search=binary) (or dichotomous) in that it only takes on the values 0 and 1\. (Such variables are sometimes called [*indicator*](https://en.wikipedia.org/w/index.php?search=indicator) variables or more pejoratively [*dummy*](https://en.wikipedia.org/w/index.php?search=dummy) variables.)
This new linear regression model has the form:
\\\[
\\widehat{volume} \= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 \\cdot \\mathrm{weekday} \\,,
\\]
where the fitted coefficients are given below.
```
coef(lm(volume ~ weekday, data = RailTrail))
```
```
(Intercept) weekdayTRUE
430.7 -80.3
```
Note that these coefficients could have been calculated from the means of the two groups (since the regression model has only two possible predicted values). The average ridership on weekdays is 350\.4 while the average on non\-weekdays is 430\.7\.
```
RailTrail %>%
group_by(weekday) %>%
summarize(mean_volume = mean(volume))
```
```
# A tibble: 2 × 2
weekday mean_volume
<lgl> <dbl>
1 FALSE 431.
2 TRUE 350.
```
In the coefficients listed above, the `weekdayTRUE` variable corresponds to rows in which the value of the `weekday` variable was `TRUE` (i.e., weekdays). Because this value is negative, our interpretation is that 80 fewer riders are expected on a weekday as opposed to a weekend or holiday.
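We can verify this equivalence directly (a quick check in base **R**):

```
with(RailTrail, mean(volume[weekday]) - mean(volume[!weekday]))
# about -80.3, matching the weekdayTRUE coefficient
```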
To improve the readability of the output we can create a new variable with more mnemonic values.
```
RailTrail <- RailTrail %>%
mutate(day = ifelse(weekday == 1, "weekday", "weekend/holiday"))
```
Care was needed to recode the `weekday` variable because it was a `factor`. Avoid the use of factors unless they are needed.
```
coef(lm(volume ~ day, data = RailTrail))
```
```
(Intercept) dayweekend/holiday
350.4 80.3
```
The model coefficients have changed (although they still provide the same interpretation). By default, the `lm()` function will pick the alphabetically lowest value of the categorical predictor as the [*reference group*](https://en.wikipedia.org/w/index.php?search=reference%20group) and create indicators for the other levels (in this case `dayweekend/holiday`). As a result the intercept is now the predicted number of trail crossings on a `weekday`.
In either formulation, the interpretation of the model remains the same: On a weekday, 80 fewer riders are expected than on a weekend or holiday.
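If a different reference group is preferred, the level can be set explicitly. Here is a sketch using the base `relevel()` function (this changes only the parameterization, not the fit):

```
RailTrail %>%
  mutate(day = relevel(factor(day), ref = "weekend/holiday")) %>%
  lm(volume ~ day, data = .) %>%
  coef()
# the intercept is now about 430.7 (the weekend/holiday mean),
# and the dayweekday coefficient is about -80.3
```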
E.2 Multiple regression
-----------------------
Multiple regression is a natural extension of simple linear regression that incorporates multiple explanatory (or predictor) variables. It has the general form:
\\\[
y \= \\beta\_0 \+ \\beta\_1 x\_1 \+ \\beta\_2 x\_2 \+ \\cdots \+ \\beta\_p x\_p \+ \\epsilon, \\text{ where } \\epsilon \\sim N(0, \\sigma\_\\epsilon) \\,.
\\]
The estimated coefficients (i.e., \\(\\hat{\\beta}\_i\\)’s) are now interpreted as “conditional on” the other variables—each \\(\\beta\_i\\) reflects the *predicted* change in \\(y\\) associated with a one\-unit increase in \\(x\_i\\), conditional upon the rest of the \\(x\_i\\)’s.
This type of model can help to disentangle more complex relationships between three or more variables.
The value of \\(R^2\\) from a multiple regression model has the same interpretation as before: the proportion of variability explained by the model.
Interpreting conditional regression parameters can be challenging. The analyst needs to ensure
that comparisons that hold other factors constant do not involve extrapolations beyond the
observed data.
### E.2\.1 Parallel slopes: Multiple regression with a categorical variable
Consider first the case where \\(x\_2\\) is an [*indicator*](https://en.wikipedia.org/w/index.php?search=indicator) variable that can only be 0 or 1 (e.g., `weekday`). Then,
\\\[
\\hat{y} \= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \+ \\hat{\\beta}\_2 x\_2 \\,.
\\]
In the case where \\(x\_1\\) is quantitative but \\(x\_2\\) is an indicator variable, we have:
\\\[
\\begin{aligned}
\\text{For weekends, } \\qquad \\hat{y} \|\_{ x\_1, x\_2 \= 0} \&\= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \\\\
\\text{For weekdays, } \\qquad \\hat{y} \|\_{ x\_1, x\_2 \= 1} \&\= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \+ \\hat{\\beta}\_2 \\cdot 1 \\\\
\&\= \\left( \\hat{\\beta}\_0 \+ \\hat{\\beta}\_2 \\right) \+ \\hat{\\beta}\_1 x\_1 \\, .
\\end{aligned}
\\]
This is called a [*parallel slopes*](https://en.wikipedia.org/w/index.php?search=parallel%20slopes) model (see Figure [E.3](ch-regression.html#fig:parallel-slopes)), since the predicted values of the model take the geometric shape of two parallel lines with slope \\(\\hat{\\beta}\_1\\): one with \\(y\\)\-intercept \\(\\hat{\\beta}\_0\\) for weekends, and another with \\(y\\)\-intercept \\(\\hat{\\beta}\_0 \+ \\hat{\\beta}\_2\\) for weekdays.
```
mod_parallel <- lm(volume ~ hightemp + weekday, data = RailTrail)
tidy(mod_parallel)
```
```
# A tibble: 3 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 42.8 64.3 0.665 0.508
2 hightemp 5.35 0.846 6.32 0.0000000109
3 weekdayTRUE -51.6 23.7 -2.18 0.0321
```
```
glance(mod_parallel)
```
```
# A tibble: 1 × 12
r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.374 0.359 102. 25.9 1.46e-9 2 -542. 1093. 1103.
# … with 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>
```
```
mod_parallel %>%
augment() %>%
ggplot(aes(x = hightemp, y = volume, color = weekday)) +
geom_point() +
geom_line(aes(y = .fitted)) +
labs(color = "Is it a\nweekday?")
```
Figure E.3: Visualization of parallel slopes model for the rail trail data.
### E.2\.2 Parallel planes: Multiple regression with a second quantitative variable
If \\(x\_2\\) is a quantitative variable, then we have:
\\\[
\\hat{y} \= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \+ \\hat{\\beta}\_2 x\_2 \\,.
\\]
Notice that our model is no longer a line, rather it is a *plane* that exists in three dimensions.
Now suppose that we want to improve our model for ridership by considering not only the high temperature, but also the amount of precipitation (rain or snow, measured in inches). We can do this in **R** by simply adding this variable to our regression model.
```
mod_plane <- lm(volume ~ hightemp + precip, data = RailTrail)
tidy(mod_plane)
```
```
# A tibble: 3 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -31.5 55.2 -0.571 5.70e- 1
2 hightemp 6.12 0.794 7.70 1.97e-11
3 precip -153. 39.3 -3.90 1.90e- 4
```
Note that the coefficient on `hightemp` (6\.1 riders per degree) has changed from its value in the simple linear regression model (5\.7 riders per degree). This is due to the moderating effect of precipitation. Our interpretation is that for each additional degree in temperature, we expect an additional 6\.1 riders on the rail trail, after controlling for the amount of precipitation.
As you can imagine, the effect of precipitation is strong: some people may be less likely to bike or walk in the rain. Thus, even after controlling for temperature, an inch of rainfall is associated with a drop in ridership of about 153 riders.
Note that since the median precipitation on days when there was precipitation was only 0\.15 inches, a predicted change for a full additional inch may be misleading. It may be better to report the predicted difference associated with an additional 0\.15 inches, or to replace the continuous term in the model with a dichotomous indicator of any precipitation (a sketch of the latter appears below).
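As a sketch of the latter option (the indicator name `any_precip` and the model object name are our own):

```
mod_rain <- RailTrail %>%
  mutate(any_precip = precip > 0) %>%
  lm(volume ~ hightemp + any_precip, data = .)
tidy(mod_rain)
```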
If we added all three explanatory variables to the model we would have parallel [*planes*](https://en.wikipedia.org/w/index.php?search=planes).
```
mod_p_planes <- lm(volume ~ hightemp + precip + weekday, data = RailTrail)
tidy(mod_p_planes)
```
```
# A tibble: 4 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 19.3 60.3 0.320 7.50e- 1
2 hightemp 5.80 0.799 7.26 1.59e-10
3 precip -146. 38.9 -3.74 3.27e- 4
4 weekdayTRUE -43.1 22.2 -1.94 5.52e- 2
```
### E.2\.3 Non\-parallel slopes: Multiple regression with interaction
Let’s return to a model that includes `weekday` and `hightemp` as predictors.
What if the parallel lines model doesn’t fit well?
Adding an additional term into the model can make it more flexible
and allow there to be a different slope on the two different types of days.
\\\[
\\hat{y} \= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \+ \\hat{\\beta}\_2 x\_2 \+ \\hat{\\beta}\_3 x\_1 x\_2 \\,.
\\]
The model can also be described separately for weekends/holidays and weekdays.
\\\[
\\begin{aligned}
\\text{For weekends, } \\qquad \\hat{y} \|\_{ x\_1, x\_2 \= 0} \&\= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \\\\
\\text{For weekdays, } \\qquad \\hat{y} \|\_{ x\_1, x\_2 \= 1} \&\= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \+ \\hat{\\beta}\_2 \\cdot 1 \+ \\hat{\\beta}\_3 \\cdot x\_1\\\\
\&\= \\left( \\hat{\\beta}\_0 \+ \\hat{\\beta}\_2 \\right) \+ \\left( \\hat{\\beta}\_1 \+ \\hat{\\beta}\_3 \\right) x\_1 \\, .
\\end{aligned}
\\]
This is called an [*interaction model*](https://en.wikipedia.org/w/index.php?search=interaction%20model) (see Figure [E.4](ch-regression.html#fig:interact)).
The predicted values of the model take the geometric shape of two non\-parallel lines with different slopes.
```
mod_interact <- lm(volume ~ hightemp + weekday + hightemp * weekday,
data = RailTrail)
tidy(mod_interact)
```
```
# A tibble: 4 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 135. 108. 1.25 0.215
2 hightemp 4.07 1.47 2.78 0.00676
3 weekdayTRUE -186. 129. -1.44 0.153
4 hightemp:weekdayTRUE 1.91 1.80 1.06 0.292
```
```
glance(mod_interact)
```
```
# A tibble: 1 × 12
r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.382 0.360 102. 17.7 4.96e-9 3 -542. 1094. 1106.
# … with 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>
```
```
mod_interact %>%
augment() %>%
ggplot(aes(x = hightemp, y = volume, color = weekday)) +
geom_point() +
geom_line(aes(y = .fitted)) +
labs(color = "Is it a\nweekday?")
```
Figure E.4: Visualization of interaction model for the rail trail data.
We see that the slope on weekdays is about two riders per degree higher than on weekends and holidays.
This may indicate that trail users on weekends
and holidays are less concerned about the temperature than on weekdays.
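The two slopes can be recovered directly from the fitted coefficients (a quick check):

```
coefs <- coef(mod_interact)
coefs["hightemp"]                                   # weekend/holiday slope, about 4.1
coefs["hightemp"] + coefs["hightemp:weekdayTRUE"]   # weekday slope, about 6.0
```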
### E.2\.4 Modeling non\-linear relationships
A linear model with a single parameter fits well in many situations but is not appropriate in others. Consider modeling height (in centimeters) as a function of age (in years) using data
from a subset of female subjects included in the [*National Health and Nutrition Examination Study*](https://en.wikipedia.org/w/index.php?search=National%20Health%20and%20Nutrition%20Examination%20Study) (from the **NHANES** package) with a linear term.
Another approach uses a
[*smoother*](https://en.wikipedia.org/w/index.php?search=smoother) instead of a linear model.
Unlike the straight line, the smoother can bend to better fit the points when modeling the
functional form of a relationship (see Figure [E.5](ch-regression.html#fig:ageheightmod)).
```
library(NHANES)
NHANES %>%
sample(300) %>%
filter(Gender == "female") %>%
ggplot(aes(x = Age, y = Height)) +
geom_point() +
geom_smooth(method = lm, se = FALSE) +
geom_smooth(method = loess, se = FALSE, color = "green") +
xlab("Age (in years)") +
ylab("Height (in cm)")
```
Figure E.5: Scatterplot of height as a function of age with superimposed linear model (blue) and smoother (green).
The fit of the linear model (denoted in blue) is poor: A straight line does not account for the dramatic increases in height from childhood
to young adulthood or for the gradual decline in height among older subjects. The smoother (in green) does a much better job of describing the functional form.
The improved fit does come with a cost. Compare the results for linear and smoothed models in Figure [E.6](ch-regression.html#fig:railtrailsmooth). Here the functional form of the relationship between high temperature and
volume of trail use is closer to linear (with some deviation for warmer temperatures).
```
ggplot(data = RailTrail, aes(x = hightemp, y = volume)) +
geom_point() +
geom_smooth(method = lm) +
geom_smooth(method = loess, color = "green") +
ylab("Number of trail crossings") +
xlab("High temperature (F)")
```
Figure E.6: Scatterplot of volume as a function of high temperature with superimposed linear and smooth models for the rail trail data.
The confidence bands (the 95% confidence interval at each point) for the smoother tend to be wider than those for the linear model.
This is one of the costs of the additional flexibility in modeling.
Another cost is interpretation: It is more complicated to explain the results from the smoother than to interpret a slope coefficient (straight line).
E.3 Inference for regression
----------------------------
Thus far, we have fit several models and interpreted their estimated coefficients. However,
with the exception of the confidence bands in Figure [E.6](ch-regression.html#fig:railtrailsmooth),
we have only made statements about the estimated coefficients (i.e., the \\(\\hat{\\beta}\\)’s)—we have made no statements about the true coefficients (i.e., the \\(\\beta\\)’s), the values of which of course remain unknown.
However, we can use our understanding of the \\(t\\)\-distribution to make [*inferences*](https://en.wikipedia.org/w/index.php?search=inferences) about the true value of regression coefficients. In particular, we can test a hypothesis about \\(\\beta\_1\\) (most commonly that it is equal to zero) and find a confidence interval (range of plausible values) for it.
```
tidy(mod_p_planes)
```
```
# A tibble: 4 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 19.3 60.3 0.320 7.50e- 1
2 hightemp 5.80 0.799 7.26 1.59e-10
3 precip -146. 38.9 -3.74 3.27e- 4
4 weekdayTRUE -43.1 22.2 -1.94 5.52e- 2
```
```
glance(mod_p_planes)
```
```
# A tibble: 1 × 12
r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.461 0.443 95.2 24.6 1.44e-11 3 -536. 1081. 1094.
# … with 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>
```
In the output above, the p\-value that is associated with the `hightemp` coefficient is displayed as 1\.59e\-10 (or nearly zero).
That is, if the true coefficient (\\(\\beta\_1\\)) were in fact zero, then the probability of observing an association between ridership and high temperature as large as or larger than the one we actually observed in the data, after controlling for precipitation and day of the week, is essentially zero.
This output suggests that the hypothesis that \\(\\beta\_1\\) is in fact zero is dubious based on these data.
Perhaps there is a real association between ridership and high temperature?
Very small p\-values should be rounded to the nearest 0\.0001\. We suggest reporting this p\-value as \\(p\<0\.0001\\).
Another way of thinking about this process is to form a confidence interval around our estimate of the slope coefficient \\(\\hat{\\beta}\_1\\). Here we can say with 95% confidence that the value of the true coefficient \\(\\beta\_1\\) is between 4\.21 and 7\.39 riders per degree. That this interval does not contain zero confirms the result from the hypothesis test.
```
confint(mod_p_planes)
```
```
2.5 % 97.5 %
(Intercept) -100.63 139.268
hightemp 4.21 7.388
precip -222.93 -68.291
weekdayTRUE -87.27 0.976
```
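The interval for `hightemp` can also be constructed by hand from the estimate, its standard error, and the appropriate \\(t\\) quantile (a quick sketch):

```
est <- coef(mod_p_planes)["hightemp"]
se <- coef(summary(mod_p_planes))["hightemp", "Std. Error"]
est + c(-1, 1) * qt(0.975, df = df.residual(mod_p_planes)) * se
# roughly 4.21 to 7.39, matching the confint() output above
```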
E.4 Assumptions underlying regression
-------------------------------------
The inferences we made above were predicated upon our assumption that the slope follows a \\(t\\)\-distribution. This follows from the assumption that the errors
follow a normal distribution (with mean 0 and standard deviation \\(\\sigma\_\\epsilon\\), for some constant \\(\\sigma\_\\epsilon\\)). Inferences from the model are only valid if the following assumptions hold:
* **L**inearity: The functional form of the relationship between the predictors and the outcome follows a linear combination of regression parameters that are correctly specified (this assumption can be verified by bivariate graphical displays).
* **I**ndependence: Are the errors uncorrelated? Or do they follow a pattern (perhaps over time or within clusters of subjects)?
* **N**ormality of residuals: Do the residuals follow a distribution that is approximately normal? This assumption can be verified using univariate displays.
* **E**qual variance of residuals: Is the variance in the residuals constant across the explanatory variables ([*homoscedastic errors*](https://en.wikipedia.org/w/index.php?search=homoscedastic%20errors))? Or does the variance in the residuals depend on the value of one or more of the explanatory variables ([*heteroscedastic errors*](https://en.wikipedia.org/w/index.php?search=heteroscedastic%20errors))? This assumption can be verified using residual diagnostics.
These conditions are sometimes called the “LINE” assumptions.
All but the independence assumption can be assessed using diagnostic plots.
How might we assess the `mod_p_planes` model?
Figure [E.7](ch-regression.html#fig:plotmod3a) displays a scatterplot of residuals versus fitted (predicted) values.
As we observed in Figure [E.6](ch-regression.html#fig:railtrailsmooth), the number of crossings does not increase as much for warm temperatures as it does for more moderate ones. We may need to consider a more sophisticated model that allows a non\-linear relationship with temperature (one possible sketch appears after Figure E.7 below).
```
mplot(mod_p_planes, which = 1, system = "ggplot2")
```
Figure E.7: Assessing linearity using a scatterplot of residuals versus fitted (predicted) values.
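As noted above, the curvature in Figure E.7 suggests that a single linear temperature term may be too rigid. One possible sketch (our own, not the authors' model) adds a quadratic temperature term; whether it actually improves the fit would need to be assessed:

```
mod_quad <- lm(volume ~ poly(hightemp, 2) + precip + weekday, data = RailTrail)
glance(mod_quad)  # compare fit statistics with glance(mod_p_planes)
```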
Figure [E.8](ch-regression.html#fig:plotmod3b) displays the quantile\-quantile plot for the residuals from the
regression model. The plot deviates from the straight line: This indicates that the residuals
have heavier tails than a normal distribution.
```
mplot(mod_p_planes, which = 2, system = "ggplot2")
```
Figure E.8: Assessing normality assumption using a Q–Q plot.
Figure [E.9](ch-regression.html#fig:plotmod3c) displays the scale\-location plot for the residuals from the model: The results indicate that there is evidence of heteroscedasticity (the variance of the residuals increases as a function of predicted value).
```
mplot(mod_p_planes, which = 3, system = "ggplot2")
```
Figure E.9: Assessing equal variance using a scale–location plot.
When performing model diagnostics,
it is important to identify any outliers and understand their role in determining the regression coefficients.
* We call observations that don’t seem to fit the general pattern of the data [*outliers*](https://en.wikipedia.org/w/index.php?search=outliers).
Figures [E.7](ch-regression.html#fig:plotmod3a), [E.8](ch-regression.html#fig:plotmod3b), and [E.9](ch-regression.html#fig:plotmod3c) mark three points (8, 18, and 34\) as observations with large negative or positive residuals that merit further exploration.
* An observation with an extreme value of the explanatory variable is a point of high [*leverage*](https://en.wikipedia.org/w/index.php?search=leverage).
* A high leverage point that exerts disproportionate influence on the slope of the regression line is an [*influential point*](https://en.wikipedia.org/w/index.php?search=influential%20point).
Figure [E.10](ch-regression.html#fig:plotmod3d) displays the values for [*Cook’s distance*](https://en.wikipedia.org/w/index.php?search=Cook's%20distance) (a common measure of
influential points in a regression model).
```
mplot(mod_p_planes, which = 4, system = "ggplot2")
```
Figure E.10: Cook’s distance for rail trail model.
We can use the `augment()` function from the **broom** package to calculate the value of
this statistic and identify the most extreme Cook’s distance.
```
library(broom)
augment(mod_p_planes) %>%
mutate(row_num = row_number()) %>%
select(-.std.resid, -.sigma) %>%
filter(.cooksd > 0.4)
```
```
# A tibble: 1 × 9
volume hightemp precip weekday .fitted .resid .hat .cooksd row_num
<int> <int> <dbl> <lgl> <dbl> <dbl> <dbl> <dbl> <int>
1 388 84 1.49 TRUE 246. 142. 0.332 0.412 65
```
Observation 65 has the highest Cook’s distance.
It references a day with nearly one and a half inches of rain (the most recorded in the dataset) and a high temperature of 84 degrees.
This data point has high leverage and is influential on the results.
Observations 4 and 34 also have relatively high Cook’s distances and may merit further exploration.
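A common sensitivity check is to refit the model without the most influential observation and compare the coefficients (a sketch; the object name `mod_sans_65` is ours, and dropping the point is for comparison only, not a recommendation):

```
mod_sans_65 <- RailTrail %>%
  slice(-65) %>%
  lm(volume ~ hightemp + precip + weekday, data = .)
coef(mod_p_planes)  # original fit
coef(mod_sans_65)   # fit without the high-leverage rainy day
```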
E.5 Logistic regression
-----------------------
Our previous examples had quantitative (or continuous) outcomes.
What happens when we are interested in modeling a dichotomous outcome?
For example, we might model the probability of developing diabetes as a function of age and BMI (we explored this question further in Chapter [11](ch-learningI.html#ch:learningI)).
Figure [E.11](ch-regression.html#fig:diabeteslogreg) displays the scatterplot of diabetes status as a function of age, while
Figure [E.12](ch-regression.html#fig:diabeteslogreg2) displays the scatterplot of diabetes as a function of BMI (body mass index).
Note that each subject can either have diabetes or not, so all of the points are displayed at 0 or 1 on the \\(y\\)\-axis.
```
NHANES <- NHANES %>%
mutate(has_diabetes = as.numeric(Diabetes == "Yes"))
```
```
log_plot <- ggplot(data = NHANES, aes(x = Age, y = has_diabetes)) +
geom_jitter(alpha = 0.1, height = 0.05) +
geom_smooth(method = "glm", method.args = list(family = "binomial")) +
ylab("Diabetes status") +
xlab("Age (in years)")
log_plot
```
Figure E.11: Scatterplot of diabetes as a function of age with superimposed smoother.
```
log_plot + aes(x = BMI) + xlab("BMI (body mass index)")
```
Figure E.12: Scatterplot of diabetes as a function of BMI with superimposed smoother.
We see that the probability that a subject has diabetes tends to increase as both a function of age and of BMI.
Which variable is more important: `Age` or `BMI`?
We can use a multiple logistic regression model to model the probability of diabetes as a function of both predictors.
```
logreg <- glm(has_diabetes ~ BMI + Age, family = "binomial", data = NHANES)
tidy(logreg)
```
```
# A tibble: 3 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -8.08 0.244 -33.1 1.30e-239
2 BMI 0.0943 0.00552 17.1 1.74e- 65
3 Age 0.0573 0.00249 23.0 2.28e-117
```
The answer is that both predictors seem to be important (since both p\-values are very small).
To interpret the findings, we might consider a visual display of predicted probabilities as displayed in Figure [E.13](ch-regression.html#fig:plotFundiabetes) (compare with Figure [11\.11](ch-learningI.html#fig:mod-compare)).
```
library(modelr)
fake_grid <- data_grid(
NHANES,
Age = seq_range(Age, 100),
BMI = seq_range(BMI, 100)
)
y_hats <- fake_grid %>%
mutate(y_hat = predict(logreg, newdata = ., type = "response"))
head(y_hats, 1)
```
```
# A tibble: 1 × 3
Age BMI y_hat
<dbl> <dbl> <dbl>
1 0 12.9 0.00104
```
The predicted probability from the model is given by:
\\\[
\\pi\_i \= P (y\_i \= 1\) \= \\frac{e^{\\beta\_0 \+ \\beta\_1 Age\_i \+ \\beta\_2 BMI\_i}}{1 \+ e^{\\beta\_0 \+ \\beta\_1 Age\_i \+ \\beta\_2 BMI\_i}}\\ \\ \\text{ for } i\=1,\\ldots,n \\,,
\\]
or, equivalently, \\(\\mathrm{logit}(\\pi\_i) \= \\log\\left( \\pi\_i / (1 \- \\pi\_i) \\right) \= \\beta\_0 \+ \\beta\_1 Age\_i \+ \\beta\_2 BMI\_i\\).
Let’s consider a hypothetical 0\-year\-old with a BMI of 12\.9 (corresponding to the first entry in the `y_hats` data frame).
Their predicted probability of having diabetes would be calculated as a function of the regression coefficients.
```
linear_component <- c(1, 12.9, 0) %*% coef(logreg)
exp(linear_component) / (1 + exp(linear_component))
```
```
[,1]
[1,] 0.00104
```
The predicted probability is very small: about 1/10th of 1%.
But what about a 60\-year\-old with a BMI of 25?
```
linear_component <- c(1, 25, 60) %*% coef(logreg)
exp(linear_component) / (1 + exp(linear_component))
```
```
[,1]
[1,] 0.0923
```
The predicted probability is now 9\.2%.
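The same predicted probabilities can be obtained directly with `predict()` (a quick check of the two calculations above):

```
predict(
  logreg,
  newdata = data.frame(Age = c(0, 60), BMI = c(12.9, 25)),
  type = "response"
)
# approximately 0.001 and 0.092, matching the values computed above
```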
```
ggplot(data = NHANES, aes(x = Age, y = BMI)) +
geom_tile(data = y_hats, aes(fill = y_hat), color = NA) +
geom_count(aes(color = factor(has_diabetes)), alpha = 0.8) +
scale_fill_gradient(low = "white", high = "red") +
scale_color_manual("Diabetes", values = c("gold", "black")) +
scale_size(range = c(0, 2)) +
labs(fill = "Predicted\nprobability")
```
Figure E.13: Predicted probabilities for diabetes as a function of BMI and age.
Figure [E.13](ch-regression.html#fig:plotFundiabetes) displays the predicted probabilities for each of our grid points.
We see that very few young adults have diabetes, even if they have moderately high BMI scores.
As we look at older subjects while holding BMI fixed, the predicted probability of diabetes increases.
E.6 Further resources
---------------------
Regression is described in many books. An introduction is found in most introductory statistics textbooks, including *Open Intro Statistics* (Diez, Barr, and Çetinkaya\-Rundel 2019\).
For a deeper but still accessible treatment, we suggest Cannon et al. (2019\).
G. James et al. (2013\) and Hastie, Tibshirani, and Friedman (2009\) also cover regression from a modeling and machine learning perspective.
Hoaglin (2016\) provides guidance on how to interpret conditional regression parameters.
Cook (1982\) comprehensively reviews regression diagnostics.
An accessible introduction to smoothing can be found in Ruppert, Wand, and Carroll (2003\).
E.7 Exercises
-------------
**Problem 1 (Easy)**: In 1966, [Cyril Burt](http://en.wikipedia.org/wiki/Cyril_Burt) published a paper called *The genetic determination of differences in intelligence: A study of monozygotic twins reared apart.* The data consist of IQ scores for \[an assumed random sample of] 27 identical twins, one raised by foster parents, the other by the biological parents.
Here is the regression output for using `Biological` IQ to predict `Foster` IQ:
```
library(mdsr)
library(faraway)
mod <- lm(Foster ~ Biological, data = twins)
coef(mod)
```
```
(Intercept) Biological
9.208 0.901
```
```
mosaic::rsquared(mod)
```
```
[1] 0.778
```
Which of the following is **FALSE**? Justify your answers.
* Alice and Beth were raised by their biological parents. If Beth’s IQ is 10 points higher than Alice’s, then we would expect that her foster twin Bernice’s IQ is 9 points higher than the IQ of Alice’s foster twin Ashley.
* Roughly 78% of the foster twins’ IQs can be accurately predicted by the model.
* The linear model is \\(\\widehat{Foster} \= 9\.2 \+ 0\.9 \\times Biological\\).
* Foster twins with IQs higher than average are expected to have biological twins with higher than average IQs as well.
**Problem 2 (Medium)**: The `atus` package includes data from the American Time Use Survey (ATUS). Use the `atusresp` dataset to model `hourly_wage` as a function of other predictors in the dataset.
**Problem 3 (Medium)**: The `Gestation` data set in `mdsr` contains birth weight, date, and gestational period collected as part of the Child Health and Development Studies. Information about the baby’s parents—age, education, height, weight, and whether the mother smoked is also recorded.
1. Fit a linear regression model for birthweight (`wt`) as a function of the mother’s age (`age`).
2. Find a 95% confidence interval and p\-value for the slope coefficient.
3. What do you conclude about the association between a mother’s age and her baby’s birthweight?
**Problem 4 (Medium)**: The Child Health and Development Studies investigate a range of topics. One study, in particular, considered all pregnancies among women in the Kaiser Foundation Health Plan in the San Francisco East Bay area. The goal is to model the weight of the infants (`bwt`, in ounces) using variables including length of pregnancy in days (`gestation`), mother’s age in years (`age`), mother’s height in inches (`height`), whether the child was the first born (`parity`), mother’s pregnancy weight in pounds (`weight`), and whether the mother was a smoker (`smoke`).
The summary table that follows shows the results of a regression model for predicting the average birth weight of babies based on all of the variables included in the data set.
```
library(mdsr)
library(mosaicData)
babies <- Gestation %>%
rename(bwt = wt, height = ht, weight = wt.1) %>%
mutate(parity = parity == 0, smoke = smoke > 0) %>%
select(id, bwt, gestation, age, height, weight, parity, smoke)
mod <- lm(bwt ~ gestation + age + height + weight + parity + smoke,
data = babies
)
coef(mod)
```
```
(Intercept) gestation age height weight parityTRUE
-85.7875 0.4601 0.0429 1.0623 0.0653 -2.9530
smokeTRUE
NA
```
Answer the following questions regarding this linear regression model.
1. The coefficient for `parity` is different than if you fit a linear model predicting weight using only that variable. Why might there be a difference?
2. Calculate the residual for the first observation in the data set.
3. This data set contains missing values. What happens to these rows when we fit the model?
**Problem 5 (Medium)**: Investigators in the HELP (Health Evaluation and Linkage to Primary Care) study were interested in modeling predictors of being `homeless` (one or more nights spent on the street or in a shelter in the past six months vs. housed) using baseline data from the clinical trial. Fit and interpret a parsimonious model that would help the investigators identify predictors of homelessness.
E.8 Supplementary exercises
---------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-regression.html\#regression\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-regression.html#regression-online-exercises)
**Problem 1 (Medium)**: In the HELP (Health Evaluation and Linkage to Primary Care) study, investigators were interested in determining predictors of severe depressive symptoms (measured by the Center for Epidemiologic Studies—Depression scale, `cesd`) amongst a cohort enrolled at a substance abuse treatment facility. These predictors include `substance` of abuse (alcohol, cocaine, or heroin), `mcs` (a measure of mental well\-being), gender, and housing status (housed or homeless). Answer the following questions regarding the following multiple regression model.
```
library(mdsr)
fm <- lm(cesd ~ substance + mcs + sex + homeless, data = HELPrct)
msummary(fm)
```
```
Estimate Std. Error t value Pr(>|t|)
(Intercept) 57.7794 1.4664 39.40 <2e-16 ***
substancecocaine -3.5406 1.0101 -3.51 0.0005 ***
substanceheroin -1.6818 1.0731 -1.57 0.1178
mcs -0.6407 0.0338 -18.97 <2e-16 ***
sexmale -3.3239 1.0075 -3.30 0.0010 **
homelesshoused -0.8327 0.8686 -0.96 0.3383
Residual standard error: 8.97 on 447 degrees of freedom
Multiple R-squared: 0.492, Adjusted R-squared: 0.486
F-statistic: 86.4 on 5 and 447 DF, p-value: <2e-16
```
```
confint(fm)
```
```
2.5 % 97.5 %
(Intercept) 54.898 60.661
substancecocaine -5.526 -1.555
substanceheroin -3.791 0.427
mcs -0.707 -0.574
sexmale -5.304 -1.344
homelesshoused -2.540 0.874
```
* Write out the linear model.
* Calculate the predicted CESD for a female homeless cocaine\-involved subject with
an MCS score of 20\.
* Identify the null and alternative hypotheses for the 8 tests displayed above.
* Interpret the 95% confidence interval for the `substancecocaine` coefficient.
* Make a conclusion and summarize the results of a test of the `homeless` parameter.
* Report and interpret the \\(R^2\\) (coefficient of determination) for this model.
* Which of the residual diagnostic plots are redundant?
* What do we conclude about the distribution of the residuals?
* What do we conclude about the relationship between the fitted values and the residuals?
* What do we conclude about the relationship between the MCS score and the residuals?
* What other things can we learn from the residual diagnostics?
* Which observations should we flag for further study?
---
E.1 Simple linear regression
----------------------------
Linear regression can help us understand how values of a quantitative (numerical) outcome (or response) are associated with values of a quantitative explanatory (or predictor) variable.
This technique is often applied in two ways: to generate predicted values or to make inferences regarding associations in the dataset.
In some disciplines the outcome is called the dependent variable and the predictor the independent variable. We avoid such usage since the words dependent and independent have many meanings in statistics.
A simple linear regression model for an outcome \\(y\\) as a function of a predictor \\(x\\) takes the form:
\\\[
y\_i \= \\beta\_0 \+ \\beta\_1 x\_i \+ \\epsilon\_i \\,, \\text{ for } i\=1,\\ldots,n \\,,
\\]
where \\(n\\) represents the number of observations (rows) in the data set.
For this model, \\(\\beta\_0\\) is the population parameter corresponding to the [*intercept*](https://en.wikipedia.org/w/index.php?search=intercept) (i.e., the predicted value when \\(x\=0\\)) and \\(\\beta\_1\\) is the true (population) [*slope*](https://en.wikipedia.org/w/index.php?search=slope) coefficient (i.e., the predicted increase in \\(y\\) for a unit increase in \\(x\\)). The \\(\\epsilon\_i\\)’s are the [*errors*](https://en.wikipedia.org/w/index.php?search=errors) (these are assumed to be random noise with mean 0\).
We almost never know the true values of the population parameters \\(\\beta\_0\\) and \\(\\beta\_1\\), but we estimate them using data from our sample. The `lm()` function finds the “best” coefficients \\(\\hat{\\beta}\_0\\) and \\(\\hat{\\beta}\_1\\) where
the [*fitted values*](https://en.wikipedia.org/w/index.php?search=fitted%20values) (or expected values) are given by
\\(\\hat{y}\_i \= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_i\\).
What is left over is captured by the [*residuals*](https://en.wikipedia.org/w/index.php?search=residuals) (\\(\\hat{\\epsilon}\_i \= y\_i \- \\hat{y}\_i\\)).
The model almost never fits perfectly—if it did there would be no need for a model.
The best\-fitting
regression line is usually determined by a [*least squares*](https://en.wikipedia.org/w/index.php?search=least%20squares) criteria that minimizes the sum of the squared residuals (\\(\\epsilon\_i^2\\)).
The least squares regression line (defined by the values of \\(\\hat{\\beta\_0}\\) and \\(\\hat{\\beta}\_1\\)) is unique.
### E.1\.1 Motivating example: Modeling usage of a rail trail
The [*Pioneer Valley*](https://en.wikipedia.org/w/index.php?search=Pioneer%20Valley) Planning Commission (PVPC) collected data north of Chestnut Street in [*Florence, Massachusetts*](https://en.wikipedia.org/w/index.php?search=Florence,%20Massachusetts) for a 90\-day period.
Data collectors set up a laser sensor that recorded when a rail\-trail user passed the data collection station.
The data are available in the `RailTrail` data set in the **mosaicData** package.
```
library(tidyverse)
library(mdsr)
library(mosaic)
glimpse(RailTrail)
```
```
Rows: 90
Columns: 11
$ hightemp <int> 83, 73, 74, 95, 44, 69, 66, 66, 80, 79, 78, 65, 41, 59,…
$ lowtemp <int> 50, 49, 52, 61, 52, 54, 39, 38, 55, 45, 55, 48, 49, 35,…
$ avgtemp <dbl> 66.5, 61.0, 63.0, 78.0, 48.0, 61.5, 52.5, 52.0, 67.5, 6…
$ spring <int> 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1…
$ summer <int> 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0…
$ fall <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0…
$ cloudcover <dbl> 7.6, 6.3, 7.5, 2.6, 10.0, 6.6, 2.4, 0.0, 3.8, 4.1, 8.5,…
$ precip <dbl> 0.00, 0.29, 0.32, 0.00, 0.14, 0.02, 0.00, 0.00, 0.00, 0…
$ volume <int> 501, 419, 397, 385, 200, 375, 417, 629, 533, 547, 432, …
$ weekday <lgl> TRUE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE, FALSE…
$ dayType <chr> "weekday", "weekday", "weekday", "weekend", "weekday", …
```
The PVPC wants to understand the relationship between daily ridership (i.e., the number of riders and walkers who use the bike path on any given day) and a collection of explanatory variables, including the temperature, rainfall, cloud cover, and day of the week.
In a simple linear regression model, there
is a single quantitative explanatory variable.
It seems reasonable that the high temperature for the day (`hightemp`, measured in degrees Fahrenheit) might be related to ridership, so we will explore that first.
Figure [E.1](ch-regression.html#fig:railtrail) shows a scatterplot between ridership (`volume`) and high temperature (`hightemp`), with the simple linear regression line overlaid.
The fitted coefficients are calculated through a call to the `lm()` function.
We will use functions from the **broom** package to display model results in a tidy fashion.
```
mod <- lm(volume ~ hightemp, data = RailTrail)
library(broom)
tidy(mod)
```
```
# A tibble: 2 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -17.1 59.4 -0.288 0.774
2 hightemp 5.70 0.848 6.72 0.00000000171
```
```
ggplot(RailTrail, aes(x = hightemp, y = volume)) +
geom_point() +
geom_smooth(method = "lm", se = FALSE)
```
Figure E.1: Scatterplot of number of trail crossings as a function of highest daily temperature (in degrees Fahrenheit).
The first coefficient is \\(\\hat{\\beta}\_0\\), the estimated \\(y\\)\-intercept. The interpretation is that if the high temperature was 0 degrees Fahrenheit, then the estimated ridership would be about \\(\-17\\) riders. This is doubly non\-sensical in this context, since it is impossible to have a negative number of riders and this represents a substantial extrapolation to far colder temperatures than are present in the data set (recall the [*Challenger*](https://en.wikipedia.org/w/index.php?search=Challenger) discussion from Chapter [2](ch-vizI.html#ch:vizI)). It turns out that the monitoring equipment didn’t work when it got too cold, so values for those days are unavailable.
In this case, it is not appropriate to simply multiply
the average number of users on the observed days by the number of days in a year, since cold days that are likely to have fewer trail users are excluded due to instrumentation issues.
Such missing data can lead to selection bias.
The second coefficient (the slope) is usually more interesting. This coefficient (\\(\\hat{\\beta}\_1\\)) is interpreted as the predicted increase in trail users for each additional degree in temperature. We expect to see about 5\.7 additional riders use the rail trail on a day that is 1 degree warmer than another day.
### E.1\.2 Model visualization
Figure [E.1](ch-regression.html#fig:railtrail) allows us to visualize our model in the data space. How does our model compare to a null model? That is, how do we know that our model is useful?
```
library(broom)
mod_avg <- RailTrail %>%
lm(volume ~ 1, data = .) %>%
augment(RailTrail)
mod_temp <- RailTrail %>%
lm(volume ~ hightemp, data = .) %>%
augment(RailTrail)
mod_data <- bind_rows(mod_avg, mod_temp) %>%
mutate(model = rep(c("null", "slr"), each = nrow(RailTrail)))
```
In Figure [E.2](ch-regression.html#fig:regplot1), we compare the least squares regression line (right) with the null model that simply returns the average for every input (left). That is, on the left, the average temperature of the day is ignored. The model simply predicts an average ridership every day, regardless of the temperature. However, on the right, the model takes the average ridership into account, and accordingly makes a different prediction for each input value.
```
ggplot(data = mod_data, aes(x = hightemp, y = volume)) +
geom_smooth(
data = filter(mod_data, model == "null"),
method = "lm", se = FALSE, formula = y ~ 1,
color = "dodgerblue", size = 0.5
) +
geom_smooth(
data = filter(mod_data, model == "slr"),
method = "lm", se = FALSE, formula = y ~ x,
color = "dodgerblue", size = 0.5
) +
geom_segment(
aes(xend = hightemp, yend = .fitted),
arrow = arrow(length = unit(0.1, "cm")),
size = 0.5, color = "darkgray"
) +
geom_point(color = "dodgerblue") +
facet_wrap(~model)
```
Figure E.2: At left, the model based on the overall average high temperature. At right, the simple linear regression model.
Obviously, the regression model works better than the null model (that forces the slope to be zero), since it is more flexible. But how much better?
### E.1\.3 Measuring the strength of fit
The correlation coefficient, \\(r\\), is used to
quantify the strength of the linear relationship between two variables. We can quantify the proportion of variation in the response variable (\\(y\\)) that is explained by the model in a similar fashion. This quantity is called the [*coefficient of determination*](https://en.wikipedia.org/w/index.php?search=coefficient%20of%20determination) and is denoted \\(R^2\\). It is a common measure of goodness\-of\-fit for regression models.
Like any proportion, \\(R^2\\) is always between 0 and 1\. For simple linear regression (one explanatory variable), \\(R^2 \= r^2\\). The definition of \\(R^2\\) is given by the following expression.
\\\[
\\begin{aligned}
R^2 \&\= 1 \- \\frac{SSE}{SST} \= \\frac{SSM}{SST} \\\\
\&\= 1 \- \\frac{\\sum\_{i\=1}^n (y\_i \- \\hat{y}\_i)^2}{\\sum\_{i\=1}^n (y\_i \- \\bar{y})^2} \\\\
\&\= 1 \- \\frac{SSE}{(n\-1\) Var(y)} \\, ,
\\end{aligned}
\\]
Here, \\(\\hat{y}\\) is the predicted value, \\(Var(y)\\) is the observed variance, \\(SSE\\) is the sum of the squared residuals, \\(SSM\\) is the sum of the squares attributed to the model, and \\(SST\\) is the total sum of the squares.
We can calculate these values for the rail trail example.
```
n <- nrow(RailTrail)
SST <- var(pull(RailTrail, volume)) * (n - 1)
SSE <- var(residuals(mod)) * (n - 1)
1 - SSE / SST
```
```
[1] 0.339
```
```
glance(mod)
```
```
# A tibble: 1 × 12
r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.339 0.332 104. 45.2 1.71e-9 1 -545. 1096. 1103.
# … with 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>
```
In Figure [E.2](ch-regression.html#fig:regplot1), the null model on the left has an \\(R^2\\) of 0, because \\(\\hat{y}\_i \= \\bar{y}\\) for all \\(i\\), and so \\(SSE \= SST\\). On the other hand, the \\(R^2\\) of the regression model on the right is 0\.339\. We say that the regression model based on average daily temperature explained about 34% of the variation in daily ridership.
### E.1\.4 Categorical explanatory variables
Suppose that instead of using temperature as our explanatory variable for ridership on the rail trail, we only considered whether it was a weekday or not a weekday (e.g., weekend or holiday). The indicator variable `weekday` is [*binary*](https://en.wikipedia.org/w/index.php?search=binary) (or dichotomous) in that it only takes on the values 0 and 1\. (Such variables are sometimes called [*indicator*](https://en.wikipedia.org/w/index.php?search=indicator) variables or more pejoratively [*dummy*](https://en.wikipedia.org/w/index.php?search=dummy) variables.)
This new linear regression model has the form:
\\\[
\\widehat{volume} \= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 \\cdot \\mathrm{weekday} \\,,
\\]
where the fitted coefficients are given below.
```
coef(lm(volume ~ weekday, data = RailTrail))
```
```
(Intercept) weekdayTRUE
430.7 -80.3
```
Note that these coefficients could have been calculated from the means of the two groups (since the regression model has only two possible predicted values). The average ridership on weekdays is 350\.4 while the average on non\-weekdays is 430\.7\.
```
RailTrail %>%
group_by(weekday) %>%
summarize(mean_volume = mean(volume))
```
```
# A tibble: 2 × 2
weekday mean_volume
<lgl> <dbl>
1 FALSE 431.
2 TRUE 350.
```
In the coefficients listed above, the `weekdayTRUE` variable corresponds to rows in which the value of the `weekday` variable was `TRUE` (i.e., weekdays). Because this value is negative, our interpretation is that 80 fewer riders are expected on a weekday as opposed to a weekend or holiday.
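Equivalently, the model has only two possible fitted values, and they recover the two group means shown above; a small sketch using `predict()`:
```
# Predicted ridership for non-weekdays and weekdays, respectively
predict(
  lm(volume ~ weekday, data = RailTrail),
  newdata = data.frame(weekday = c(FALSE, TRUE))
)
```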
To improve the readability of the output we can create a new variable with more mnemonic values.
```
RailTrail <- RailTrail %>%
mutate(day = ifelse(weekday == 1, "weekday", "weekend/holiday"))
```
Care was needed to recode the `weekday` variable because it was a `factor`. Avoid the use of factors unless they are needed.
```
coef(lm(volume ~ day, data = RailTrail))
```
```
(Intercept) dayweekend/holiday
350.4 80.3
```
The model coefficients have changed (although they still provide the same interpretation). By default, the `lm()` function will pick the alphabetically lowest value of the categorical predictor as the [*reference group*](https://en.wikipedia.org/w/index.php?search=reference%20group) and create indicators for the other levels (in this case `dayweekend/holiday`). As a result the intercept is now the predicted number of trail crossings on a `weekday`.
In either formulation, the interpretation of the model remains the same: On a weekday, 80 fewer riders are expected than on a weekend or holiday.
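If a different reference group is preferred, one option is to convert the variable to a factor and set its reference level explicitly before fitting. This is a sketch rather than part of the original analysis; the object name `railtrail_releveled` is ours:
```
# Make "weekend/holiday" the reference level instead of the default
railtrail_releveled <- RailTrail %>%
  mutate(day = relevel(factor(day), ref = "weekend/holiday"))
coef(lm(volume ~ day, data = railtrail_releveled))
```
With this coding, the intercept is again the weekend/holiday average and the `dayweekday` coefficient is about \\(\-80\.3\\).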
E.2 Multiple regression
-----------------------
Multiple regression is a natural extension of simple linear regression that incorporates multiple explanatory (or predictor) variables. It has the general form:
\\\[
y \= \\beta\_0 \+ \\beta\_1 x\_1 \+ \\beta\_2 x\_2 \+ \\cdots \+ \\beta\_p x\_p \+ \\epsilon, \\text{ where } \\epsilon \\sim N(0, \\sigma\_\\epsilon) \\,.
\\]
The estimated coefficients (i.e., \\(\\hat{\\beta}\_i\\)’s) are now interpreted as “conditional on” the other variables—each \\(\\beta\_i\\) reflects the *predicted* change in \\(y\\) associated with a one\-unit increase in \\(x\_i\\), conditional upon the rest of the \\(x\_i\\)’s.
This type of model can help to disentangle more complex relationships between three or more variables.
The value of \\(R^2\\) from a multiple regression model has the same interpretation as before: the proportion of variability explained by the model.
Interpreting conditional regression parameters can be challenging. The analyst needs to ensure
that comparisons that hold other factors constant do not involve extrapolations beyond the
observed data.
### E.2\.1 Parallel slopes: Multiple regression with a categorical variable
Consider first the case where \\(x\_2\\) is an [*indicator*](https://en.wikipedia.org/w/index.php?search=indicator) variable that can only be 0 or 1 (e.g., `weekday`). Then,
\\\[
\\hat{y} \= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \+ \\hat{\\beta}\_2 x\_2 \\,.
\\]
In the case where \\(x\_1\\) is quantitative but \\(x\_2\\) is an indicator variable, we have:
\\\[
\\begin{aligned}
\\text{For weekends, } \\qquad \\hat{y} \|\_{ x\_1, x\_2 \= 0} \&\= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \\\\
\\text{For weekdays, } \\qquad \\hat{y} \|\_{ x\_1, x\_2 \= 1} \&\= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \+ \\hat{\\beta}\_2 \\cdot 1 \\\\
\&\= \\left( \\hat{\\beta}\_0 \+ \\hat{\\beta}\_2 \\right) \+ \\hat{\\beta}\_1 x\_1 \\, .
\\end{aligned}
\\]
This is called a [*parallel slopes*](https://en.wikipedia.org/w/index.php?search=parallel%20slopes) model (see Figure [E.3](ch-regression.html#fig:parallel-slopes)), since the predicted values of the model take the geometric shape of two parallel lines with slope \\(\\hat{\\beta}\_1\\): one with \\(y\\)\-intercept \\(\\hat{\\beta}\_0\\) for weekends, and another with \\(y\\)\-intercept \\(\\hat{\\beta}\_0 \+ \\hat{\\beta}\_2\\) for weekdays.
```
mod_parallel <- lm(volume ~ hightemp + weekday, data = RailTrail)
tidy(mod_parallel)
```
```
# A tibble: 3 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 42.8 64.3 0.665 0.508
2 hightemp 5.35 0.846 6.32 0.0000000109
3 weekdayTRUE -51.6 23.7 -2.18 0.0321
```
```
glance(mod_parallel)
```
```
# A tibble: 1 × 12
r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.374 0.359 102. 25.9 1.46e-9 2 -542. 1093. 1103.
# … with 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>
```
```
mod_parallel %>%
augment() %>%
ggplot(aes(x = hightemp, y = volume, color = weekday)) +
geom_point() +
geom_line(aes(y = .fitted)) +
labs(color = "Is it a\nweekday?")
```
Figure E.3: Visualization of parallel slopes model for the rail trail data.
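The two parallel lines in Figure [E.3](ch-regression.html#fig:parallel-slopes) can be read directly off the fitted coefficients: the weekend/holiday line has intercept \\(\\hat{\\beta}\_0\\), the weekday line has intercept \\(\\hat{\\beta}\_0 \+ \\hat{\\beta}\_2\\), and both share the slope \\(\\hat{\\beta}\_1\\). A small sketch using the model fitted above:
```
b <- coef(mod_parallel)
c(
  weekend_intercept = unname(b["(Intercept)"]),
  weekday_intercept = unname(b["(Intercept)"] + b["weekdayTRUE"]),
  common_slope = unname(b["hightemp"])
)
```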
### E.2\.2 Parallel planes: Multiple regression with a second quantitative variable
If \\(x\_2\\) is a quantitative variable, then we have:
\\\[
\\hat{y} \= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \+ \\hat{\\beta}\_2 x\_2 \\,.
\\]
Notice that our model is no longer a line; rather, it is a *plane* that exists in three dimensions.
Now suppose that we want to improve our model for ridership by considering not only the high temperature, but also the amount of precipitation (rain or snow, measured in inches). We can do this in **R** by simply adding this variable to our regression model.
```
mod_plane <- lm(volume ~ hightemp + precip, data = RailTrail)
tidy(mod_plane)
```
```
# A tibble: 3 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -31.5 55.2 -0.571 5.70e- 1
2 hightemp 6.12 0.794 7.70 1.97e-11
3 precip -153. 39.3 -3.90 1.90e- 4
```
Note that the coefficient on `hightemp` (6\.1 riders per degree) has changed from its value in the simple linear regression model (5\.7 riders per degree). This is due to the moderating effect of precipitation. Our interpretation is that for each additional degree in temperature, we expect an additional 6\.1 riders on the rail trail, after controlling for the amount of precipitation.
As you can imagine, the effect of precipitation is strong: some people may be less likely to bike or walk in the rain. Thus, even after controlling for temperature, an inch of rainfall is associated with a drop of about 153 riders.
Note that since the median precipitation on days when there was any precipitation was only 0\.15 inches, a predicted change for a full additional inch may be misleading. It may be better to report the predicted difference for an additional 0\.15 inches of rain, or to replace the continuous term in the model with a dichotomous indicator of any precipitation.
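One way to implement the second suggestion is sketched below; the indicator name `any_precip` is ours rather than part of the original analysis:
```
# Replace the continuous precipitation term with an any-precipitation indicator
mod_rain_indicator <- RailTrail %>%
  mutate(any_precip = precip > 0) %>%
  lm(volume ~ hightemp + any_precip, data = .)
tidy(mod_rain_indicator)
```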
If we added all three explanatory variables to the model we would have parallel [*planes*](https://en.wikipedia.org/w/index.php?search=planes).
```
mod_p_planes <- lm(volume ~ hightemp + precip + weekday, data = RailTrail)
tidy(mod_p_planes)
```
```
# A tibble: 4 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 19.3 60.3 0.320 7.50e- 1
2 hightemp 5.80 0.799 7.26 1.59e-10
3 precip -146. 38.9 -3.74 3.27e- 4
4 weekdayTRUE -43.1 22.2 -1.94 5.52e- 2
```
### E.2\.3 Non\-parallel slopes: Multiple regression with interaction
Let’s return to a model that includes `weekday` and `hightemp` as predictors.
What if the parallel lines model doesn’t fit well?
Adding an additional term into the model can make it more flexible
and allow there to be a different slope on the two different types of days.
\\\[
\\hat{y} \= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \+ \\hat{\\beta}\_2 x\_2 \+ \\hat{\\beta}\_3 x\_1 x\_2 \\,.
\\]
The model can also be described separately for weekends/holidays and weekdays.
\\\[
\\begin{aligned}
\\text{For weekends, } \\qquad \\hat{y} \|\_{ x\_1, x\_2 \= 0} \&\= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \\\\
\\text{For weekdays, } \\qquad \\hat{y} \|\_{ x\_1, x\_2 \= 1} \&\= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_1 \+ \\hat{\\beta}\_2 \\cdot 1 \+ \\hat{\\beta}\_3 \\cdot x\_1\\\\
\&\= \\left( \\hat{\\beta}\_0 \+ \\hat{\\beta}\_2 \\right) \+ \\left( \\hat{\\beta}\_1 \+ \\hat{\\beta}\_3 \\right) x\_1 \\, .
\\end{aligned}
\\]
This is called an [*interaction model*](https://en.wikipedia.org/w/index.php?search=interaction%20model) (see Figure [E.4](ch-regression.html#fig:interact)).
The predicted values of the model take the geometric shape of two non\-parallel lines with different slopes.
```
mod_interact <- lm(volume ~ hightemp + weekday + hightemp * weekday,
data = RailTrail)
tidy(mod_interact)
```
```
# A tibble: 4 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 135. 108. 1.25 0.215
2 hightemp 4.07 1.47 2.78 0.00676
3 weekdayTRUE -186. 129. -1.44 0.153
4 hightemp:weekdayTRUE 1.91 1.80 1.06 0.292
```
```
glance(mod_interact)
```
```
# A tibble: 1 × 12
r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.382 0.360 102. 17.7 4.96e-9 3 -542. 1094. 1106.
# … with 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>
```
```
mod_interact %>%
augment() %>%
ggplot(aes(x = hightemp, y = volume, color = weekday)) +
geom_point() +
geom_line(aes(y = .fitted)) +
labs(color = "Is it a\nweekday?")
```
Figure E.4: Visualization of interaction model for the rail trail data.
We see that the slope on weekdays is about two riders per degree higher than on weekends and holidays.
This may indicate that trail users on weekends
and holidays are less concerned about the temperature than on weekdays.
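Those two slopes can be computed directly from the fitted interaction model: the weekend/holiday slope is \\(\\hat{\\beta}\_1\\) and the weekday slope is \\(\\hat{\\beta}\_1 \+ \\hat{\\beta}\_3\\). A small sketch:
```
b <- coef(mod_interact)
c(
  weekend_slope = unname(b["hightemp"]),
  weekday_slope = unname(b["hightemp"] + b["hightemp:weekdayTRUE"])
)
```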
### E.2\.4 Modeling non\-linear relationships
A model that includes only a single linear term for the predictor fits well in many situations but is not appropriate in others. Consider modeling height (in centimeters) as a function of age (in years) using data
from a subset of female subjects included in the [*National Health and Nutrition Examination Study*](https://en.wikipedia.org/w/index.php?search=National%20Health%20and%20Nutrition%20Examination%20Study) (from the **NHANES** package) with a linear term.
Another approach uses a
[*smoother*](https://en.wikipedia.org/w/index.php?search=smoother) instead of a linear model.
Unlike the straight line, the smoother can bend to better fit the points when modeling the
functional form of a relationship (see Figure [E.5](ch-regression.html#fig:ageheightmod)).
```
library(NHANES)
NHANES %>%
sample(300) %>%
filter(Gender == "female") %>%
ggplot(aes(x = Age, y = Height)) +
geom_point() +
geom_smooth(method = lm, se = FALSE) +
geom_smooth(method = loess, se = FALSE, color = "green") +
xlab("Age (in years)") +
ylab("Height (in cm)")
```
Figure E.5: Scatterplot of height as a function of age with superimposed linear model (blue) and smoother (green).
The fit of the linear model (denoted in blue) is poor: A straight line does not account for the dramatic increases in height during
puberty to young adulthood or for the gradual decline in height for older subjects. The smoother (in green) does a much better job of describing the functional form.
The improved fit does come with a cost. Compare the results for linear and smoothed models in Figure [E.6](ch-regression.html#fig:railtrailsmooth). Here the functional form of the relationship between high temperature and
volume of trail use is closer to linear (with some deviation for warmer temperatures).
```
ggplot(data = RailTrail, aes(x = hightemp, y = volume)) +
geom_point() +
geom_smooth(method = lm) +
geom_smooth(method = loess, color = "green") +
ylab("Number of trail crossings") +
xlab("High temperature (F)")
```
Figure E.6: Scatterplot of volume as a function of high temperature with superimposed linear and smooth models for the rail trail data.
The confidence bands (95% confidence intervals at each point) for the smoother tend to be wider than those for the linear model.
This is one of the costs of the additional flexibility in modeling.
Another cost is interpretation: It is more complicated to explain the results from the smoother than to interpret a slope coefficient (straight line).
E.3 Inference for regression
----------------------------
Thus far, we have fit several models and interpreted their estimated coefficients. However,
with the exception of the confidence bands in Figure [E.6](ch-regression.html#fig:railtrailsmooth),
we have only made statements about the estimated coefficients (i.e., the \\(\\hat{\\beta}\\)’s)—we have made no statements about the true coefficients (i.e., the \\(\\beta\\)’s), the values of which of course remain unknown.
However, we can use our understanding of the \\(t\\)\-distribution to make [*inferences*](https://en.wikipedia.org/w/index.php?search=inferences) about the true value of regression coefficients. In particular, we can test a hypothesis about \\(\\beta\_1\\) (most commonly that it is equal to zero) and find a confidence interval (range of plausible values) for it.
```
tidy(mod_p_planes)
```
```
# A tibble: 4 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) 19.3 60.3 0.320 7.50e- 1
2 hightemp 5.80 0.799 7.26 1.59e-10
3 precip -146. 38.9 -3.74 3.27e- 4
4 weekdayTRUE -43.1 22.2 -1.94 5.52e- 2
```
```
glance(mod_p_planes)
```
```
# A tibble: 1 × 12
r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.461 0.443 95.2 24.6 1.44e-11 3 -536. 1081. 1094.
# … with 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>
```
In the output above, the p\-value associated with the `hightemp` coefficient is displayed as 1\.59e\-10 (or nearly zero).
That is, if the true coefficient (\\(\\beta\_1\\)) were in fact zero, then the probability of observing an association between ridership and high temperature as strong as or stronger than the one we actually observed in the data, after controlling for precipitation and day of the week, is essentially zero.
This output suggests that the hypothesis that \\(\\beta\_1\\) is zero is dubious based on these data.
Perhaps there is a real association between ridership and high temperature?
Very small p\-values should be rounded to the nearest 0\.0001\. We suggest reporting this p\-value as \\(p\<0\.0001\\).
Another way of thinking about this process is to form a confidence interval around our estimate of the slope coefficient \\(\\hat{\\beta}\_1\\). Here we can say with 95% confidence that the value of the true coefficient \\(\\beta\_1\\) is between 4\.21 and 7\.39 riders per degree. That this interval does not contain zero confirms the result from the hypothesis test.
```
confint(mod_p_planes)
```
```
2.5 % 97.5 %
(Intercept) -100.63 139.268
hightemp 4.21 7.388
precip -222.93 -68.291
weekdayTRUE -87.27 0.976
```
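The interval for `hightemp` can also be reconstructed by hand from the coefficient table, using the point estimate, its standard error, and the appropriate \\(t\\) quantile; a small sketch:
```
est <- coef(summary(mod_p_planes))["hightemp", ]
# estimate +/- t-quantile * standard error
est["Estimate"] + c(-1, 1) * qt(0.975, df = df.residual(mod_p_planes)) * est["Std. Error"]
```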
E.4 Assumptions underlying regression
-------------------------------------
The inferences we made above were predicated upon our assumption that the slope follows a \\(t\\)\-distribution. This follows from the assumption that the errors
follow a normal distribution (with mean 0 and standard deviation \\(\\sigma\_\\epsilon\\), for some constant \\(\\sigma\_\\epsilon\\)). Inferences from the model are only valid if the following assumptions hold:
* **L**inearity: The functional form of the relationship between the predictors and the outcome follows a linear combination of regression parameters that are correctly specified (this assumption can be verified by bivariate graphical displays).
* **I**ndependence: Are the errors uncorrelated? Or do they follow a pattern (perhaps over time or within clusters of subjects)?
* **N**ormality of residuals: Do the residuals follow a distribution that is approximately normal? This assumption can be verified using univariate displays.
* **E**qual variance of residuals: Is the variance in the residuals constant across the explanatory variables ([*homoscedastic errors*](https://en.wikipedia.org/w/index.php?search=homoscedastic%20errors))? Or does the variance in the residuals depend on the value of one or more of the explanatory variables ([*heteroscedastic errors*](https://en.wikipedia.org/w/index.php?search=heteroscedastic%20errors))? This assumption can be verified using residual diagnostics.
These conditions are sometimes called the “LINE” assumptions.
All but the independence assumption can be assessed using diagnostic plots.
How might we assess the `mod_p_planes` model?
Figure [E.7](ch-regression.html#fig:plotmod3a) displays a scatterplot of residuals versus fitted (predicted) values.
As we observed in Figure [E.6](ch-regression.html#fig:railtrailsmooth), the number of crossings does not increase as much for warm temperatures as it does for more moderate ones. We may need to consider a more sophisticated model that allows a more flexible functional form for temperature.
```
mplot(mod_p_planes, which = 1, system = "ggplot2")
```
Figure E.7: Assessing linearity using a scatterplot of residuals versus fitted (predicted) values.
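As noted above, a more flexible temperature term is one direction to explore. The sketch below (not a model fit elsewhere in this appendix) adds a quadratic term for temperature and can be compared with `mod_p_planes` using the same summaries:
```
mod_quad <- lm(volume ~ poly(hightemp, 2) + precip + weekday, data = RailTrail)
glance(mod_quad)
```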
Figure [E.8](ch-regression.html#fig:plotmod3b) displays the quantile\-quantile plot for the residuals from the
regression model. The plot deviates from the straight line: This indicates that the residuals
have heavier tails than a normal distribution.
```
mplot(mod_p_planes, which = 2, system = "ggplot2")
```
Figure E.8: Assessing normality assumption using a Q–Q plot.
Figure [E.9](ch-regression.html#fig:plotmod3c) displays the scale\-location plot for the residuals from the model: The results indicate that there is evidence of heteroscedasticity (the variance of the residuals increases as a function of predicted value).
```
mplot(mod_p_planes, which = 3, system = "ggplot2")
```
Figure E.9: Assessing equal variance using a scale–location plot.
When performing model diagnostics,
it is important to identify any outliers and understand their role in determining the regression coefficients.
* We call observations that don’t seem to fit the general pattern of the data [*outliers*](https://en.wikipedia.org/w/index.php?search=outliers). Figures [E.7](ch-regression.html#fig:plotmod3a), [E.8](ch-regression.html#fig:plotmod3b), and [E.9](ch-regression.html#fig:plotmod3c) mark three points (8, 18, and 34\) as having large negative or positive residuals that merit further exploration.
* An observation with an extreme value of the explanatory variable is a point of high [*leverage*](https://en.wikipedia.org/w/index.php?search=leverage).
* A high leverage point that exerts disproportionate influence on the slope of the regression line is an [*influential point*](https://en.wikipedia.org/w/index.php?search=influential%20point).
Figure [E.10](ch-regression.html#fig:plotmod3d) displays the values for [*Cook’s distance*](https://en.wikipedia.org/w/index.php?search=Cook's%20distance) (a common measure of
influential points in a regression model).
```
mplot(mod_p_planes, which = 4, system = "ggplot2")
```
Figure E.10: Cook’s distance for rail trail model.
We can use the `augment()` function from the **broom** package to calculate the value of
this statistic and identify the most extreme Cook’s distance.
```
library(broom)
augment(mod_p_planes) %>%
mutate(row_num = row_number()) %>%
select(-.std.resid, -.sigma) %>%
filter(.cooksd > 0.4)
```
```
# A tibble: 1 × 9
volume hightemp precip weekday .fitted .resid .hat .cooksd row_num
<int> <int> <dbl> <lgl> <dbl> <dbl> <dbl> <dbl> <int>
1 388 84 1.49 TRUE 246. 142. 0.332 0.412 65
```
Observation 65 has the highest Cook’s distance.
It references a day with nearly one and a half inches of rain (the most recorded in the dataset) and a high temperature of 84 degrees.
This data point has high leverage and is influential on the results.
Observations 4 and 34 also have relatively high Cook’s distances and may merit further exploration.
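A quick sensitivity check is to refit the model without the most influential day and compare the coefficients; a sketch, where row 65 is the influential observation identified above and the name `mod_no65` is ours:
```
# Refit without the most influential day and compare the coefficients
mod_no65 <- RailTrail %>%
  filter(row_number() != 65) %>%
  lm(volume ~ hightemp + precip + weekday, data = .)
coef(mod_no65)
coef(mod_p_planes)
```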
E.5 Logistic regression
-----------------------
Our previous examples had quantitative (or continuous) outcomes.
What happens when we are interested in modeling a dichotomous outcome?
For example, we might model the probability of developing diabetes as a function of age and BMI (we explored this question further in Chapter [11](ch-learningI.html#ch:learningI)).
Figure [E.11](ch-regression.html#fig:diabeteslogreg) displays the scatterplot of diabetes status as a function of age, while
Figure [E.12](ch-regression.html#fig:diabeteslogreg2) displays the scatterplot of diabetes as a function of BMI (body mass index).
Note that each subject can either have diabetes or not, so all of the points are displayed at 0 or 1 on the \\(y\\)\-axis.
```
NHANES <- NHANES %>%
mutate(has_diabetes = as.numeric(Diabetes == "Yes"))
```
```
log_plot <- ggplot(data = NHANES, aes(x = Age, y = has_diabetes)) +
geom_jitter(alpha = 0.1, height = 0.05) +
geom_smooth(method = "glm", method.args = list(family = "binomial")) +
ylab("Diabetes status") +
xlab("Age (in years)")
log_plot
```
Figure E.11: Scatterplot of diabetes as a function of age with superimposed smoother.
```
log_plot + aes(x = BMI) + xlab("BMI (body mass index)")
```
Figure E.12: Scatterplot of diabetes as a function of BMI with superimposed smoother.
We see that the probability that a subject has diabetes tends to increase as both a function of age and of BMI.
Which variable is more important: `Age` or `BMI`?
We can use a multiple logistic regression model to model the probability of diabetes as a function of both predictors.
```
logreg <- glm(has_diabetes ~ BMI + Age, family = "binomial", data = NHANES)
tidy(logreg)
```
```
# A tibble: 3 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -8.08 0.244 -33.1 1.30e-239
2 BMI 0.0943 0.00552 17.1 1.74e- 65
3 Age 0.0573 0.00249 23.0 2.28e-117
```
The answer is that both predictors seem to be important (since both p\-values are very small).
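Because the estimated coefficients are on the log\-odds scale, exponentiating them gives odds ratios, which are often easier to communicate; a small sketch:
```
# Odds ratios for a one-unit increase in BMI and a one-year increase in Age
exp(coef(logreg))
```
For example, each additional year of age multiplies the odds of diabetes by about \\(e^{0\.0573} \\approx 1\.06\\), holding BMI constant.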
To interpret the findings, we might consider a visual display of predicted probabilities as displayed in Figure [E.13](ch-regression.html#fig:plotFundiabetes) (compare with Figure [11\.11](ch-learningI.html#fig:mod-compare)).
```
library(modelr)
fake_grid <- data_grid(
NHANES,
Age = seq_range(Age, 100),
BMI = seq_range(BMI, 100)
)
y_hats <- fake_grid %>%
mutate(y_hat = predict(logreg, newdata = ., type = "response"))
head(y_hats, 1)
```
```
# A tibble: 1 × 3
Age BMI y_hat
<dbl> <dbl> <dbl>
1 0 12.9 0.00104
```
The predicted probability from the model is given by:
\\\[
\\pi\_i \= P (y\_i \= 1\) \= \\frac{e^{\\beta\_0 \+ \\beta\_1 Age\_i \+ \\beta\_2 BMI\_i}}{1 \+ e^{\\beta\_0 \+ \\beta\_1 Age\_i \+ \\beta\_2 BMI\_i}}\\ \\ \\text{ for } i\=1,\\ldots,n \\,.
\\]
Let’s consider a hypothetical 0 year old with a BMI of 12\.9 (corresponding to the first entry in the `y_hats` dataframe).
Their predicted probability of having diabetes would be calculated as a function of the regression coefficients.
```
linear_component <- c(1, 12.9, 0) %*% coef(logreg)
exp(linear_component) / (1 + exp(linear_component))
```
```
[,1]
[1,] 0.00104
```
The predicted probability is very small: about 1/10th of 1%.
But what about a 60 year old with a BMI of 25?
```
linear_component <- c(1, 25, 60) %*% coef(logreg)
exp(linear_component) / (1 + exp(linear_component))
```
```
[,1]
[1,] 0.0923
```
The predicted probability is now 9\.2%.
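The same calculations can be carried out with `predict()` on a small data frame of hypothetical subjects, which avoids writing out the matrix multiplication; a minimal sketch:
```
# Predicted probabilities for the two hypothetical subjects above
new_subjects <- data.frame(Age = c(0, 60), BMI = c(12.9, 25))
predict(logreg, newdata = new_subjects, type = "response")
```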
```
ggplot(data = NHANES, aes(x = Age, y = BMI)) +
geom_tile(data = y_hats, aes(fill = y_hat), color = NA) +
geom_count(aes(color = factor(has_diabetes)), alpha = 0.8) +
scale_fill_gradient(low = "white", high = "red") +
scale_color_manual("Diabetes", values = c("gold", "black")) +
scale_size(range = c(0, 2)) +
labs(fill = "Predicted\nprobability")
```
Figure E.13: Predicted probabilities for diabetes as a function of BMI and age.
Figure [E.13](ch-regression.html#fig:plotFundiabetes) displays the predicted probabilities for each of our grid points.
We see that very few young adults have diabetes, even if they have moderately high BMI scores.
As we look at older subjects while holding BMI fixed, the predicted probability of diabetes increases.
E.6 Further resources
---------------------
Regression is described in many books. An introduction is found in most introductory statistics textbooks, including *Open Intro Statistics* (Diez, Barr, and Çetinkaya\-Rundel 2019\).
For a deeper but still accessible treatment, we suggest Cannon et al. (2019\).
G. James et al. (2013\) and Hastie, Tibshirani, and Friedman (2009\) also cover regression from a modeling and machine learning perspective.
Hoaglin (2016\) provides guidance on how to interpret conditional regression parameters.
Cook (1982\) comprehensively reviews regression diagnostics.
An accessible introduction to smoothing can be found in Ruppert, Wand, and Carroll (2003\).
E.7 Exercises
-------------
**Problem 1 (Easy)**: In 1966, [Cyril Burt](http://en.wikipedia.org/wiki/Cyril_Burt) published a paper called *The genetic determination of differences in intelligence: A study of monozygotic twins reared apart.* The data consist of IQ scores for \[an assumed random sample of] 27 identical twins, one raised by foster parents, the other by the biological parents.
Here is the regression output for using `Biological` IQ to predict `Foster` IQ:
```
library(mdsr)
library(faraway)
mod <- lm(Foster ~ Biological, data = twins)
coef(mod)
```
```
(Intercept) Biological
9.208 0.901
```
```
mosaic::rsquared(mod)
```
```
[1] 0.778
```
Which of the following is **FALSE**? Justify your answers.
* Alice and Beth were raised by their biological parents. If Beth’s IQ is 10 points higher than Alice’s, then we would expect that her foster twin Bernice’s IQ is 9 points higher than the IQ of Alice’s foster twin Ashley.
* Roughly 78% of the foster twins’ IQs can be accurately predicted by the model.
* The linear model is \\(\\widehat{Foster} \= 9\.2 \+ 0\.9 \\times Biological\\).
* Foster twins with IQs higher than average are expected to have biological twins with higher than average IQs as well.
**Problem 2 (Medium)**: The `atus` package includes data from the American Time Use Survey (ATUS). Use the `atusresp` dataset to model `hourly_wage` as a function of other predictors in the dataset.
**Problem 3 (Medium)**: The `Gestation` data set in `mdsr` contains birth weight, date, and gestational period collected as part of the Child Health and Development Studies. Information about the baby’s parents (age, education, height, weight, and whether the mother smoked) is also recorded.
1. Fit a linear regression model for birthweight (`wt`) as a function of the mother’s age (`age`).
2. Find a 95% confidence interval and p\-value for the slope coefficient.
3. What do you conclude about the association between a mother’s age and her baby’s birthweight?
**Problem 4 (Medium)**: The Child Health and Development Studies investigate a range of topics. One study, in particular, considered all pregnancies among women in the Kaiser Foundation Health Plan in the San Francisco East Bay area. The goal is to model the weight of the infants (`bwt`, in ounces) using variables including length of pregnancy in days (`gestation`), mother’s age in years (`age`), mother’s height in inches (`height`), whether the child was the first born (`parity`), mother’s pregnancy weight in pounds (`weight`), and whether the mother was a smoker (`smoke`).
The summary table that follows shows the results of a regression model for predicting the average birth weight of babies based on all of the variables included in the data set.
```
library(mdsr)
library(mosaicData)
babies <- Gestation %>%
rename(bwt = wt, height = ht, weight = wt.1) %>%
mutate(parity = parity == 0, smoke = smoke > 0) %>%
select(id, bwt, gestation, age, height, weight, parity, smoke)
mod <- lm(bwt ~ gestation + age + height + weight + parity + smoke,
data = babies
)
coef(mod)
```
```
(Intercept) gestation age height weight parityTRUE
-85.7875 0.4601 0.0429 1.0623 0.0653 -2.9530
smokeTRUE
NA
```
Answer the following questions regarding this linear regression model.
1. The coefficient for `parity` is different than if you fit a linear model predicting weight using only that variable. Why might there be a difference?
2. Calculate the residual for the first observation in the data set.
3. This data set contains missing values. What happens to these rows when we fit the model?
**Problem 5 (Medium)**: Investigators in the HELP (Health Evaluation and Linkage to Primary Care) study were interested in modeling predictors of being `homeless` (one or more nights spent on the street or in a shelter in the past six months vs. housed) using baseline data from the clinical trial. Fit and interpret a parsimonious model that would help the investigators identify predictors of homelessness.
E.8 Supplementary exercises
---------------------------
Available at [https://mdsr\-book.github.io/mdsr2e/ch\-regression.html\#regression\-online\-exercises](https://mdsr-book.github.io/mdsr2e/ch-regression.html#regression-online-exercises)
**Problem 1 (Medium)**: In the HELP (Health Evaluation and Linkage to Primary Care) study, investigators were interested in determining predictors of severe depressive symptoms (measured by the Center for Epidemiologic Studies—Depression scale, `cesd`) amongst a cohort enrolled at a substance abuse treatment facility. These predictors include `substance` of abuse (alcohol, cocaine, or heroin), `mcs` (a measure of mental well\-being), gender, and housing status (housed or homeless). Answer the following questions regarding the following multiple regression model.
```
library(mdsr)
fm <- lm(cesd ~ substance + mcs + sex + homeless, data = HELPrct)
msummary(fm)
```
```
Estimate Std. Error t value Pr(>|t|)
(Intercept) 57.7794 1.4664 39.40 <2e-16 ***
substancecocaine -3.5406 1.0101 -3.51 0.0005 ***
substanceheroin -1.6818 1.0731 -1.57 0.1178
mcs -0.6407 0.0338 -18.97 <2e-16 ***
sexmale -3.3239 1.0075 -3.30 0.0010 **
homelesshoused -0.8327 0.8686 -0.96 0.3383
Residual standard error: 8.97 on 447 degrees of freedom
Multiple R-squared: 0.492, Adjusted R-squared: 0.486
F-statistic: 86.4 on 5 and 447 DF, p-value: <2e-16
```
```
confint(fm)
```
```
2.5 % 97.5 %
(Intercept) 54.898 60.661
substancecocaine -5.526 -1.555
substanceheroin -3.791 0.427
mcs -0.707 -0.574
sexmale -5.304 -1.344
homelesshoused -2.540 0.874
```
* Write out the linear model.
* Calculate the predicted CESD for a female homeless cocaine\-involved subject with
an MCS score of 20\.
* Identify the null and alternative hypotheses for the 8 tests displayed above.
* Interpret the 95% confidence interval for the `substancecocaine` coefficient.
* Make a conclusion and summarize the results of a test of the `homeless` parameter.
* Report and interpret the \\(R^2\\) (coefficient of determination) for this model.
* Which of the residual diagnostic plots are redundant?
* What do we conclude about the distribution of the residuals?
* What do we conclude about the relationship between the fitted values and the residuals?
* What do we conclude about the relationship between the MCS score and the residuals?
* What other things can we learn from the residual diagnostics?
* Which observations should we flag for further study?
---
| Data Science |
mdsr-book.github.io | https://mdsr-book.github.io/mdsr2e/ch-db-setup.html |
F Setting up a database server
==============================
Setting up a local or remote database server is neither trivial nor difficult.
In this chapter, we provide instructions as to how to set up a local database server on a computer that you control. While everything that is done in this chapter can be accomplished on any modern operating system, many tools for data science are designed for [*UNIX\-like*](https://en.wikipedia.org/w/index.php?search=UNIX-like) operating systems, and can be a challenge to set up on [*Windows*](https://en.wikipedia.org/w/index.php?search=Windows). This is no exception. In particular, comfort with the command line is a plus and the material presented here will make use of [*UNIX shell*](https://en.wikipedia.org/w/index.php?search=UNIX%20shell) commands. On [*Mac OS X*](https://en.wikipedia.org/w/index.php?search=Mac%20OS%20X) and other Unix\-like operating systems (e.g., [*Ubuntu*](https://en.wikipedia.org/w/index.php?search=Ubuntu)), the command line is accessible using a Terminal application. On Windows, some of these shell commands might work at a DOS prompt, but others will not.[42](#fn42) Unfortunately, providing Windows\-specific setup instructions is outside the scope of this book.
Three open\-source SQL database systems are most commonly encountered. These include
[SQLite](https://www.sqlite.org/index.html), [MySQL](http://www.mysql.com), and [PostgreSQL](http://www.postgresql.org/). While MySQL and PostgreSQL are full\-featured relational database systems that employ a strict client\-server model, SQLite is a lightweight program that runs only locally and requires no initial configuration. However, while SQLite is certainly the easiest system to set up, it has far fewer functions, lacks a caching mechanism, and is not likely to perform as well under heavy usage. Please see the official [documentation for appropriate uses of SQLite](https://www.sqlite.org/whentouse.html) for assistance with choosing the right SQL implementation for your needs.
Both MySQL and PostgreSQL employ a [*client\-server*](https://en.wikipedia.org/w/index.php?search=client-server) architecture. That is, there is a server program running on a computer somewhere, and you can connect to that server from any number of client programs—from either that same machine or over the internet.
Still, even if you are running MySQL or PostgreSQL on your local machine, there are always two parts: the client and the server. This chapter provides instructions for setting up the server on a machine that you control—which for most analysts, is your local machine.
F.1 SQLite
----------
For SQLite, there is nothing to configure, but it must be installed.
The **RSQLite** package embeds the database engine and provides a straightforward interface.
On Linux systems, `sqlite` is likely already installed, but the source code, as well as pre\-built binaries for Mac OS X and Windows, is available at [SQLite.org](https://www.sqlite.org/download.html).
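Once the **RSQLite** package is installed, a connection can be opened directly from **R** through the **DBI** interface; a minimal sketch using a temporary in\-memory database and a built\-in data set:
```
library(DBI)
# Create a temporary in-memory SQLite database
db <- dbConnect(RSQLite::SQLite(), ":memory:")
# Copy a built-in data frame into the database and query it with SQL
dbWriteTable(db, "mtcars", mtcars)
dbGetQuery(db, "SELECT cyl, COUNT(*) AS n FROM mtcars GROUP BY cyl")
dbDisconnect(db)
```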
F.2 MySQL
---------
We will focus on the use of MySQL (with brief mention of PostgreSQL in the next section). The steps necessary to install a PostgreSQL server will follow similar logic, but the syntax will be importantly different.
### F.2\.1 Installation
If you are running Mac OS X or a Linux\-based operating system, then you probably already have a MySQL server installed and running on your machine. You can check to see if this is the case by running the following from your operating system’s shell (i.e., the command line or using the “Terminal” application).
```
ps aux | grep "mysql"
```
```
bbaumer@bbaumer-Precision-Tower-7810:~$ ps aux | grep "mysql"
mysql 1269 0.9 2092204 306204 ? Ssl May20 5:22 /usr/sbin/mysqld
```
If you see anything like the first line of this output (i.e., containing `mysqld`), then MySQL is already running. (If you don’t see anything like that, then it is not.)
If MySQL is not installed, then you can install it by downloading the relevant version of the [*MySQL Community Server*](https://en.wikipedia.org/w/index.php?search=MySQL%20Community%20Server) for your operating system at [dev.mysql.com](http://dev.mysql.com/downloads/mysql/). If you run into trouble, please consult the [installation instructions](https://dev.mysql.com/doc/refman/8.0/en/installing.html).
For Mac OS X, there are more [specific instructions](https://dev.mysql.com/doc/refman/5.6/en/osx-installation-pkg.html) available.
After installation, you will need to install the Preference Pane and start the server.
It is also helpful to add the `mysql` binary directory to your `PATH` [*environment variable*](https://en.wikipedia.org/w/index.php?search=environment%20variable), so you can launch `mysql` easily from the shell. To do this, execute the following command in your shell:
```
export PATH=$PATH:/usr/local/mysql/bin
echo $PATH
```
You may have to modify the path to the `mysql/bin` directory to suit your local setup.
If you don’t know where the directory is, you can try to find it using the `which` program provided by your operating system.
```
which mysql
```
```
/usr/local/mysql/bin
```
### F.2\.2 Access
In most cases, the installation process will result in a server process being launched on your machine, such as the one that we saw above in the output of the `ps` command. Once the server is running, we need to configure it properly for our use. The [full instructions for post\-installation](https://dev.mysql.com/doc/refman/5.6/en/postinstallation.html) provide great detail on this process. However, in our case, we will mostly stick with the default configuration, so there are only a few things to check.
The most important thing is to gain access to the server. MySQL maintains a set of user accounts just like your operating system. After installation, there is usually only one account created: `root`. In order to create other accounts, we need to log into MySQL as `root`. *Please read the documentation on [Securing the Initial MySQL Accounts](https://dev.mysql.com/doc/refman/5.6/en/default-privileges.html) for your setup.* From that documentation:
> Some accounts have the user name `root`. These are superuser accounts that have all privileges and can do anything. If these root accounts have empty passwords, anyone can connect to the MySQL server as root without a password and be granted all privileges.
If this is your first time accessing MySQL, typing this into your shell might work:
```
mysql -u root
```
If you see an `Access denied` error, it means that the `root` MySQL user has a password, but you did not supply it. You may have created a password during installation. If you did, try:
```
mysql -u root -p
```
and then enter that password (it may well be blank). If you don’t know the `root` password, try a few things that might be the password. If you can’t figure it out, contact your system administrator or re\-install MySQL.
You might—on Windows especially—get an error that says something about “command not found.”
This means that the program `mysql` is not accessible from your shell. You have two options: 1\) you can specify the full path to the MySQL application; or 2\) you can append the directory containing the MySQL application to your `PATH` variable. The second option is preferred and is illustrated above.
On Linux or Mac OS X, that directory is probably `/usr/bin/` or `/usr/local/mysql/bin` or something similar, and on Windows, it is probably `C:\Program Files\MySQL\MySQL Server 5.6\bin` or something similar.
Once you find the path to the application and the password, you should be able to log in. You will know when it works if you see a `mysql` prompt instead of your usual one.
```
bbaumer@bbaumer-Precision-Tower-7810:~$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 47
Server version: 5.7.31-0ubuntu0.18.04.1 (Ubuntu)
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input
statement.
mysql>
```
Once you are logged into MySQL, try running the following command at the `mysql>` prompt (do not forget the trailing semi\-colon):[43](#fn43)
```
SELECT User, Host, Password FROM mysql.user;
```
This command will list the users on the MySQL server, their encrypted passwords, and the hosts from which they are allowed to connect. Note that on MySQL 5.7.6 and later (including the 5.7.31 server shown above), the `Password` column was renamed `authentication_string`, so you may need to substitute that column name in the query.
Next, if you want to change the root password, set it to something else (in this example `mypass`). The following syntax works on older servers; on MySQL 5.7.6 and later, the `PASSWORD()` function is deprecated or removed, and `ALTER USER 'root'@'localhost' IDENTIFIED BY 'mypass';` is the supported alternative.
```
UPDATE mysql.user SET Password = PASSWORD('mypass') WHERE User = 'root';
FLUSH PRIVILEGES;
```
The most responsible thing to do now is to [create a new account](https://dev.mysql.com/doc/refman/5.6/en/adding-users.html) for yourself. You should probably choose a different password than the one for the `root` user. Do this by running:
```
CREATE USER 'r-user'@'localhost' IDENTIFIED BY 'mypass';
```
It is important to understand that MySQL’s concept of users is really a \\(\\{user, host\\}\\) pair. That is, the user `'bbaumer'@'localhost'` can have a different password and set of privileges than the user `'bbaumer'@'%'`. The former is only allowed to connect to the server from the machine on which the server is running. (For most of you, that is your computer.) The latter can connect from anywhere (`'%'` is a wildcard character). Obviously, the former is more secure. Use the latter only if you want to connect to your MySQL database from elsewhere.
You will also want to make yourself a superuser.
```
GRANT ALL PRIVILEGES ON *.* TO 'r-user'@'localhost' WITH GRANT OPTION;
```
Again, flush the privileges to reload the tables.
```
FLUSH PRIVILEGES;
```
Finally, log out by typing `quit`. You should now be able to log in to MySQL as yourself by typing the following into your shell:
```
mysql -u r-user -p
```
#### F.2\.2\.1 Using an option file
A relatively safe and convenient method of connecting to MySQL servers (whether local or remote) is by using an option file. This is a simple text file located at `~/.my.cnf` that may contain various connection parameters. Your entire file might look like this:
```
[client]
user=r-user
password="mypass"
```
These options will be read by MySQL automatically anytime you connect from a client program. Thus, instead of having to type:
```
mysql -u yourusername -p
```
you should be automatically logged on with just `mysql`. Moreover, you can have **R** read your MySQL option file by passing the `default.file` argument to `dbConnect()` (see Section [F.4\.3\.1](ch-db-setup.html#sec:dplyr)).
### F.2\.3 Running scripts from the command line
MySQL will run SQL scripts contained in a file via the command line client. If the file `myscript.sql` is a text file containing MySQL commands, you can run it using the following command from your shell:
```
mysql -u yourusername -p dbname < myscript.sql
```
The result of each command in that script will be displayed in the terminal. Please see Section [16\.3](ch-sql2.html#sec:toy-db) for an example of this process in action.
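If you prefer to stay in **R**, a similar effect can be approximated with **DBI** by running the statements in the file one at a time. This is only a sketch: it assumes the script contains simple semicolon\-terminated statements (no stored procedures or `DELIMITER` changes), and the helper `run_script()` is hypothetical rather than part of any package.
```
library(DBI)
# Hypothetical helper: split a simple .sql file on semicolons and execute
# each statement against an open DBI connection.
run_script <- function(con, path) {
  sql <- paste(readLines(path), collapse = "\n")
  statements <- trimws(strsplit(sql, ";", fixed = TRUE)[[1]])
  statements <- statements[nzchar(statements)]
  for (s in statements) dbExecute(con, s)
  invisible(length(statements))
}
```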
F.3 PostgreSQL
--------------
Setting up a PostgreSQL server is logically analogous to the procedure demonstrated above for MySQL. The default user in a PostgreSQL installation is `postgres` and the default password is either `postgres` or blank. Either way, you can log into the PostgreSQL command line client—which is called `psql`—using the `sudo` command in your shell.
```
sudo -u postgres psql
```
This means: “Launch the `psql` program as if I were the user `postgres`.” If this is successful, then you can create a new account for yourself from inside PostgreSQL. Here again, the procedure is similar to the one demonstrated above for MySQL in Section [F.2\.2](ch-db-setup.html#sec:mysql-access).
You can list all of the PostgreSQL users by typing at your `postgres` prompt:
```
\du
```
You can change the password for the `postgres` user:
```
ALTER USER postgres PASSWORD 'some_pass';
```
Create a new account for yourself:
```
CREATE USER yourusername SUPERUSER CREATEDB PASSWORD 'some_pass';
```
[Create a new database](http://www.postgresql.org/docs/current/static/sql-createdatabase.html) called `airlines`:
```
CREATE DATABASE airlines;
```
Quit the `psql` client by typing:
```
\q
```
Now that your user account is created, you can log out and back in with the shell command:
```
psql -U yourusername -W
```
If this doesn’t work, it is probably because the client authentication is set to `ident` instead of `md5`. Please see the documentation on [client authentication](http://www.postgresql.org/docs/9.5/static/client-authentication.html) for instructions on how to correct this on your installation, or simply continue to use the `sudo` method described above.
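Once the `airlines` database and your account exist, you can also connect from **R**; Section F.4 below covers such connections in more detail. The sketch here uses the **RPostgreSQL** driver with **DBI**, and the username and password are the placeholders created above.
```
library(DBI)
db <- dbConnect(
  RPostgreSQL::PostgreSQL(),
  dbname = "airlines", host = "localhost",
  user = "yourusername", password = "some_pass"
)
dbListTables(db)
```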
F.4 Connecting to SQL
---------------------
There are many different options for connecting to and retrieving data from an SQL server. In all cases, you will need to specify at least four pieces of information:
* `host`: the name of the SQL server. If you are running this server locally, that name is `localhost`
* `dbname`: the name of the database on that server to which you want to connect (e.g., `airlines`)
* `user`: your username on the SQL server
* `password`: your password on the SQL server
### F.4\.1 The command line client
From the command line, the syntax is:
```
mysql -u username -p -h localhost dbname
```
After entering your password, this will bring you to an interactive MySQL session, where you can bounce queries directly off of the server and see the results in your terminal. This is often useful for debugging because you can see the error messages directly, and you have the full suite of MySQL directives at your disposal. On the other hand, it is a fairly cumbersome route to database development, since you are limited to the text\-editing capabilities of the command line.
Command\-line access to PostgreSQL is provided via the `psql` program described above.
### F.4\.2 GUIs
The [*MySQL Workbench*](https://en.wikipedia.org/w/index.php?search=MySQL%20Workbench) is a graphical user interface (GUI) that can be useful for configuration and development. [This software is available](https://www.mysql.com/products/workbench) on Windows, Linux, and Mac OS X. The [analogous tool for PostgreSQL](http://www.pgadmin.org/) is [*pgAdmin*](https://en.wikipedia.org/w/index.php?search=pgAdmin), and it is similarly cross\-platform. [`sqlitebrowser`](http://sqlitebrowser.org/) is another cross\-platform GUI for SQLite databases.
These programs provide full\-featured access to the underlying database system, with many helpful and easy\-to\-learn drop\-down menus. We recommend developing queries and databases in these programs, especially when learning SQL.
### F.4\.3 R and RStudio
The downside to the previous approaches is that you don’t actually capture the data returned by your queries, so you can’t do anything with them. Using the GUIs, you can of course save the results of any query to a CSV. But a more elegant solution is to pull the data directly into **R**. This functionality is provided by the **RMySQL**, **RPostgreSQL**, and **RSQLite** packages. The **DBI** package provides a common interface to all three of the SQL back\-ends listed above, and the **dbplyr** package provides a slicker interface to **DBI**. A schematic of these dependencies is displayed in Figure [F.1](ch-db-setup.html#fig:sql-r).
We recommend using either the **dplyr** or the **DBI** interfaces whenever possible, since they are implementation agnostic.
Figure F.1: Schematic of SQL\-related **R** packages and their dependencies.
For most purposes (e.g., `SELECT` queries), the **dplyr** interface may be significantly more convenient. However, this construction is limited to `SELECT` queries: other SQL directives (e.g., `EXPLAIN`, `INSERT`, `UPDATE`, etc.) will not work with it and must be accessed using **DBI**.
In what follows, we illustrate how to connect to a MySQL backend using **dplyr** and **DBI**. However, the instructions for connecting to PostgreSQL and SQLite are analogous. First, you will need to load the relevant package.
```
library(RMySQL)
```
#### F.4\.3\.1 Using **dplyr** and `tbl()`
To set up a connection to a MySQL database using **dplyr**, we must specify the four parameters outlined above, and save the resulting object using the `dbConnect()` function.
```
library(dplyr)
db <- dbConnect(
RMySQL::MySQL(),
dbname = "airlines", host = "localhost",
user = "r-user", password = "mypass"
)
```
If you have a MySQL option file already set up (see Section [F.2\.2\.1](ch-db-setup.html#sec:mysql-option)), then you can alternatively connect using the `default.file` argument. This enables you to connect without having to type your password, or save it in plaintext in your **R** scripts.
```
db <- dbConnect(
RMySQL::MySQL(),
dbname = "airlines", host = "localhost",
default.file = "~/.my.cnf"
)
```
Next, we can retrieve data using the `tbl()` function and the `sql()` command.
```
res <- tbl(db, sql("SELECT faa, name FROM airports"))
res
```
```
# Source: SQL [?? x 2]
# Database: mysql 5.7.33-log
# [@mdsr.cdc7tgkkqd0n.us-east-1.rds.amazonaws.com:/airlines]
faa name
<chr> <chr>
1 04G Lansdowne Airport
2 06A Moton Field Municipal Airport
3 06C Schaumburg Regional
4 06N Randall Airport
5 09J Jekyll Island Airport
6 0A9 Elizabethton Municipal Airport
7 0G6 Williams County Airport
8 0G7 Finger Lakes Regional Airport
9 0P2 Shoestring Aviation Airfield
10 0S9 Jefferson County Intl
# … with more rows
```
Note that the resulting object has class `tbl_sql`.
```
class(res)
```
```
[1] "tbl_MySQLConnection" "tbl_dbi" "tbl_sql"
[4] "tbl_lazy" "tbl"
```
Note also that the derived table is described as having an unknown number of rows (indicated by `??`).
This is because **dplyr** is smart (and lazy) about evaluation.
It hasn’t actually pulled all of the data into **R**.
To force it to do so, use `collect()`.
```
collect(res)
```
```
# A tibble: 1,458 × 2
faa name
<chr> <chr>
1 04G Lansdowne Airport
2 06A Moton Field Municipal Airport
3 06C Schaumburg Regional
4 06N Randall Airport
5 09J Jekyll Island Airport
6 0A9 Elizabethton Municipal Airport
7 0G6 Williams County Airport
8 0G7 Finger Lakes Regional Airport
9 0P2 Shoestring Aviation Airfield
10 0S9 Jefferson County Intl
# … with 1,448 more rows
```
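Because evaluation is lazy, **dplyr** verbs applied to `res` are translated into SQL rather than executed in **R**. As a quick illustration (using the same connection), `show_query()` reveals the generated SQL before anything is collected; the `faa` code used in the filter is arbitrary.
```
res %>%
  filter(faa == "06C") %>%
  show_query()
```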
#### F.4\.3\.2 Writing SQL queries
In Section [F.4\.3\.1](ch-db-setup.html#sec:dplyr), we used the `tbl()` function to interact with a table stored in a SQL database. This enabled us to use **dplyr** functions directly, without having to write any SQL queries.
In this section, we use the `dbGetQuery()` function from the **DBI** package to send an SQL command to the server and retrieve the results.
```
dbGetQuery(db, "SELECT faa, name FROM airports LIMIT 0,5")
```
```
faa name
1 04G Lansdowne Airport
2 06A Moton Field Municipal Airport
3 06C Schaumburg Regional
4 06N Randall Airport
5 09J Jekyll Island Airport
```
Unlike the `tbl()` function from **dplyr**, `dbGetQuery()` can execute arbitrary SQL commands, not just `SELECT` statements. So we can also run `EXPLAIN`, `DESCRIBE`, and `SHOW` commands.
```
dbGetQuery(db, "EXPLAIN SELECT faa, name FROM airports")
```
```
id select_type table partitions type possible_keys key key_len ref
1 1 SIMPLE airports <NA> ALL <NA> <NA> <NA> <NA>
rows filtered Extra
1 1458 100 <NA>
```
```
dbGetQuery(db, "DESCRIBE airports")
```
```
Field Type Null Key Default Extra
1 faa varchar(3) NO PRI
2 name varchar(255) YES <NA>
3 lat decimal(10,7) YES <NA>
4 lon decimal(10,7) YES <NA>
5 alt int(11) YES <NA>
6 tz smallint(4) YES <NA>
7 dst char(1) YES <NA>
8 city varchar(255) YES <NA>
9 country varchar(255) YES <NA>
```
```
dbGetQuery(db, "SHOW DATABASES")
```
```
Database
1 information_schema
2 airlines
3 fec
4 imdb
5 lahman
6 nyctaxi
```
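For statements that modify the database rather than return rows (e.g., `CREATE`, `INSERT`, `UPDATE`), the companion **DBI** function `dbExecute()` is the natural tool; it returns the number of rows affected. The scratch table below is purely illustrative and assumes your account has the necessary privileges.
```
dbExecute(db, "CREATE TABLE IF NOT EXISTS scratch (id INT)")
dbExecute(db, "DROP TABLE IF EXISTS scratch")
```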
### F.4\.4 Load into SQLite database
A process similar to the one we exhibit in Section [16\.3](ch-sql2.html#sec:toy-db) can be used to create a SQLite database, although in this case it is not even necessary to specify the table schema in advance. Launch `sqlite3` from the command line using the shell command:
```
sqlite3
```
Create a new database called `babynames` in the current directory using the `.open` command:
```
.open babynamesdata.sqlite3
```
Next, set the `.mode` to `csv`, import the two tables, and exit.
```
.mode csv
.import babynames.csv babynames
.import births.csv births
.exit
```
This should result in an SQLite database file called `babynamesdata.sqlite3` existing in the current directory that contains two tables. We can connect to this database and query it using **dplyr**.
```
db <- dbConnect(RSQLite::SQLite(), "babynamesdata.sqlite3")
babynames <- tbl(db, "babynames")
babynames %>%
filter(name == "Benjamin")
```
```
# Source: lazy query [?? x 5]
# Database: sqlite 3.30.1
# [/home/bbaumer/Dropbox/git/mdsr2e/babynamesdata.sqlite3]
year sex name n prop
<chr> <chr> <chr> <chr> <chr>
1 1976 F Benjamin 53 3.37186805943904e-05
2 1976 M Benjamin 10680 0.0065391571834601
3 1977 F Benjamin 63 3.83028784917178e-05
4 1977 M Benjamin 12112 0.00708409319279004
5 1978 F Benjamin 73 4.44137806835342e-05
6 1978 M Benjamin 11411 0.00667764880752091
7 1979 F Benjamin 79 4.58511127310548e-05
8 1979 M Benjamin 12516 0.00698620342042644
9 1980 F Benjamin 80 4.49415983928884e-05
10 1980 M Benjamin 13630 0.00734980487697031
# … with more rows
```
Alternatively, the **RSQLite** package includes a vignette describing how to set up a database from within **R**.
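As a sketch of that alternative route (assuming the **babynames** package, which supplies the `babynames` and `births` data frames, is installed), `dbWriteTable()` creates each table and infers its schema automatically.
```
library(DBI)
db <- dbConnect(RSQLite::SQLite(), "babynamesdata.sqlite3")
dbWriteTable(db, "babynames", babynames::babynames, overwrite = TRUE)
dbWriteTable(db, "births", babynames::births, overwrite = TRUE)
dbDisconnect(db)
```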
| Data Science |
jrnold.github.io | https://jrnold.github.io/r4ds-exercise-solutions/introduction.html |
1 Introduction
==============
How this book is organized
--------------------------
The book is divided into sections with the same numbers and titles as those in *R for Data Science*.
Not all sections have exercises.
Those sections without exercises have placeholder text indicating that there are no exercises.
The text for each exercise is followed by the solution.
Like *R for Data Science*, packages used in each chapter are loaded in a code chunk at the start of the chapter in a section titled “Prerequisites”.
If exercises depend on code in a section of *R for Data Science* it is either provided before the exercises or within the exercise solution.
If a package is used infrequently in solutions, it may not be loaded; instead, its functions will be called with the package name followed by two colons, as in `dplyr::mutate()` (see the *R for Data Science* [Introduction](https://r4ds.had.co.nz/introduction.html#running-r-code)).
The double colon may also be used to be explicit about the package from which a function comes.
Prerequisites
-------------
This book is a complement to, not a substitute for, *R for Data Science*.
It provides only the exercise solutions.
See the [R for Data Science](https://r4ds.had.co.nz/introduction.html#prerequisites) prerequisites.
Additionally, the solutions use several packages that are not used in *R4DS*.
You can install all packages required to run the code in this book with the following line of code.
```
devtools::install_github("jrnold/r4ds-exercise-solutions")
```
Bugs/Contributing
-----------------
If you find any typos or errors in the solutions, have an alternative solution,
or think a solution could be improved, I would love your contributions.
The best way to contribute is through GitHub.
Please open an issue at [https://github.com/jrnold/r4ds\-exercise\-solutions/issues](https://github.com/jrnold/r4ds-exercise-solutions/issues) or a pull request at
[https://github.com/jrnold/r4ds\-exercise\-solutions/pulls](https://github.com/jrnold/r4ds-exercise-solutions/pulls).
Colophon
--------
HTML and PDF versions of this book are available at [https://jrnold.github.io/r4ds\-exercise\-solutions](https://jrnold.github.io/r4ds-exercise-solutions).
The book is powered by [bookdown](https://bookdown.org/home) which makes it easy to turn R markdown files into HTML, PDF, and EPUB.
The source of this book is available on GitHub at [https://github.com/jrnold/r4ds\-exercise\-solutions](https://github.com/jrnold/r4ds-exercise-solutions).
This book was built from commit [f0d0f0d](https://github.com/jrnold/r4ds-exercise-solutions/tree/f0d0f0de3c4e3c14bcb01ea700d45838200b95ad).
This book was built with these R packages.
```
devtools::session_info("r4ds.exercise.solutions")
#> ─ Session info ───────────────────────────────────────────────────────────────
#> setting value
#> version R version 4.0.0 (2020-04-24)
#> os Ubuntu 16.04.6 LTS
#> system x86_64, linux-gnu
#> ui X11
#> language en_US.UTF-8
#> collate en_US.UTF-8
#> ctype en_US.UTF-8
#> tz UTC
#> date 2020-07-19
#>
#> ─ Packages ───────────────────────────────────────────────────────────────────
#> ! package * version date lib source
#> R r4ds.exercise.solutions <NA> <NA> [?] <NA>
#>
#> [1] /home/travis/R/Library
#> [2] /usr/local/lib/R/site-library
#> [3] /home/travis/R-bin/lib/R/library
#>
#> R ── Package was removed from disk.
```
| Data Science |
jrnold.github.io | https://jrnold.github.io/r4ds-exercise-solutions/data-visualisation.html |
3 Data visualisation
====================
3\.1 Introduction
-----------------
```
library("tidyverse")
```
3\.2 First steps
----------------
### Exercise 3\.2\.1
Run `ggplot(data = mpg)` what do you see?
```
ggplot(data = mpg)
```
This code creates an empty plot.
The `ggplot()` function creates the background of the plot,
but since no layers were specified with a geom function, nothing is drawn.
### Exercise 3\.2\.2
How many rows are in `mpg`?
How many columns?
There are 234 rows and 11 columns in the `mpg` data frame.
```
nrow(mpg)
#> [1] 234
ncol(mpg)
#> [1] 11
```
The `glimpse()` function also displays the number of rows and columns in a data frame.
```
glimpse(mpg)
#> Rows: 234
#> Columns: 11
#> $ manufacturer <chr> "audi", "audi", "audi", "audi", "audi", "audi", "audi", …
#> $ model <chr> "a4", "a4", "a4", "a4", "a4", "a4", "a4", "a4 quattro", …
#> $ displ <dbl> 1.8, 1.8, 2.0, 2.0, 2.8, 2.8, 3.1, 1.8, 1.8, 2.0, 2.0, 2…
#> $ year <int> 1999, 1999, 2008, 2008, 1999, 1999, 2008, 1999, 1999, 20…
#> $ cyl <int> 4, 4, 4, 4, 6, 6, 6, 4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 8, 8,…
#> $ trans <chr> "auto(l5)", "manual(m5)", "manual(m6)", "auto(av)", "aut…
#> $ drv <chr> "f", "f", "f", "f", "f", "f", "f", "4", "4", "4", "4", "…
#> $ cty <int> 18, 21, 20, 21, 16, 18, 18, 18, 16, 20, 19, 15, 17, 17, …
#> $ hwy <int> 29, 29, 31, 30, 26, 26, 27, 26, 25, 28, 27, 25, 25, 25, …
#> $ fl <chr> "p", "p", "p", "p", "p", "p", "p", "p", "p", "p", "p", "…
#> $ class <chr> "compact", "compact", "compact", "compact", "compact", "…
```
### Exercise 3\.2\.3
What does the `drv` variable describe?
Read the help for `?mpg` to find out.
The `drv` variable is a categorical variable which categorizes cars into front\-wheels, rear\-wheels, or four\-wheel drive.[1](#fn1)
| Value | Description |
| --- | --- |
| `"f"` | [front\-wheel drive](https://en.wikipedia.org/wiki/Front-wheel_drive) |
| `"r"` | [rear\-wheel drive](https://en.wikipedia.org/wiki/Automobile_layout#Rear-wheel-drive_layouts) |
| `"4"` | [four\-wheel drive](https://en.wikipedia.org/wiki/Four-wheel_drive) |
### Exercise 3\.2\.4
Make a scatter plot of `hwy` vs. `cyl`.
```
ggplot(mpg, aes(x = cyl, y = hwy)) +
geom_point()
```
### Exercise 3\.2\.5
What happens if you make a scatter plot of `class` vs `drv`?
Why is the plot not useful?
The resulting scatterplot has only a few points.
```
ggplot(mpg, aes(x = class, y = drv)) +
geom_point()
```
A scatter plot is not a useful display of these variables since both `drv` and `class` are categorical variables.
Since categorical variables typically take a small number of values,
there are a limited number of unique combinations of (`x`, `y`) values that can be displayed.
In this data, `drv` takes 3 values and `class` takes 7 values,
meaning that there are only 21 values that could be plotted on a scatterplot of `drv` vs. `class`.
In this data, only 12 combinations of (`drv`, `class`) are observed.
```
count(mpg, drv, class)
#> # A tibble: 12 x 3
#> drv class n
#> <chr> <chr> <int>
#> 1 4 compact 12
#> 2 4 midsize 3
#> 3 4 pickup 33
#> 4 4 subcompact 4
#> 5 4 suv 51
#> 6 f compact 35
#> # … with 6 more rows
```
A simple scatter plot does not show how many observations there are for each (`x`, `y`) value.
As such, scatterplots work best for plotting a continuous x and a continuous y variable, and when all (`x`, `y`) values are unique.
**Warning:** The following code uses functions introduced in a later section.
Come back to this after reading
section [7\.5\.2](https://r4ds.had.co.nz/exploratory-data-analysis.html#two-categorical-variables), which introduces methods for plotting two categorical variables.
The first is `geom_count()` which is similar to a scatterplot but uses the size of the points to show the number of observations at an (`x`, `y`) point.
```
ggplot(mpg, aes(x = class, y = drv)) +
geom_count()
```
The second is `geom_tile()` which uses a color scale to show the number of observations with each (`x`, `y`) value.
```
mpg %>%
count(class, drv) %>%
ggplot(aes(x = class, y = drv)) +
geom_tile(mapping = aes(fill = n))
```
In the previous plot, there are many missing tiles.
These missing tiles represent unobserved combinations of `class` and `drv` values.
These missing values are not unknown, but represent values of (`class`, `drv`) where `n = 0`.
The `complete()` function in the tidyr package adds new rows to a data frame for missing combinations of columns.
The following code adds rows for missing combinations of `class` and `drv` and uses the `fill` argument to set `n = 0` for those new rows.
```
mpg %>%
count(class, drv) %>%
complete(class, drv, fill = list(n = 0)) %>%
ggplot(aes(x = class, y = drv)) +
geom_tile(mapping = aes(fill = n))
```
3\.3 Aesthetic mappings
-----------------------
### Exercise 3\.3\.1
What’s gone wrong with this code?
Why are the points not blue?
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, colour = "blue"))
```
The argument `colour = "blue"` is included within the `mapping` argument, and as such, it is treated as an aesthetic, which is a mapping between a variable and a visual property.
In the expression, `colour = "blue"`, `"blue"` is interpreted as a categorical variable which only takes a single value `"blue"`.
If this is confusing, consider how `colour = 1:234` and `colour = 1` are interpreted by `aes()`.
The following code produces the expected result.
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy), colour = "blue")
```
### Exercise 3\.3\.2
Which variables in `mpg` are categorical?
Which variables are continuous?
(Hint: type `?mpg` to read the documentation for the dataset).
How can you see this information when you run `mpg`?
The following list contains the categorical variables in `mpg`:
* `manufacturer`
* `model`
* `trans`
* `drv`
* `fl`
* `class`
The following list contains the continuous variables in `mpg`:
* `displ`
* `year`
* `cyl`
* `cty`
* `hwy`
In the printed data frame, angled brackets at the top of each column provide the type of each variable.
```
mpg
#> # A tibble: 234 x 11
#> manufacturer model displ year cyl trans drv cty hwy fl class
#> <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
#> 1 audi a4 1.8 1999 4 auto(l5) f 18 29 p compa…
#> 2 audi a4 1.8 1999 4 manual(m5) f 21 29 p compa…
#> 3 audi a4 2 2008 4 manual(m6) f 20 31 p compa…
#> 4 audi a4 2 2008 4 auto(av) f 21 30 p compa…
#> 5 audi a4 2.8 1999 6 auto(l5) f 16 26 p compa…
#> 6 audi a4 2.8 1999 6 manual(m5) f 18 26 p compa…
#> # … with 228 more rows
```
Those with `<chr>` above their columns are categorical, while those with `<dbl>` or `<int>` are continuous.
The exact meaning of these types will be discussed in [“Chapter 15: Vectors”](https://jrnold.github.io/r4ds-exercise-solutions/vectors.html).
`glimpse()` is another function that concisely displays the type of each column in the data frame:
```
glimpse(mpg)
#> Rows: 234
#> Columns: 11
#> $ manufacturer <chr> "audi", "audi", "audi", "audi", "audi", "audi", "audi", …
#> $ model <chr> "a4", "a4", "a4", "a4", "a4", "a4", "a4", "a4 quattro", …
#> $ displ <dbl> 1.8, 1.8, 2.0, 2.0, 2.8, 2.8, 3.1, 1.8, 1.8, 2.0, 2.0, 2…
#> $ year <int> 1999, 1999, 2008, 2008, 1999, 1999, 2008, 1999, 1999, 20…
#> $ cyl <int> 4, 4, 4, 4, 6, 6, 6, 4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 8, 8,…
#> $ trans <chr> "auto(l5)", "manual(m5)", "manual(m6)", "auto(av)", "aut…
#> $ drv <chr> "f", "f", "f", "f", "f", "f", "f", "4", "4", "4", "4", "…
#> $ cty <int> 18, 21, 20, 21, 16, 18, 18, 18, 16, 20, 19, 15, 17, 17, …
#> $ hwy <int> 29, 29, 31, 30, 26, 26, 27, 26, 25, 28, 27, 25, 25, 25, …
#> $ fl <chr> "p", "p", "p", "p", "p", "p", "p", "p", "p", "p", "p", "…
#> $ class <chr> "compact", "compact", "compact", "compact", "compact", "…
```
For those lists, I treated any non\-numeric variable as categorical and any numeric variable as continuous.
This largely corresponds to the heuristics `ggplot()` uses when interpreting variables as discrete or continuous.
However, this definition of continuous vs. categorical misses several important cases.
Of the numeric variables, `year` and `cyl` (cylinders) clearly take on discrete values.
The variables `cty` and `hwy` are stored as integers (`int`), so they only take on discrete values.
Even `displ`, which is stored as a double, takes on only a limited number of distinct values in this data.
In some sense, due to measurement and computational constraints, all numeric variables are discrete.
But unlike the categorical variables, it is possible to add and subtract these numeric variables in a meaningful way.
The typology of [levels of measurement](https://en.wikipedia.org/wiki/Level_of_measurement) is one such typology of data types.
In this case the R data types largely encode the semantics of the variables; e.g. integer variables are stored as integers, categorical variables with no order are stored as character vectors and so on.
However, that is not always the case.
Instead, the data could have stored the categorical `class` variable as an integer with values 1–7, where the documentation would note that 1 \= “compact”, 2 \= “midsize”, and so on.[2](#fn2)
Even though this integer vector could be added, multiplied, subtracted, and divided, those operations would be meaningless.
Fundamentally, categorizing variables as “discrete”, “continuous”, “ordinal”, “nominal”, “categorical”, etc. is about specifying what operations can be performed on the variables.
Discrete variables support counting and calculating the mode.
Variables with an ordering support sorting and calculating quantiles.
Variables that have an interval scale support addition and subtraction and operations such as taking the mean that rely on these primitives.
In this way, the typology of data or variable types is effectively a class system, something that is beyond the scope of R4DS but discussed in [*Advanced R*](http://adv-r.had.co.nz/OO-essentials.html#s3).
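As a small illustration of that point (a sketch; the integer coding used here is hypothetical), arithmetic on an integer\-coded categorical variable runs without complaint but produces a meaningless result, while a factor preserves the categorical semantics:
```
library(ggplot2) # for the mpg data

# hypothetical integer coding of class: 1-7 in alphabetical order
class_codes <- as.integer(factor(mpg$class))
mean(class_codes) # computes a number, but it has no interpretation

# a factor blocks the operation instead
mean(factor(mpg$class)) # returns NA with a warning
```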
### Exercise 3\.3\.3
Map a continuous variable to color, size, and shape.
How do these aesthetics behave differently for categorical vs. continuous variables?
The variable `cty`, city highway miles per gallon, is a continuous variable.
```
ggplot(mpg, aes(x = displ, y = hwy, colour = cty)) +
geom_point()
```
Instead of using discrete colors, the continuous variable uses a scale that varies from a light to dark blue color.
```
ggplot(mpg, aes(x = displ, y = hwy, size = cty)) +
geom_point()
```
When mapped to size, the sizes of the points vary continuously as a function of the value of `cty`.
```
ggplot(mpg, aes(x = displ, y = hwy, shape = cty)) +
geom_point()
#> Error: A continuous variable can not be mapped to shape
```
When a continuous value is mapped to shape, it gives an error.
Though we could split a continuous variable into discrete categories and use a shape aesthetic, this would conceptually not make sense.
A numeric variable has an order, but shapes do not.
It is clear that smaller points correspond to smaller values, or once the color scale is given, which colors correspond to larger or smaller values. But it is not clear whether a square is greater or less than a circle.
### Exercise 3\.3\.4
What happens if you map the same variable to multiple aesthetics?
```
ggplot(mpg, aes(x = displ, y = hwy, colour = hwy, size = displ)) +
geom_point()
```
In the above plot, `hwy` is mapped to both location on the y\-axis and color, and `displ` is mapped to both location on the x\-axis and size.
The code works and produces a plot, even if it is a bad one.
Mapping a single variable to multiple aesthetics is redundant.
Because it is redundant information, in most cases avoid mapping a single variable to multiple aesthetics.
### Exercise 3\.3\.5
What does the stroke aesthetic do?
What shapes does it work with?
(Hint: use `?geom_point`)
Stroke changes the size of the border for shapes (21\-25\).
These are filled shapes in which the color and size of the border can differ from that of the filled interior of the shape.
For example
```
ggplot(mtcars, aes(wt, mpg)) +
geom_point(shape = 21, colour = "black", fill = "white", size = 5, stroke = 5)
```
### Exercise 3\.3\.6
What happens if you map an aesthetic to something other than a variable name, like `aes(colour = displ < 5)`?
```
ggplot(mpg, aes(x = displ, y = hwy, colour = displ < 5)) +
geom_point()
```
Aesthetics can also be mapped to expressions like `displ < 5`.
The `ggplot()` function behaves as if a temporary variable was added to the data with values equal to the result of the expression.
In this case, the result of `displ < 5` is a logical variable which takes values of `TRUE` or `FALSE`.
This also explains why, in [Exercise 3\.3\.1](data-visualisation.html#exercise-3.3.1), the expression `colour = "blue"` created a categorical variable with only one category: “blue”.
3\.4 Common problems
--------------------
No exercises
3\.5 Facets
-----------
### Exercise 3\.5\.1
What happens if you facet on a continuous variable?
Let’s see.
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
facet_grid(. ~ cty)
```
The continuous variable is converted to a categorical variable, and the plot contains a facet for each distinct value.
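If faceting on a continuous variable is really needed, one workaround (not part of the original exercise) is to bin it first, for example with `cut_width()`:
```
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  facet_wrap(~cut_width(cty, width = 5))
```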
### Exercise 3\.5\.2
What do the empty cells in plot with `facet_grid(drv ~ cyl)` mean?
How do they relate to this plot?
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = drv, y = cyl))
```
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = hwy, y = cty)) +
facet_grid(drv ~ cyl)
```
The empty cells (facets) in this plot are combinations of `drv` and `cyl` that have no observations.
These are the same locations in the scatter plot of `drv` and `cyl` that have no points.
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = drv, y = cyl))
```
### Exercise 3\.5\.3
What plots does the following code make?
What does `.` do?
The symbol `.` ignores that dimension when faceting.
For example, `drv ~ .` facets by values of `drv` on the y\-axis.
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy)) +
facet_grid(drv ~ .)
```
While, `. ~ cyl` will facet by values of `cyl` on the x\-axis.
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy)) +
facet_grid(. ~ cyl)
```
### Exercise 3\.5\.4
Take the first faceted plot in this section:
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy)) +
facet_wrap(~class, nrow = 2)
```
What are the advantages to using faceting instead of the colour aesthetic?
What are the disadvantages?
How might the balance change if you had a larger dataset?
In the following plot the `class` variable is mapped to color.
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, color = class))
```
Advantages of encoding `class` with facets instead of color include the ability to encode more distinct categories.
For me, it is difficult to distinguish between the colors of `"midsize"` and `"minivan"`.
Given human visual perception, the max number of colors to use when encoding
unordered categorical (qualitative) data is nine, and in practice, often much less than that.
Displaying observations from different categories on different scales makes it difficult to directly compare values of observations across categories.
However, it can make it easier to compare the shape of the relationship between the x and y variables across categories.
Disadvantages of encoding the `class` variable with facets instead of the color aesthetic include the difficulty of comparing the values of observations between categories since the observations for each category are on different plots.
Using the same x\- and y\-scales for all facets makes it easier to compare values of observations across categories, but it is still more difficult than if they had been displayed on the same plot.
Since encoding class within color also places all points on the same plot,
it visualizes the unconditional relationship between the x and y variables;
with facets, the unconditional relationship is no longer visualized since the
points are spread across multiple plots.
The benefit of encoding a variable with faceting over encoding it with color increases with both the number of points and the number of categories.
With a large number of points, there is often overlap.
It is difficult to handle overlapping points with different colors.
Jittering will still work with color, but it will only work well if there are few points and the classes do not overlap much; otherwise, the colored regions will no longer be distinct, and it will be hard to pick out the patterns of different categories visually.
Transparency (`alpha`) does not work well with colors since the mixing of overlapping transparent colors will no longer represent the colors of the categories.
Binning methods already use color to encode the density of points in the bin, so color cannot be used to encode categories.
As the number of categories increases, the difference between
colors decreases, to the point that the color of categories will no longer be
visually distinct.
### Exercise 3\.5\.5
Read `?facet_wrap`.
What does `nrow` do? What does `ncol` do?
What other options control the layout of the individual panels?
Why doesn’t `facet_grid()` have `nrow` and `ncol` variables?
The arguments `nrow` (`ncol`) determine the number of rows (columns) to use when laying out the facets.
They are needed because `facet_wrap()` only facets on one variable.
Other arguments that control the layout of the individual panels include `dir`, which sets whether panels are filled by row or by column, and `as.table`, which controls whether facets are laid out like a table (highest values at the bottom\-right) or like a plot (highest values at the top\-right).
The `nrow` and `ncol` arguments are unnecessary for `facet_grid()` since the number of unique values of the variables specified in the function determines the number of rows and columns.
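For example (a small illustration beyond what the exercise requires), the same faceted plot can be laid out with either four rows or four columns:
```
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  facet_wrap(~class, nrow = 4)

ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  facet_wrap(~class, ncol = 4)
```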
### Exercise 3\.5\.6
When using `facet_grid()` you should usually put the variable with more unique levels in the columns.
Why?
There will be more space for columns if the plot is laid out horizontally (landscape).
3\.6 Geometric objects
----------------------
### Exercise 3\.6\.1
What geom would you use to draw a line chart?
A boxplot?
A histogram?
An area chart?
* line chart: `geom_line()`
* boxplot: `geom_boxplot()`
* histogram: `geom_histogram()`
* area chart: `geom_area()`
### Exercise 3\.6\.2
Run this code in your head and predict what the output will look like.
Then, run the code in R and check your predictions.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, colour = drv)) +
geom_point() +
geom_smooth(se = FALSE)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
This code produces a scatter plot with `displ` on the x\-axis, `hwy` on the y\-axis, and the points colored by `drv`.
There will be a smooth line, without standard errors, fit through each `drv` group.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, colour = drv)) +
geom_point() +
geom_smooth(se = FALSE)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
### Exercise 3\.6\.3
What does `show.legend = FALSE` do?
What happens if you remove it?
Why do you think I used it earlier in the chapter?
The layer argument `show.legend = FALSE` hides the legend box for that layer.
Consider this example earlier in the chapter.
```
ggplot(data = mpg) +
geom_smooth(
mapping = aes(x = displ, y = hwy, colour = drv),
show.legend = FALSE
)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
In that plot, there is no legend.
Removing the `show.legend` argument or setting `show.legend = TRUE` will result in the plot having a legend displaying the mapping between colors and `drv`.
```
ggplot(data = mpg) +
geom_smooth(mapping = aes(x = displ, y = hwy, colour = drv))
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
In the chapter, the legend is suppressed because with three plots,
adding a legend to only the last plot would make the sizes of plots different.
Different sized plots would make it more difficult to see how arguments change the appearance of the plots.
The purpose of those plots is to show the difference between no groups, using a `group` aesthetic, and using a `color` aesthetic, which creates implicit groups.
In that example, the legend isn’t necessary since looking up the values associated with each color isn’t necessary to make that point.
### Exercise 3\.6\.4
What does the `se` argument to `geom_smooth()` do?
It adds standard error bands to the lines.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, colour = drv)) +
geom_point() +
geom_smooth(se = TRUE)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
By default `se = TRUE`:
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, colour = drv)) +
geom_point() +
geom_smooth()
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
### Exercise 3\.6\.5
Will these two graphs look different?
Why/why not?
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
geom_smooth()
ggplot() +
geom_point(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_smooth(data = mpg, mapping = aes(x = displ, y = hwy))
```
No. Because both `geom_point()` and `geom_smooth()` will use the same data and mappings.
They will inherit those options from the `ggplot()` object, so the mappings don't need to be specified again.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
geom_smooth()
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
```
ggplot() +
geom_point(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_smooth(data = mpg, mapping = aes(x = displ, y = hwy))
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
### Exercise 3\.6\.6
Recreate the R code necessary to generate the following graphs.
The following code will generate those plots.
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
geom_smooth(se = FALSE)
```
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_smooth(mapping = aes(group = drv), se = FALSE) +
geom_point()
```
```
ggplot(mpg, aes(x = displ, y = hwy, colour = drv)) +
geom_point() +
geom_smooth(se = FALSE)
```
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point(aes(colour = drv)) +
geom_smooth(se = FALSE)
```
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point(aes(colour = drv)) +
geom_smooth(aes(linetype = drv), se = FALSE)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point(size = 4, color = "white") +
geom_point(aes(colour = drv))
```
3\.7 Statistical transformations
--------------------------------
### Exercise 3\.7\.1
What is the default geom associated with `stat_summary()`?
How could you rewrite the previous plot to use that geom function instead of the stat function?
The “previous plot” referred to in the question is the following.
```
ggplot(data = diamonds) +
stat_summary(
mapping = aes(x = cut, y = depth),
fun.min = min,
fun.max = max,
fun = median
)
```
The arguments `fun.ymin`, `fun.ymax`, and `fun.y` have been deprecated and replaced with `fun.min`, `fun.max`, and `fun` in ggplot2 v 3\.3\.0\.
The default geom for [`stat_summary()`](https://ggplot2.tidyverse.org/reference/stat_summary.html) is `geom_pointrange()`.
The default stat for [`geom_pointrange()`](https://ggplot2.tidyverse.org/reference/geom_linerange.html) is `identity()` but we can add the argument `stat = "summary"` to use `stat_summary()` instead of `stat_identity()`.
```
ggplot(data = diamonds) +
geom_pointrange(
mapping = aes(x = cut, y = depth),
stat = "summary"
)
#> No summary function supplied, defaulting to `mean_se()`
```
The resulting message says that `stat_summary()` uses the `mean` and `sd` to calculate the middle point and endpoints of the line.
However, in the original plot the min and max values were used for the endpoints.
To recreate the original plot we need to specify values for `fun.min`, `fun.max`, and `fun`.
```
ggplot(data = diamonds) +
geom_pointrange(
mapping = aes(x = cut, y = depth),
stat = "summary",
fun.min = min,
fun.max = max,
fun = median
)
```
### Exercise 3\.7\.2
What does `geom_col()` do? How is it different to `geom_bar()`?
The `geom_col()` function has a different default stat than `geom_bar()`.
The default stat of `geom_col()` is `stat_identity()`, which leaves the data as is.
The `geom_col()` function expects that the data contains `x` values and `y` values which represent the bar height.
The default stat of `geom_bar()` is `stat_count()`.
The `geom_bar()` function only expects an `x` variable.
The stat, `stat_count()`, preprocesses input data by counting the number of observations for each value of `x`.
The `y` aesthetic uses the values of these counts.
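A short sketch contrasting the two defaults (the pre\-counted data frame here is just for illustration):
```
# geom_bar() counts the rows for each value of cut itself
ggplot(diamonds, aes(x = cut)) +
  geom_bar()

# geom_col() expects the bar heights to be supplied as a y variable
diamonds %>%
  count(cut) %>%
  ggplot(aes(x = cut, y = n)) +
  geom_col()
```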
### Exercise 3\.7\.3
Most geoms and stats come in pairs that are almost always used in concert.
Read through the documentation and make a list of all the pairs.
What do they have in common?
The following tables lists the pairs of geoms and stats that are almost always used in concert.
Complementary geoms and stats
| geom | stat |
| --- | --- |
| `geom_bar()` | `stat_count()` |
| `geom_bin2d()` | `stat_bin_2d()` |
| `geom_boxplot()` | `stat_boxplot()` |
| `geom_contour_filled()` | `stat_contour_filled()` |
| `geom_contour()` | `stat_contour()` |
| `geom_count()` | `stat_sum()` |
| `geom_density_2d()` | `stat_density_2d()` |
| `geom_density()` | `stat_density()` |
| `geom_dotplot()` | `stat_bindot()` |
| `geom_function()` | `stat_function()` |
| `geom_sf()` | `stat_sf()` |
| `geom_smooth()` | `stat_smooth()` |
| `geom_violin()` | `stat_ydensity()` |
| `geom_hex()` | `stat_bin_hex()` |
| `geom_qq_line()` | `stat_qq_line()` |
| `geom_qq()` | `stat_qq()` |
| `geom_quantile()` | `stat_quantile()` |
These pairs of geoms and stats tend to have their names in common, such as `stat_smooth()` and `geom_smooth()`, and to be documented on the same help page.
The pairs of geoms and stats that are used in concert often have each other as the default stat (for a geom) or geom (for a stat).
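One way to check these defaults programmatically is to inspect the ggproto objects stored in a layer (a sketch that relies on internal class names, so treat the exact output as an assumption):
```
class(geom_bar()$stat) # includes "StatCount"
class(geom_smooth()$stat) # includes "StatSmooth"
class(stat_count()$geom) # includes "GeomBar"
```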
The following tables contain the geoms and stats in [ggplot2](https://ggplot2.tidyverse.org/reference/) and their defaults as of version 3\.3\.0\.
Many geoms have `stat_identity()` as the default stat.
ggplot2 geom layers and their default stats.
| geom | default stat | shared docs |
| --- | --- | --- |
| `geom_abline()` | `stat_identity()` | |
| `geom_area()` | `stat_identity()` | |
| `geom_bar()` | `stat_count()` | x |
| `geom_bin2d()` | `stat_bin_2d()` | x |
| `geom_blank()` | None | |
| `geom_boxplot()` | `stat_boxplot()` | x |
| `geom_col()` | `stat_identity()` | |
| `geom_count()` | `stat_sum()` | x |
| `geom_contour_filled()` | `stat_contour_filled()` | x |
| `geom_contour()` | `stat_contour()` | x |
| `geom_crossbar()` | `stat_identity()` | |
| `geom_curve()` | `stat_identity()` | |
| `geom_density_2d_filled()` | `stat_density_2d_filled()` | x |
| `geom_density_2d()` | `stat_density_2d()` | x |
| `geom_density()` | `stat_density()` | x |
| `geom_dotplot()` | `stat_bindot()` | x |
| `geom_errorbar()` | `stat_identity()` | |
| `geom_errorbarh()` | `stat_identity()` | |
| `geom_freqpoly()` | `stat_bin()` | x |
| `geom_function()` | `stat_function()` | x |
| `geom_hex()` | `stat_bin_hex()` | x |
| `geom_histogram()` | `stat_bin()` | x |
| `geom_hline()` | `stat_identity()` | |
| `geom_jitter()` | `stat_identity()` | |
| `geom_label()` | `stat_identity()` | |
| `geom_line()` | `stat_identity()` | |
| `geom_linerange()` | `stat_identity()` | |
| `geom_map()` | `stat_identity()` | |
| `geom_path()` | `stat_identity()` | |
| `geom_point()` | `stat_identity()` | |
| `geom_pointrange()` | `stat_identity()` | |
| `geom_polygon()` | `stat_identity()` | |
| `geom_qq_line()` | `stat_qq_line()` | x |
| `geom_qq()` | `stat_qq()` | x |
| `geom_quantile()` | `stat_quantile()` | x |
| `geom_raster()` | `stat_identity()` | |
| `geom_rect()` | `stat_identity()` | |
| `geom_ribbon()` | `stat_identity()` | |
| `geom_rug()` | `stat_identity()` | |
| `geom_segment()` | `stat_identity()` | |
| `geom_sf_label()` | `stat_sf_coordinates()` | x |
| `geom_sf_text()` | `stat_sf_coordinates()` | x |
| `geom_sf()` | `stat_sf()` | x |
| `geom_smooth()` | `stat_smooth()` | x |
| `geom_spoke()` | `stat_identity()` | |
| `geom_step()` | `stat_identity()` | |
| `geom_text()` | `stat_identity()` | |
| `geom_tile()` | `stat_identity()` | |
| `geom_violin()` | `stat_ydensity()` | x |
| `geom_vline()` | `stat_identity()` | |
ggplot2 stat layers and their default geoms.
| stat | default geom | shared docs |
| --- | --- | --- |
| `stat_bin_2d()` | `geom_tile()` | |
| `stat_bin_hex()` | `geom_hex()` | x |
| `stat_bin()` | `geom_bar()` | x |
| `stat_boxplot()` | `geom_boxplot()` | x |
| `stat_count()` | `geom_bar()` | x |
| `stat_contour_filled()` | `geom_contour_filled()` | x |
| `stat_contour()` | `geom_contour()` | x |
| `stat_density_2d_filled()` | `geom_density_2d()` | x |
| `stat_density_2d()` | `geom_density_2d()` | x |
| `stat_density()` | `geom_area()` | |
| `stat_ecdf()` | `geom_step()` | |
| `stat_ellipse()` | `geom_path()` | |
| `stat_function()` | `geom_function()` | x |
| `stat_function()` | `geom_path()` | |
| `stat_identity()` | `geom_point()` | |
| `stat_qq_line()` | `geom_path()` | |
| `stat_qq()` | `geom_point()` | |
| `stat_quantile()` | `geom_quantile()` | x |
| `stat_sf_coordinates()` | `geom_point()` | |
| `stat_sf()` | `geom_rect()` | |
| `stat_smooth()` | `geom_smooth()` | x |
| `stat_sum()` | `geom_point()` | |
| `stat_summary_2d()` | `geom_tile()` | |
| `stat_summary_bin()` | `geom_pointrange()` | |
| `stat_summary_hex()` | `geom_hex()` | |
| `stat_summary()` | `geom_pointrange()` | |
| `stat_unique()` | `geom_point()` | |
| `stat_ydensity()` | `geom_violin()` | x |
### Exercise 3\.7\.4
What variables does `stat_smooth()` compute?
What parameters control its behavior?
The function `stat_smooth()` calculates the following variables:
* `y`: predicted value
* `ymin`: lower value of the confidence interval
* `ymax`: upper value of the confidence interval
* `se`: standard error
The “Computed Variables” section of the `stat_smooth()` documentation contains these variables.
The parameters that control the behavior of `stat_smooth()` include:
* `method`: This is the method used to compute the smoothing line.
If `NULL`, a default method is used based on the sample size: `stats::loess()` when there are fewer than 1,000 observations in a group, and `mgcv::gam()` with `formula = y ~ s(x, bs = "cs")` otherwise.
Alternatively, the user can provide a character vector with a function name, e.g. `"lm"`, `"loess"`, or a function,
e.g. `MASS::rlm`.
* `formula`: When providing a custom `method` argument, the formula to use. The default is `y ~ x`. For example, to use the line implied by `lm(y ~ x + I(x ^ 2) + I(x ^ 3))`, use `method = "lm"` or `method = lm` and `formula = y ~ x + I(x ^ 2) + I(x ^ 3)`.
* `method.args`: A list of additional arguments, other than the formula (which is already specified in the `formula` argument), to pass to the function named in `method`.
* `se`: If `TRUE`, display standard error bands, if `FALSE` only display the line.
* `na.rm`: If `FALSE`, missing values are removed with a warning; if `TRUE`, they are silently removed.
The default is `FALSE` in order to make debugging easier.
If missing values are known to be in the data, the warning can be ignored, but if missing values are not anticipated, this warning can help catch errors.
**TODO:** Plots with examples illustrating the uses of these arguments.
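A brief sketch illustrating `method`, `formula`, and `se` together (the cubic formula is only an example):
```
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  geom_smooth(
    method = "lm",
    formula = y ~ x + I(x^2) + I(x^3),
    se = FALSE
  )
```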
### Exercise 3\.7\.5
In our proportion bar chart, we need to set `group = 1` Why?
In other words, what is the problem with these two graphs?
If `group = 1` is not included, then all the bars in the plot will have the same height, a height of 1\.
The function `geom_bar()` assumes that the groups are equal to the `x` values since the stat computes the counts within the group.
```
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut, y = ..prop..))
```
The problem with these two plots is that the proportions are calculated within the groups.
```
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut, y = ..prop..))
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut, fill = color, y = ..prop..))
```
The following code will produce the intended bar chart for the case with no `fill` aesthetic.
```
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut, y = ..prop.., group = 1))
```
With the `fill` aesthetic, the heights of the bars need to be normalized.
```
ggplot(data = diamonds) +
geom_bar(aes(x = cut, y = ..count.. / sum(..count..), fill = color))
```
3\.8 Position adjustments
-------------------------
### Exercise 3\.8\.1
What is the problem with this plot?
How could you improve it?
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_point()
```
There is overplotting because there are multiple observations for each combination of `cty` and `hwy` values.
I would improve the plot by using a jitter position adjustment to decrease overplotting.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_point(position = "jitter")
```
The relationship between `cty` and `hwy` is clear even without jittering the points
but jittering shows the locations where there are more observations.
### Exercise 3\.8\.2
What parameters to `geom_jitter()` control the amount of jittering?
From the [`geom_jitter()`](https://ggplot2.tidyverse.org/reference/geom_jitter.html) documentation, there are two arguments to jitter:
* `width` controls the amount of horizontal displacement, and
* `height` controls the amount of vertical displacement.
The defaults values of `width` and `height` will introduce noise in both directions.
Here is what the plot looks like with the default values of `height` and `width`.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_point(position = position_jitter())
```
However, we can change these parameters.
Here are few a examples to understand how these parameters affect the amount of jittering.
When `width = 0`, there is no horizontal jitter.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter(width = 0)
```
When `width = 20`, there is too much horizontal jitter.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter(width = 20)
```
When `height = 0`, there is no vertical jitter.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter(height = 0)
```
When `height = 15`, there is too much vertical jitter.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter(height = 15)
```
When `width = 0` and `height = 0`, there is neither horizontal or vertical jitter,
and the plot produced is identical to the one produced with `geom_point()`.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter(height = 0, width = 0)
```
Note that the `height` and `width` arguments are in the units of the data.
Thus `height = 1` (`width = 1`) corresponds to different relative amounts of jittering depending on the scale of the `y` (`x`) variable.
The default values of `height` and `width` are defined to be 80% of the `resolution()` of the data, which is the smallest non\-zero distance between adjacent values of a variable.
When `x` and `y` are discrete variables,
their resolutions are both equal to 1, and `height = 0.4` and `width = 0.4` since the jitter moves points in both positive and negative directions.
The default values of `height` and `width` in `geom_jitter()` are non\-zero, so unless both `height` and `width` are explicitly set to 0, there will be some jitter.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter()
```
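As a quick check on the claim about resolutions (not part of the exercise), `resolution()` reports the smallest non\-zero gap between adjacent values of a variable:
```
resolution(mpg$cty)
#> [1] 1
resolution(mpg$hwy)
#> [1] 1
```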
### Exercise 3\.8\.3
Compare and contrast `geom_jitter()` with `geom_count()`.
The geom `geom_jitter()` adds random variation to the locations of the points on the graph.
In other words, it “jitters” the locations of points slightly.
This method reduces overplotting since two points with the same location are unlikely to have the same random variation.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter()
```
However, the reduction in overlapping comes at the cost of slightly changing the `x` and `y` values of the points.
The geom `geom_count()` sizes the points relative to the number of observations.
Combinations of (`x`, `y`) values with more observations will be larger than those with fewer observations.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_count()
```
The `geom_count()` geom does not change `x` and `y` coordinates of the points.
However, if the points are close together and counts are large, the size of some
points can itself create overplotting.
For example, in the following example, a third variable mapped to color is added to the plot. In this case, `geom_count()` is less readable than `geom_jitter()` when adding a third variable as a color aesthetic.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy, color = class)) +
geom_jitter()
```
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy, color = class)) +
geom_count()
```
Combining `geom_count()` with jitter, which is specified with the `position` argument to `geom_count()` rather than its own geom, helps overplotting a little.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy, color = class)) +
geom_count(position = "jitter")
```
But as this example shows, unfortunately, there is no universal solution to overplotting.
The costs and benefits of different approaches will depend on the structure of the data and the goal of the data scientist.
### Exercise 3\.8\.4
What’s the default position adjustment for `geom_boxplot()`?
Create a visualization of the `mpg` dataset that demonstrates it.
The default position for `geom_boxplot()` is `"dodge2"`, which is a shortcut for `position_dodge2`.
This position adjustment does not change the vertical position of a geom but moves the geom horizontally to avoid overlapping other geoms.
See the documentation for [`position_dodge2()`](https://ggplot2.tidyverse.org/reference/position_dodge.html) for additional discussion on how it works.
When we add `colour = class` to the box plot, the boxplots for the different levels of `class` are placed side by side within each value of `drv`, i.e., dodged.
```
ggplot(data = mpg, aes(x = drv, y = hwy, colour = class)) +
geom_boxplot()
```
If `position_identity()` is used the boxplots overlap.
```
ggplot(data = mpg, aes(x = drv, y = hwy, colour = class)) +
geom_boxplot(position = "identity")
```
3\.9 Coordinate systems
-----------------------
### Exercise 3\.9\.1
Turn a stacked bar chart into a pie chart using `coord_polar()`.
A pie chart is a stacked bar chart with the addition of polar coordinates.
Take this stacked bar chart with a single category.
```
ggplot(mpg, aes(x = factor(1), fill = drv)) +
geom_bar()
```
Now add `coord_polar(theta = "y")` to create a pie chart.
```
ggplot(mpg, aes(x = factor(1), fill = drv)) +
geom_bar(width = 1) +
coord_polar(theta = "y")
```
The argument `theta = "y"` maps `y` to the angle of each section.
If `coord_polar()` is specified without `theta = "y"`, then the resulting plot is called a bulls\-eye chart.
```
ggplot(mpg, aes(x = factor(1), fill = drv)) +
geom_bar(width = 1) +
coord_polar()
```
### Exercise 3\.9\.2
What does `labs()` do?
Read the documentation.
The `labs()` function adds axis titles, plot titles, and a caption to the plot.
```
ggplot(data = mpg, mapping = aes(x = class, y = hwy)) +
geom_boxplot() +
coord_flip() +
labs(y = "Highway MPG",
x = "Class",
title = "Highway MPG by car class",
subtitle = "1999-2008",
caption = "Source: http://fueleconomy.gov")
```
The arguments to `labs()` are optional, so you can add as many or as few of these as are needed.
```
ggplot(data = mpg, mapping = aes(x = class, y = hwy)) +
geom_boxplot() +
coord_flip() +
labs(y = "Highway MPG",
x = "Class",
title = "Highway MPG by car class")
```
The `labs()` function is not the only function that adds titles to plots.
The `xlab()`, `ylab()`, and x\- and y\-scale functions can add axis titles.
The `ggtitle()` function adds plot titles.
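For example, the labels above could equivalently be added with those helpers:
```
ggplot(data = mpg, mapping = aes(x = class, y = hwy)) +
  geom_boxplot() +
  coord_flip() +
  xlab("Class") +
  ylab("Highway MPG") +
  ggtitle("Highway MPG by car class")
```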
### Exercise 3\.9\.3
What’s the difference between `coord_quickmap()` and `coord_map()`?
The `coord_map()` function uses map projections to project the three\-dimensional Earth onto a two\-dimensional plane.
By default, `coord_map()` uses the [Mercator projection](https://en.wikipedia.org/wiki/Mercator_projection).
This projection is applied to all the geoms in the plot.
The `coord_quickmap()` function uses an approximate but faster map projection.
This approximation ignores the curvature of Earth and adjusts the map for the latitude/longitude ratio.
The `coord_quickmap()` project is faster than `coord_map()` both because the projection is computationally easier, and unlike `coord_map()`, the coordinates of the individual geoms do not need to be transformed.
See the [coord\_map()](https://ggplot2.tidyverse.org/reference/coord_map.html) documentation for more information on these functions and some examples.
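A small sketch comparing the two projections (this assumes the **maps** package is installed, since `map_data()` draws on it):
```
nz <- map_data("nz")

ggplot(nz, aes(x = long, y = lat, group = group)) +
  geom_polygon(fill = "white", colour = "black") +
  coord_quickmap()

ggplot(nz, aes(x = long, y = lat, group = group)) +
  geom_polygon(fill = "white", colour = "black") +
  coord_map()
```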
### Exercise 3\.9\.4
What does the plot below tell you about the relationship between city and highway mpg?
Why is `coord_fixed()` important?
What does `geom_abline()` do?
The function `coord_fixed()` ensures that the line produced by `geom_abline()` is at a 45\-degree angle.
A 45\-degree line makes it easy to compare the highway and city mileage to the case in which city and highway MPG were equal.
```
p <- ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_point() +
geom_abline()
p + coord_fixed()
```
If we didn’t include `coord_fixed()`, then the line would no longer have an angle of 45 degrees.
```
p
```
On average, humans are best able to perceive differences in angles relative to 45 degrees.
See Cleveland ([1993](#ref-Cleveland1993)[b](#ref-Cleveland1993)), Cleveland ([1994](#ref-Cleveland1994)),Cleveland ([1993](#ref-Cleveland1993a)[a](#ref-Cleveland1993a)), Cleveland, McGill, and McGill ([1988](#ref-ClevelandMcGillMcGill1988)), Heer and Agrawala ([2006](#ref-HeerAgrawala2006)) for discussion on how the aspect ratio of a plot affects perception of the values it encodes, evidence that 45\-degrees is generally the optimal aspect ratio, and methods to calculate the optimal aspect ratio of a plot.
The function `ggthemes::bank_slopes()` will calculate the optimal aspect ratio to bank slopes to 45\-degrees.
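A sketch of how `bank_slopes()` might be used, under the assumption that the value it returns is the aspect ratio expected by `coord_fixed()` (the **ggthemes** package must be installed):
```
asp <- ggthemes::bank_slopes(mpg$cty, mpg$hwy)
ggplot(mpg, aes(x = cty, y = hwy)) +
  geom_point() +
  geom_abline() +
  coord_fixed(ratio = asp)
```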
3\.10 The layered grammar of graphics
-------------------------------------
No exercises
3\.1 Introduction
-----------------
```
library("tidyverse")
```
3\.2 First steps
----------------
### Exercise 3\.2\.1
Run `ggplot(data = mpg)` what do you see?
```
ggplot(data = mpg)
```
This code creates an empty plot.
The `ggplot()` function creates the background of the plot,
but since no layers were specified with geom function, nothing is drawn.
### Exercise 3\.2\.2
How many rows are in `mpg`?
How many columns?
There are 234 rows and 11 columns in the `mpg` data frame.
```
nrow(mpg)
#> [1] 234
ncol(mpg)
#> [1] 11
```
The `glimpse()` function also displays the number of rows and columns in a data frame.
```
glimpse(mpg)
#> Rows: 234
#> Columns: 11
#> $ manufacturer <chr> "audi", "audi", "audi", "audi", "audi", "audi", "audi", …
#> $ model <chr> "a4", "a4", "a4", "a4", "a4", "a4", "a4", "a4 quattro", …
#> $ displ <dbl> 1.8, 1.8, 2.0, 2.0, 2.8, 2.8, 3.1, 1.8, 1.8, 2.0, 2.0, 2…
#> $ year <int> 1999, 1999, 2008, 2008, 1999, 1999, 2008, 1999, 1999, 20…
#> $ cyl <int> 4, 4, 4, 4, 6, 6, 6, 4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 8, 8,…
#> $ trans <chr> "auto(l5)", "manual(m5)", "manual(m6)", "auto(av)", "aut…
#> $ drv <chr> "f", "f", "f", "f", "f", "f", "f", "4", "4", "4", "4", "…
#> $ cty <int> 18, 21, 20, 21, 16, 18, 18, 18, 16, 20, 19, 15, 17, 17, …
#> $ hwy <int> 29, 29, 31, 30, 26, 26, 27, 26, 25, 28, 27, 25, 25, 25, …
#> $ fl <chr> "p", "p", "p", "p", "p", "p", "p", "p", "p", "p", "p", "…
#> $ class <chr> "compact", "compact", "compact", "compact", "compact", "…
```
### Exercise 3\.2\.3
What does the `drv` variable describe?
Read the help for `?mpg` to find out.
The `drv` variable is a categorical variable which categorizes cars into front\-wheels, rear\-wheels, or four\-wheel drive.[1](#fn1)
| Value | Description |
| --- | --- |
| `"f"` | [front\-wheel drive](https://en.wikipedia.org/wiki/Front-wheel_drive) |
| `"r"` | [rear\-wheel drive](https://en.wikipedia.org/wiki/Automobile_layout#Rear-wheel-drive_layouts) |
| `"4"` | [four\-wheel drive](https://en.wikipedia.org/wiki/Four-wheel_drive) |
### Exercise 3\.2\.4
Make a scatter plot of `hwy` vs. `cyl`.
```
ggplot(mpg, aes(x = cyl, y = hwy)) +
geom_point()
```
### Exercise 3\.2\.5
What happens if you make a scatter plot of `class` vs `drv`?
Why is the plot not useful?
The resulting scatterplot has only a few points.
```
ggplot(mpg, aes(x = class, y = drv)) +
geom_point()
```
A scatter plot is not a useful display of these variables since both `drv` and `class` are categorical variables.
Since categorical variables typically take a small number of values,
there are a limited number of unique combinations of (`x`, `y`) values that can be displayed.
In this data, `drv` takes 3 values and `class` takes 7 values,
meaning that there are only 21 values that could be plotted on a scatterplot of `drv` vs. `class`.
In this data, there 12 values of (`drv`, `class`) are observed.
```
count(mpg, drv, class)
#> # A tibble: 12 x 3
#> drv class n
#> <chr> <chr> <int>
#> 1 4 compact 12
#> 2 4 midsize 3
#> 3 4 pickup 33
#> 4 4 subcompact 4
#> 5 4 suv 51
#> 6 f compact 35
#> # … with 6 more rows
```
A simple scatter plot does not show how many observations there are for each (`x`, `y`) value.
As such, scatterplots work best for plotting a continuous x and a continuous y variable, and when all (`x`, `y`) values are unique.
**Warning:** The following code uses functions introduced in a later section.
Come back to this after reading
section [7\.5\.2](https://r4ds.had.co.nz/exploratory-data-analysis.html#two-categorical-variables), which introduces methods for plotting two categorical variables.
The first is `geom_count()` which is similar to a scatterplot but uses the size of the points to show the number of observations at an (`x`, `y`) point.
```
ggplot(mpg, aes(x = class, y = drv)) +
geom_count()
```
The second is `geom_tile()` which uses a color scale to show the number of observations with each (`x`, `y`) value.
```
mpg %>%
count(class, drv) %>%
ggplot(aes(x = class, y = drv)) +
geom_tile(mapping = aes(fill = n))
```
In the previous plot, there are many missing tiles.
These missing tiles represent unobserved combinations of `class` and `drv` values.
These missing values are not unknown, but represent values of (`class`, `drv`) where `n = 0`.
The `complete()` function in the tidyr package adds new rows to a data frame for missing combinations of columns.
The following code adds rows for missing combinations of `class` and `drv` and uses the `fill` argument to set `n = 0` for those new rows.
```
mpg %>%
count(class, drv) %>%
complete(class, drv, fill = list(n = 0)) %>%
ggplot(aes(x = class, y = drv)) +
geom_tile(mapping = aes(fill = n))
```
### Exercise 3\.2\.1
Run `ggplot(data = mpg)` what do you see?
```
ggplot(data = mpg)
```
This code creates an empty plot.
The `ggplot()` function creates the background of the plot,
but since no layers were specified with geom function, nothing is drawn.
### Exercise 3\.2\.2
How many rows are in `mpg`?
How many columns?
There are 234 rows and 11 columns in the `mpg` data frame.
```
nrow(mpg)
#> [1] 234
ncol(mpg)
#> [1] 11
```
The `glimpse()` function also displays the number of rows and columns in a data frame.
```
glimpse(mpg)
#> Rows: 234
#> Columns: 11
#> $ manufacturer <chr> "audi", "audi", "audi", "audi", "audi", "audi", "audi", …
#> $ model <chr> "a4", "a4", "a4", "a4", "a4", "a4", "a4", "a4 quattro", …
#> $ displ <dbl> 1.8, 1.8, 2.0, 2.0, 2.8, 2.8, 3.1, 1.8, 1.8, 2.0, 2.0, 2…
#> $ year <int> 1999, 1999, 2008, 2008, 1999, 1999, 2008, 1999, 1999, 20…
#> $ cyl <int> 4, 4, 4, 4, 6, 6, 6, 4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 8, 8,…
#> $ trans <chr> "auto(l5)", "manual(m5)", "manual(m6)", "auto(av)", "aut…
#> $ drv <chr> "f", "f", "f", "f", "f", "f", "f", "4", "4", "4", "4", "…
#> $ cty <int> 18, 21, 20, 21, 16, 18, 18, 18, 16, 20, 19, 15, 17, 17, …
#> $ hwy <int> 29, 29, 31, 30, 26, 26, 27, 26, 25, 28, 27, 25, 25, 25, …
#> $ fl <chr> "p", "p", "p", "p", "p", "p", "p", "p", "p", "p", "p", "…
#> $ class <chr> "compact", "compact", "compact", "compact", "compact", "…
```
### Exercise 3\.2\.3
What does the `drv` variable describe?
Read the help for `?mpg` to find out.
The `drv` variable is a categorical variable which categorizes cars into front\-wheels, rear\-wheels, or four\-wheel drive.[1](#fn1)
| Value | Description |
| --- | --- |
| `"f"` | [front\-wheel drive](https://en.wikipedia.org/wiki/Front-wheel_drive) |
| `"r"` | [rear\-wheel drive](https://en.wikipedia.org/wiki/Automobile_layout#Rear-wheel-drive_layouts) |
| `"4"` | [four\-wheel drive](https://en.wikipedia.org/wiki/Four-wheel_drive) |
### Exercise 3\.2\.4
Make a scatter plot of `hwy` vs. `cyl`.
```
ggplot(mpg, aes(x = cyl, y = hwy)) +
geom_point()
```
### Exercise 3\.2\.5
What happens if you make a scatter plot of `class` vs `drv`?
Why is the plot not useful?
The resulting scatterplot has only a few points.
```
ggplot(mpg, aes(x = class, y = drv)) +
geom_point()
```
A scatter plot is not a useful display of these variables since both `drv` and `class` are categorical variables.
Since categorical variables typically take a small number of values,
there are a limited number of unique combinations of (`x`, `y`) values that can be displayed.
In this data, `drv` takes 3 values and `class` takes 7 values,
meaning that there are only 21 values that could be plotted on a scatterplot of `drv` vs. `class`.
In this data, there 12 values of (`drv`, `class`) are observed.
```
count(mpg, drv, class)
#> # A tibble: 12 x 3
#> drv class n
#> <chr> <chr> <int>
#> 1 4 compact 12
#> 2 4 midsize 3
#> 3 4 pickup 33
#> 4 4 subcompact 4
#> 5 4 suv 51
#> 6 f compact 35
#> # … with 6 more rows
```
A simple scatter plot does not show how many observations there are for each (`x`, `y`) value.
As such, scatterplots work best for plotting a continuous x and a continuous y variable, and when all (`x`, `y`) values are unique.
**Warning:** The following code uses functions introduced in a later section.
Come back to this after reading
section [7\.5\.2](https://r4ds.had.co.nz/exploratory-data-analysis.html#two-categorical-variables), which introduces methods for plotting two categorical variables.
The first is `geom_count()` which is similar to a scatterplot but uses the size of the points to show the number of observations at an (`x`, `y`) point.
```
ggplot(mpg, aes(x = class, y = drv)) +
geom_count()
```
The second is `geom_tile()` which uses a color scale to show the number of observations with each (`x`, `y`) value.
```
mpg %>%
count(class, drv) %>%
ggplot(aes(x = class, y = drv)) +
geom_tile(mapping = aes(fill = n))
```
In the previous plot, there are many missing tiles.
These missing tiles represent unobserved combinations of `class` and `drv` values.
These missing values are not unknown, but represent values of (`class`, `drv`) where `n = 0`.
The `complete()` function in the tidyr package adds new rows to a data frame for missing combinations of columns.
The following code adds rows for missing combinations of `class` and `drv` and uses the `fill` argument to set `n = 0` for those new rows.
```
mpg %>%
count(class, drv) %>%
complete(class, drv, fill = list(n = 0)) %>%
ggplot(aes(x = class, y = drv)) +
geom_tile(mapping = aes(fill = n))
```
3\.3 Aesthetic mappings
-----------------------
### Exercise 3\.3\.1
What’s gone wrong with this code?
Why are the points not blue?
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, colour = "blue"))
```
The argument`colour = "blue"` is included within the `mapping` argument, and as such, it is treated as an aesthetic, which is a mapping between a variable and a value.
In the expression, `colour = "blue"`, `"blue"` is interpreted as a categorical variable which only takes a single value `"blue"`.
If this is confusing, consider how `colour = 1:234` and `colour = 1` are interpreted by `aes()`.
The following code does produces the expected result.
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy), colour = "blue")
```
### Exercise 3\.3\.2
Which variables in `mpg` are categorical?
Which variables are continuous?
(Hint: type `?mpg` to read the documentation for the dataset).
How can you see this information when you run `mpg`?
The following list contains the categorical variables in `mpg`:
* `manufacturer`
* `model`
* `trans`
* `drv`
* `fl`
* `class`
The following list contains the continuous variables in `mpg`:
* `displ`
* `year`
* `cyl`
* `cty`
* `hwy`
In the printed data frame, the angle brackets at the top of each column give the type of each variable.
```
mpg
#> # A tibble: 234 x 11
#> manufacturer model displ year cyl trans drv cty hwy fl class
#> <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
#> 1 audi a4 1.8 1999 4 auto(l5) f 18 29 p compa…
#> 2 audi a4 1.8 1999 4 manual(m5) f 21 29 p compa…
#> 3 audi a4 2 2008 4 manual(m6) f 20 31 p compa…
#> 4 audi a4 2 2008 4 auto(av) f 21 30 p compa…
#> 5 audi a4 2.8 1999 6 auto(l5) f 16 26 p compa…
#> 6 audi a4 2.8 1999 6 manual(m5) f 18 26 p compa…
#> # … with 228 more rows
```
Those with `<chr>` above their columns are categorical, while those with `<dbl>` or `<int>` are continuous.
The exact meaning of these types will be discussed in [“Chapter 15: Vectors”](https://jrnold.github.io/r4ds-exercise-solutions/vectors.html).
`glimpse()` is another function that concisely displays the type of each column in the data frame:
```
glimpse(mpg)
#> Rows: 234
#> Columns: 11
#> $ manufacturer <chr> "audi", "audi", "audi", "audi", "audi", "audi", "audi", …
#> $ model <chr> "a4", "a4", "a4", "a4", "a4", "a4", "a4", "a4 quattro", …
#> $ displ <dbl> 1.8, 1.8, 2.0, 2.0, 2.8, 2.8, 3.1, 1.8, 1.8, 2.0, 2.0, 2…
#> $ year <int> 1999, 1999, 2008, 2008, 1999, 1999, 2008, 1999, 1999, 20…
#> $ cyl <int> 4, 4, 4, 4, 6, 6, 6, 4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 8, 8,…
#> $ trans <chr> "auto(l5)", "manual(m5)", "manual(m6)", "auto(av)", "aut…
#> $ drv <chr> "f", "f", "f", "f", "f", "f", "f", "4", "4", "4", "4", "…
#> $ cty <int> 18, 21, 20, 21, 16, 18, 18, 18, 16, 20, 19, 15, 17, 17, …
#> $ hwy <int> 29, 29, 31, 30, 26, 26, 27, 26, 25, 28, 27, 25, 25, 25, …
#> $ fl <chr> "p", "p", "p", "p", "p", "p", "p", "p", "p", "p", "p", "…
#> $ class <chr> "compact", "compact", "compact", "compact", "compact", "…
```
For those lists, I treated any non-numeric variable as categorical and any numeric variable as continuous.
This largely corresponds to the heuristics `ggplot()` uses when interpreting variables as discrete or continuous.
However, this definition of continuous vs. categorical misses several important cases.
Of the numeric variables, `year` and `cyl` (cylinders) clearly take on discrete values.
The variables `cty` and `hwy` are stored as integers (`int`), so they only take on discrete values.
Even though `displ` is stored as a double, it takes on only a limited set of measured values in this data.
In some sense, due to measurement and computational constraints, all numeric variables are discrete.
But unlike the categorical variables, it is possible to add and subtract these numeric variables in a meaningful way.
The [levels of measurement](https://en.wikipedia.org/wiki/Level_of_measurement) framework is one useful typology of data types.
In this case the R data types largely encode the semantics of the variables; e.g. integer variables are stored as integers, categorical variables with no order are stored as character vectors and so on.
However, that is not always the case.
Instead, the data could have stored the categorical `class` variable as an integer with values 1–7, where the documentation would note that 1 \= “compact”, 2 \= “midsize”, and so on.[2](#fn2)
Even though this integer vector could be added, multiplied, subtracted, and divided, those operations would be meaningless.
Fundamentally, categorizing variables as “discrete”, “continuous”, “ordinal”, “nominal”, “categorical”, etc. is about specifying what operations can be performed on the variables.
Discrete variables support counting and calculating the mode.
Variables with an ordering support sorting and calculating quantiles.
Variables that have an interval scale support addition and subtraction and operations such as taking the mean that rely on these primitives.
In this way, the type of a data variable acts as a kind of class system for the information it holds, something that is beyond the scope of R4DS but discussed in [*Advanced R*](http://adv-r.had.co.nz/OO-essentials.html#s3).
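As a small illustration of why arithmetic on coded categories is meaningless, here is a sketch in which `class` is (hypothetically) recoded as an integer:
```
library(ggplot2) # for the mpg data

# Hypothetical integer coding of the categorical variable `class`
class_codes <- as.integer(factor(mpg$class))
head(class_codes)
# The mean is computable, but an "average class" has no meaningful interpretation
mean(class_codes)
```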
### Exercise 3\.3\.3
Map a continuous variable to color, size, and shape.
How do these aesthetics behave differently for categorical vs. continuous variables?
The variable `cty`, city miles per gallon, is a continuous variable.
```
ggplot(mpg, aes(x = displ, y = hwy, colour = cty)) +
geom_point()
```
Instead of using discrete colors, the continuous variable uses a scale that varies from a light to dark blue color.
```
ggplot(mpg, aes(x = displ, y = hwy, size = cty)) +
geom_point()
```
When `cty` is mapped to size, the sizes of the points vary continuously as a function of the value of `cty`.
```
ggplot(mpg, aes(x = displ, y = hwy, shape = cty)) +
geom_point()
#> Error: A continuous variable can not be mapped to shape
```
When a continuous value is mapped to shape, it gives an error.
Though we could split a continuous variable into discrete categories and use a shape aesthetic, this would conceptually not make sense.
A numeric variable has an order, but shapes do not.
It is clear that smaller points correspond to smaller values, or once the color scale is given, which colors correspond to larger or smaller values. But it is not clear whether a square is greater or less than a circle.
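If shapes were really wanted, a sketch of the binning approach might use `cut_number()` to split `cty` into a handful of intervals first, although, as noted above, the shapes themselves carry no ordering:
```
library(ggplot2)

# Bin the continuous variable into 4 groups before mapping it to shape
ggplot(mpg, aes(x = displ, y = hwy, shape = cut_number(cty, 4))) +
  geom_point()
```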
### Exercise 3\.3\.4
What happens if you map the same variable to multiple aesthetics?
```
ggplot(mpg, aes(x = displ, y = hwy, colour = hwy, size = displ)) +
geom_point()
```
In the above plot, `hwy` is mapped to both location on the y\-axis and color, and `displ` is mapped to both location on the x\-axis and size.
The code works and produces a plot, even if it is a bad one.
Mapping a single variable to multiple aesthetics is redundant.
Because it is redundant information, in most cases avoid mapping a single variable to multiple aesthetics.
### Exercise 3\.3\.5
What does the stroke aesthetic do?
What shapes does it work with?
(Hint: use `?geom_point`)
Stroke changes the size of the border for shapes (21\-25\).
These are filled shapes in which the color and size of the border can differ from that of the filled interior of the shape.
For example
```
ggplot(mtcars, aes(wt, mpg)) +
geom_point(shape = 21, colour = "black", fill = "white", size = 5, stroke = 5)
```
### Exercise 3\.3\.6
What happens if you map an aesthetic to something other than a variable name, like `aes(colour = displ < 5)`?
```
ggplot(mpg, aes(x = displ, y = hwy, colour = displ < 5)) +
geom_point()
```
Aesthetics can also be mapped to expressions like `displ < 5`.
The `ggplot()` function behaves as if a temporary variable was added to the data with values equal to the result of the expression.
In this case, the result of `displ < 5` is a logical variable which takes values of `TRUE` or `FALSE`.
This also explains why, in [Exercise 3\.3\.1](data-visualisation.html#exercise-3.3.1), the expression `colour = "blue"` created a categorical variable with only one category: “blue”.
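A sketch of that "temporary variable" idea, made explicit with dplyr's `mutate()`, produces the same plot:
```
library(dplyr)
library(ggplot2)

# Adding the logical variable by hand mirrors what aes(colour = displ < 5) does
mpg %>%
  mutate(displ_lt_5 = displ < 5) %>%
  ggplot(aes(x = displ, y = hwy, colour = displ_lt_5)) +
  geom_point()
```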
3\.4 Common problems
--------------------
No exercises
3\.5 Facets
-----------
### Exercise 3\.5\.1
What happens if you facet on a continuous variable?
Let’s see.
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
facet_grid(. ~ cty)
```
The continuous variable is converted to a categorical variable, and the plot contains a facet for each distinct value.
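Since faceting on every distinct value of a continuous variable produces many sparse panels, a sketch of a usually more useful alternative is to bin the variable first, for example with `cut_number()`:
```
library(ggplot2)

# Facet on a binned version of the continuous variable instead
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  facet_wrap(~ cut_number(cty, 4))
```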
### Exercise 3\.5\.2
What do the empty cells in plot with `facet_grid(drv ~ cyl)` mean?
How do they relate to this plot?
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = drv, y = cyl))
```
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = hwy, y = cty)) +
facet_grid(drv ~ cyl)
```
The empty cells (facets) in this plot are combinations of `drv` and `cyl` that have no observations.
These are the same locations in the scatter plot of `drv` and `cyl` that have no points.
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = drv, y = cyl))
```
### Exercise 3\.5\.3
What plots does the following code make?
What does `.` do?
The symbol `.` ignores that dimension when faceting.
For example, `drv ~ .` facets by the values of `drv` on the y\-axis.
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy)) +
facet_grid(drv ~ .)
```
Meanwhile, `. ~ cyl` will facet by the values of `cyl` on the x\-axis.
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy)) +
facet_grid(. ~ cyl)
```
### Exercise 3\.5\.4
Take the first faceted plot in this section:
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy)) +
facet_wrap(~class, nrow = 2)
```
What are the advantages to using faceting instead of the colour aesthetic?
What are the disadvantages?
How might the balance change if you had a larger dataset?
In the following plot the `class` variable is mapped to color.
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, color = class))
```
Advantages of encoding `class` with facets instead of color include the ability to encode more distinct categories.
For me, it is difficult to distinguish between the colors of `"midsize"` and `"minivan"`.
Given human visual perception, the max number of colors to use when encoding
unordered categorical (qualitative) data is nine, and in practice, often much less than that.
Displaying observations from different categories on different scales makes it difficult to directly compare values of observations across categories.
However, it can make it easier to compare the shape of the relationship between the x and y variables across categories.
Disadvantages of encoding the `class` variable with facets instead of the color aesthetic include the difficulty of comparing the values of observations between categories since the observations for each category are on different plots.
Using the same x\- and y\-scales for all facets makes it easier to compare values of observations across categories, but it is still more difficult than if they had been displayed on the same plot.
Since encoding class within color also places all points on the same plot,
it visualizes the unconditional relationship between the x and y variables;
with facets, the unconditional relationship is no longer visualized since the
points are spread across multiple plots.
The benefits of encoding a variable with faceting rather than with color increase with both the number of points and the number of categories.
With a large number of points, there is often overlap.
It is difficult to handle overlapping points with different colors.
Jittering will still work with color.
But jittering only works well if there are few points and the classes do not overlap much; otherwise, the colors of the areas will no longer be distinct, and it will be hard to pick out the patterns of different categories visually.
Transparency (`alpha`) does not work well with colors since the mixing of overlapping transparent colors will no longer represent the colors of the categories.
Binning methods already use color to encode the density of points in the bin, so color cannot be used to encode categories.
As the number of categories increases, the difference between
colors decreases, to the point that the color of categories will no longer be
visually distinct.
### Exercise 3\.5\.5
Read `?facet_wrap`.
What does `nrow` do? What does `ncol` do?
What other options control the layout of the individual panels?
Why doesn’t `facet_grid()` have `nrow` and `ncol` variables?
The argument `nrow` (`ncol`) determines the number of rows (columns) to use when laying out the facets.
This is needed because `facet_wrap()` facets on only one variable, so the panel layout is otherwise unspecified; other arguments such as `dir` and `as.table` also control how the panels are laid out.
The `nrow` and `ncol` arguments are unnecessary for `facet_grid()` since the number of unique values of the variables specified in the function determines the number of rows and columns.
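For example, a sketch of the same faceted plot laid out with different numbers of rows and columns:
```
library(ggplot2)

# Two rows of panels
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  facet_wrap(~ class, nrow = 2)

# Four columns of panels
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  facet_wrap(~ class, ncol = 4)
```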
### Exercise 3\.5\.6
When using `facet_grid()` you should usually put the variable with more unique levels in the columns.
Why?
Plots are usually wider than they are tall (landscape), so there is more horizontal space; putting the variable with more levels in the columns makes better use of it.
3\.6 Geometric objects
----------------------
### Exercise 3\.6\.1
What geom would you use to draw a line chart?
A boxplot?
A histogram?
An area chart?
* line chart: `geom_line()`
* boxplot: `geom_boxplot()`
* histogram: `geom_histogram()`
* area chart: `geom_area()`
### Exercise 3\.6\.2
Run this code in your head and predict what the output will look like.
Then, run the code in R and check your predictions.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, colour = drv)) +
geom_point() +
geom_smooth(se = FALSE)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
This code produces a scatter plot with `displ` on the x\-axis, `hwy` on the y\-axis, and the points colored by `drv`.
There will be a smooth line, without standard errors, fit through each `drv` group.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, colour = drv)) +
geom_point() +
geom_smooth(se = FALSE)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
### Exercise 3\.6\.3
What does `show.legend = FALSE` do?
What happens if you remove it?
Why do you think I used it earlier in the chapter?
The layer argument `show.legend = FALSE` hides the legend for that layer.
Consider this example earlier in the chapter.
```
ggplot(data = mpg) +
geom_smooth(
mapping = aes(x = displ, y = hwy, colour = drv),
show.legend = FALSE
)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
In that plot, there is no legend.
Removing the `show.legend` argument or setting `show.legend = TRUE` will result in the plot having a legend displaying the mapping between colors and `drv`.
```
ggplot(data = mpg) +
geom_smooth(mapping = aes(x = displ, y = hwy, colour = drv))
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
In the chapter, the legend is suppressed because with three plots,
adding a legend to only the last plot would make the sizes of plots different.
Different sized plots would make it more difficult to see how arguments change the appearance of the plots.
The purpose of those plots is to show the difference between no groups, using a `group` aesthetic, and using a `color` aesthetic, which creates implicit groups.
In that example, the legend isn’t necessary since looking up the values associated with each color isn’t necessary to make that point.
### Exercise 3\.6\.4
What does the `se` argument to `geom_smooth()` do?
It adds standard error bands to the lines.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, colour = drv)) +
geom_point() +
geom_smooth(se = TRUE)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
By default `se = TRUE`:
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy, colour = drv)) +
geom_point() +
geom_smooth()
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
### Exercise 3\.6\.5
Will these two graphs look different?
Why/why not?
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
geom_smooth()
ggplot() +
geom_point(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_smooth(data = mpg, mapping = aes(x = displ, y = hwy))
```
No, because both `geom_point()` and `geom_smooth()` will use the same data and mappings.
They will inherit those options from the `ggplot()` object, so the mappings don’t need to be specified again.
```
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_point() +
geom_smooth()
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
```
ggplot() +
geom_point(data = mpg, mapping = aes(x = displ, y = hwy)) +
geom_smooth(data = mpg, mapping = aes(x = displ, y = hwy))
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
### Exercise 3\.6\.6
Recreate the R code necessary to generate the following graphs.
The following code will generate those plots.
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
geom_smooth(se = FALSE)
```
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_smooth(mapping = aes(group = drv), se = FALSE) +
geom_point()
```
```
ggplot(mpg, aes(x = displ, y = hwy, colour = drv)) +
geom_point() +
geom_smooth(se = FALSE)
```
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point(aes(colour = drv)) +
geom_smooth(se = FALSE)
```
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point(aes(colour = drv)) +
geom_smooth(aes(linetype = drv), se = FALSE)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point(size = 4, color = "white") +
geom_point(aes(colour = drv))
```
3\.7 Statistical transformations
--------------------------------
### Exercise 3\.7\.1
What is the default geom associated with `stat_summary()`?
How could you rewrite the previous plot to use that geom function instead of the stat function?
The “previous plot” referred to in the question is the following.
```
ggplot(data = diamonds) +
stat_summary(
mapping = aes(x = cut, y = depth),
fun.min = min,
fun.max = max,
fun = median
)
```
The arguments `fun.ymin`, `fun.ymax`, and `fun.y` have been deprecated and replaced with `fun.min`, `fun.max`, and `fun` in ggplot2 v 3\.3\.0\.
The default geom for [`stat_summary()`](https://ggplot2.tidyverse.org/reference/stat_summary.html) is `geom_pointrange()`.
The default stat for [`geom_pointrange()`](https://ggplot2.tidyverse.org/reference/geom_linerange.html) is `identity()` but we can add the argument `stat = "summary"` to use `stat_summary()` instead of `stat_identity()`.
```
ggplot(data = diamonds) +
geom_pointrange(
mapping = aes(x = cut, y = depth),
stat = "summary"
)
#> No summary function supplied, defaulting to `mean_se()`
```
The resulting message says that `stat_summary()` defaults to `mean_se()`, which uses the mean and standard error to calculate the middle point and endpoints of the line.
However, in the original plot the min and max values were used for the endpoints.
To recreate the original plot we need to specify values for `fun.min`, `fun.max`, and `fun`.
```
ggplot(data = diamonds) +
geom_pointrange(
mapping = aes(x = cut, y = depth),
stat = "summary",
fun.min = min,
fun.max = max,
fun = median
)
```
### Exercise 3\.7\.2
What does `geom_col()` do? How is it different to `geom_bar()`?
The `geom_col()` function has a different default stat than `geom_bar()`.
The default stat of `geom_col()` is `stat_identity()`, which leaves the data as is.
The `geom_col()` function expects that the data contains `x` values and `y` values which represent the bar height.
The default stat of `geom_bar()` is `stat_count()`.
The `geom_bar()` function only expects an `x` variable.
The stat, `stat_count()`, preprocesses input data by counting the number of observations for each value of `x`.
The `y` aesthetic uses the values of these counts.
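A sketch of the difference: the two plots below draw the same bars, once by letting `geom_bar()` count the rows in each class and once by pre-computing the counts with dplyr's `count()` and passing them to `geom_col()`:
```
library(dplyr)
library(ggplot2)

# geom_bar() counts the rows in each class itself
ggplot(mpg) +
  geom_bar(aes(x = class))

# geom_col() expects the bar heights (here, the counts) to be supplied as y
mpg %>%
  count(class) %>%
  ggplot() +
  geom_col(aes(x = class, y = n))
```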
### Exercise 3\.7\.3
Most geoms and stats come in pairs that are almost always used in concert.
Read through the documentation and make a list of all the pairs.
What do they have in common?
The following table lists the pairs of geoms and stats that are almost always used in concert.
Complementary geoms and stats
| geom | stat |
| --- | --- |
| `geom_bar()` | `stat_count()` |
| `geom_bin2d()` | `stat_bin_2d()` |
| `geom_boxplot()` | `stat_boxplot()` |
| `geom_contour_filled()` | `stat_contour_filled()` |
| `geom_contour()` | `stat_contour()` |
| `geom_count()` | `stat_sum()` |
| `geom_density_2d()` | `stat_density_2d()` |
| `geom_density()` | `stat_density()` |
| `geom_dotplot()` | `stat_bindot()` |
| `geom_function()` | `stat_function()` |
| `geom_sf()` | `stat_sf()` |
| `geom_smooth()` | `stat_smooth()` |
| `geom_violin()` | `stat_ydensity()` |
| `geom_hex()` | `stat_bin_hex()` |
| `geom_qq_line()` | `stat_qq_line()` |
| `geom_qq()` | `stat_qq()` |
| `geom_quantile()` | `stat_quantile()` |
These pairs of geoms and stats tend to have their names in common, such as `stat_smooth()` and `geom_smooth()`, and to be documented on the same help page.
The pairs of geoms and stats that are used in concert often have each other as the default stat (for a geom) or geom (for a stat).
The following tables contain the geoms and stats in [ggplot2](https://ggplot2.tidyverse.org/reference/) and their defaults as of version 3\.3\.0\.
Many geoms have `stat_identity()` as the default stat.
ggplot2 geom layers and their default stats.
| geom | default stat | shared docs |
| --- | --- | --- |
| `geom_abline()` | `stat_identity()` | |
| `geom_area()` | `stat_identity()` | |
| `geom_bar()` | `stat_count()` | x |
| `geom_bin2d()` | `stat_bin_2d()` | x |
| `geom_blank()` | None | |
| `geom_boxplot()` | `stat_boxplot()` | x |
| `geom_col()` | `stat_identity()` | |
| `geom_count()` | `stat_sum()` | x |
| `geom_contour_filled()` | `stat_contour_filled()` | x |
| `geom_contour()` | `stat_contour()` | x |
| `geom_crossbar()` | `stat_identity()` | |
| `geom_curve()` | `stat_identity()` | |
| `geom_density_2d_filled()` | `stat_density_2d_filled()` | x |
| `geom_density_2d()` | `stat_density_2d()` | x |
| `geom_density()` | `stat_density()` | x |
| `geom_dotplot()` | `stat_bindot()` | x |
| `geom_errorbar()` | `stat_identity()` | |
| `geom_errorbarh()` | `stat_identity()` | |
| `geom_freqpoly()` | `stat_bin()` | x |
| `geom_function()` | `stat_function()` | x |
| `geom_hex()` | `stat_bin_hex()` | x |
| `geom_histogram()` | `stat_bin()` | x |
| `geom_hline()` | `stat_identity()` | |
| `geom_jitter()` | `stat_identity()` | |
| `geom_label()` | `stat_identity()` | |
| `geom_line()` | `stat_identity()` | |
| `geom_linerange()` | `stat_identity()` | |
| `geom_map()` | `stat_identity()` | |
| `geom_path()` | `stat_identity()` | |
| `geom_point()` | `stat_identity()` | |
| `geom_pointrange()` | `stat_identity()` | |
| `geom_polygon()` | `stat_identity()` | |
| `geom_qq_line()` | `stat_qq_line()` | x |
| `geom_qq()` | `stat_qq()` | x |
| `geom_quantile()` | `stat_quantile()` | x |
| `geom_raster()` | `stat_identity()` | |
| `geom_rect()` | `stat_identity()` | |
| `geom_ribbon()` | `stat_identity()` | |
| `geom_rug()` | `stat_identity()` | |
| `geom_segment()` | `stat_identity()` | |
| `geom_sf_label()` | `stat_sf_coordinates()` | x |
| `geom_sf_text()` | `stat_sf_coordinates()` | x |
| `geom_sf()` | `stat_sf()` | x |
| `geom_smooth()` | `stat_smooth()` | x |
| `geom_spoke()` | `stat_identity()` | |
| `geom_step()` | `stat_identity()` | |
| `geom_text()` | `stat_identity()` | |
| `geom_tile()` | `stat_identity()` | |
| `geom_violin()` | `stat_ydensity()` | x |
| `geom_vline()` | `stat_identity()` | |
ggplot2 stat layers and their default geoms.
| stat | default geom | shared docs |
| --- | --- | --- |
| `stat_bin_2d()` | `geom_tile()` | |
| `stat_bin_hex()` | `geom_hex()` | x |
| `stat_bin()` | `geom_bar()` | x |
| `stat_boxplot()` | `geom_boxplot()` | x |
| `stat_count()` | `geom_bar()` | x |
| `stat_contour_filled()` | `geom_contour_filled()` | x |
| `stat_contour()` | `geom_contour()` | x |
| `stat_density_2d_filled()` | `geom_density_2d()` | x |
| `stat_density_2d()` | `geom_density_2d()` | x |
| `stat_density()` | `geom_area()` | |
| `stat_ecdf()` | `geom_step()` | |
| `stat_ellipse()` | `geom_path()` | |
| `stat_function()` | `geom_function()` | x |
| `stat_function()` | `geom_path()` | |
| `stat_identity()` | `geom_point()` | |
| `stat_qq_line()` | `geom_path()` | |
| `stat_qq()` | `geom_point()` | |
| `stat_quantile()` | `geom_quantile()` | x |
| `stat_sf_coordinates()` | `geom_point()` | |
| `stat_sf()` | `geom_rect()` | |
| `stat_smooth()` | `geom_smooth()` | x |
| `stat_sum()` | `geom_point()` | |
| `stat_summary_2d()` | `geom_tile()` | |
| `stat_summary_bin()` | `geom_pointrange()` | |
| `stat_summary_hex()` | `geom_hex()` | |
| `stat_summary()` | `geom_pointrange()` | |
| `stat_unique()` | `geom_point()` | |
| `stat_ydensity()` | `geom_violin()` | x |
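Because paired geoms and stats default to each other, either member of a pair can be used to build the same layer. A sketch with the `geom_bar()` / `stat_count()` pair:
```
library(ggplot2)

# These two layers produce the same bar chart of counts
ggplot(data = diamonds) +
  geom_bar(mapping = aes(x = cut))
ggplot(data = diamonds) +
  stat_count(mapping = aes(x = cut))
```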
### Exercise 3\.7\.4
What variables does `stat_smooth()` compute?
What parameters control its behavior?
The function `stat_smooth()` calculates the following variables:
* `y`: predicted value
* `ymin`: lower value of the confidence interval
* `ymax`: upper value of the confidence interval
* `se`: standard error
The “Computed Variables” section of the `stat_smooth()` documentation contains these variables.
The parameters that control the behavior of `stat_smooth()` include:
* `method`: This is the method used to compute the smoothing line.
If `NULL`, a default method is chosen based on the sample size: `stats::loess()` when there are fewer than 1,000 observations in a group, and `mgcv::gam()` with `formula = y ~ s(x, bs = "cs")` otherwise.
Alternatively, the user can provide a character vector with a function name, e.g. `"lm"`, `"loess"`, or a function,
e.g. `MASS::rlm`.
* `formula`: When providing a custom `method` argument, the formula to use. The default is `y ~ x`. For example, to use the line implied by `lm(y ~ x + I(x ^ 2) + I(x ^ 3))`, use `method = "lm"` or `method = lm` and `formula = y ~ x + I(x ^ 2) + I(x ^ 3)`.
* `method.args`: A list of additional arguments (other than the formula, which is specified separately in the `formula` argument) to pass to the function given in `method`.
* `se`: If `TRUE`, display standard error bands, if `FALSE` only display the line.
* `na.rm`: If `FALSE`, missing values are removed with a warning; if `TRUE`, they are silently removed.
The default is `FALSE` in order to make debugging easier.
If missing values are known to be in the data, the warning can be ignored, but if missing values are not anticipated, this warning can help catch errors.
**TODO:** Plots with examples illustrating the uses of these arguments.
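In the meantime, a minimal sketch of how `method`, `formula`, and `se` change the fitted line (the cubic polynomial here is only an example, not a recommended model):
```
library(ggplot2)

# A cubic fit via lm, drawn without the confidence band
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  geom_smooth(
    method = "lm",
    formula = y ~ x + I(x^2) + I(x^3),
    se = FALSE
  )
```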
### Exercise 3\.7\.5
In our proportion bar chart, we need to set `group = 1` Why?
In other words, what is the problem with these two graphs?
If `group = 1` is not included, then all the bars in the plot will have the same height, a height of 1\.
The function `geom_bar()` assumes that the groups are equal to the `x` values since the stat computes the counts within the group.
```
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut, y = ..prop..))
```
The problem with these two plots is that the proportions are calculated within the groups.
```
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut, y = ..prop..))
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut, fill = color, y = ..prop..))
```
The following code will produce the intended bar chart for the case with no `fill` aesthetic.
```
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut, y = ..prop.., group = 1))
```
With the `fill` aesthetic, the heights of the bars need to be normalized.
```
ggplot(data = diamonds) +
geom_bar(aes(x = cut, y = ..count.. / sum(..count..), fill = color))
```
3\.8 Position adjustments
-------------------------
### Exercise 3\.8\.1
What is the problem with this plot?
How could you improve it?
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_point()
```
There is overplotting because there are multiple observations for each combination of `cty` and `hwy` values.
I would improve the plot by using a jitter position adjustment to decrease overplotting.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_point(position = "jitter")
```
The relationship between `cty` and `hwy` is clear even without jittering the points
but jittering shows the locations where there are more observations.
### Exercise 3\.8\.2
What parameters to `geom_jitter()` control the amount of jittering?
From the [`geom_jitter()`](https://ggplot2.tidyverse.org/reference/geom_jitter.html) documentation, there are two arguments to jitter:
* `width` controls the amount of horizontal displacement, and
* `height` controls the amount of vertical displacement.
The defaults values of `width` and `height` will introduce noise in both directions.
Here is what the plot looks like with the default values of `height` and `width`.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_point(position = position_jitter())
```
However, we can change these parameters.
Here are few a examples to understand how these parameters affect the amount of jittering.
When `width = 0`, there is no horizontal jitter.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter(width = 0)
```
When `width = 20`, there is too much horizontal jitter.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter(width = 20)
```
When `height = 0`, there is no vertical jitter.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter(height = 0)
```
When `height = 15`, there is too much vertical jitter.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter(height = 15)
```
When `width = 0` and `height = 0`, there is neither horizontal nor vertical jitter,
and the plot produced is identical to the one produced with `geom_point()`.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter(height = 0, width = 0)
```
Note that the `height` and `width` arguments are in the units of the data.
Thus `height = 1` (`width = 1`) corresponds to different relative amounts of jittering depending on the scale of the `y` (`x`) variable.
The default values of `height` and `width` are defined to be 80% of the `resolution()` of the data, which is the smallest non\-zero distance between adjacent values of a variable.
When `x` and `y` are discrete variables,
their resolutions are both equal to 1, and `height = 0.4` and `width = 0.4` since the jitter moves points in both positive and negative directions.
The default values of `height` and `width` in `geom_jitter()` are non\-zero, so unless both `height` and `width` are explicitly set to 0, there will be some jitter.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter()
```
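Since `cty` and `hwy` are stored as integers in `mpg`, their resolutions are 1, so the defaults amount to jittering by up to about 0.4 in each direction. A quick sketch to confirm the resolution and to spell the defaults out explicitly:
```
resolution(mpg$cty)
#> [1] 1
resolution(mpg$hwy)
#> [1] 1
# roughly what geom_jitter() does with its default settings
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
  geom_jitter(width = 0.4, height = 0.4)
```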
### Exercise 3\.8\.3
Compare and contrast `geom_jitter()` with `geom_count()`.
The geom `geom_jitter()` adds random variation to the locations of the points on the graph.
In other words, it “jitters” the locations of points slightly.
This method reduces overplotting since two points with the same location are unlikely to have the same random variation.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_jitter()
```
However, the reduction in overlapping comes at the cost of slightly changing the `x` and `y` values of the points.
The geom `geom_count()` sizes the points relative to the number of observations.
Combinations of (`x`, `y`) values with more observations will be larger than those with fewer observations.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_count()
```
The `geom_count()` geom does not change `x` and `y` coordinates of the points.
However, if the points are close together and counts are large, the size of some
points can itself create overplotting.
For example, in the following plots a third variable is mapped to color; in this case, `geom_count()` is less readable than `geom_jitter()`.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy, color = class)) +
geom_jitter()
```
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy, color = class)) +
geom_count()
```
Combining `geom_count()` with jitter, which is specified with the `position` argument to `geom_count()` rather than a separate geom, helps reduce overplotting a little.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy, color = class)) +
geom_count(position = "jitter")
```
But as this example shows, unfortunately, there is no universal solution to overplotting.
The costs and benefits of different approaches will depend on the structure of the data and the goal of the data scientist.
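Another approach worth mentioning, as a sketch, is to keep the original positions but make the points semi-transparent with the `alpha` argument, so that regions with many overlapping observations simply appear darker.
```
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
  # alpha = 1/5 means roughly five overlapping points are needed for full opacity
  geom_point(alpha = 1 / 5)
```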
### Exercise 3\.8\.4
What’s the default position adjustment for `geom_boxplot()`?
Create a visualization of the `mpg` dataset that demonstrates it.
The default position for `geom_boxplot()` is `"dodge2"`, which is a shortcut for `position_dodge2`.
This position adjustment does not change the vertical position of a geom but moves the geom horizontally to avoid overlapping other geoms.
See the documentation for [`position_dodge2()`](https://ggplot2.tidyverse.org/reference/position_dodge.html) for additional discussion on how it works.
When we add `colour = class` to the box plot, the boxplots for the different levels of `class` are placed side by side within each value of `drv`, i.e., dodged.
```
ggplot(data = mpg, aes(x = drv, y = hwy, colour = class)) +
geom_boxplot()
```
If `position_identity()` is used, the boxplots overlap.
```
ggplot(data = mpg, aes(x = drv, y = hwy, colour = class)) +
geom_boxplot(position = "identity")
```
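The dodging can also be requested explicitly with `position_dodge2()`, which exposes arguments such as `padding` (the spacing between the dodged boxes) and `preserve`. A sketch:
```
ggplot(data = mpg, aes(x = drv, y = hwy, colour = class)) +
  # preserve = "single" keeps every box the same width regardless of group counts
  geom_boxplot(position = position_dodge2(padding = 0.3, preserve = "single"))
```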
3\.9 Coordinate systems
-----------------------
### Exercise 3\.9\.1
Turn a stacked bar chart into a pie chart using `coord_polar()`.
A pie chart is a stacked bar chart with the addition of polar coordinates.
Take this stacked bar chart with a single category.
```
ggplot(mpg, aes(x = factor(1), fill = drv)) +
geom_bar()
```
Now add `coord_polar(theta="y")` to create a pie chart.
```
ggplot(mpg, aes(x = factor(1), fill = drv)) +
geom_bar(width = 1) +
coord_polar(theta = "y")
```
The argument `theta = "y"` maps `y` to the angle of each section.
If `coord_polar()` is specified without `theta = "y"`, then the resulting plot is called a bulls\-eye chart.
```
ggplot(mpg, aes(x = factor(1), fill = drv)) +
geom_bar(width = 1) +
coord_polar()
```
### Exercise 3\.9\.2
What does `labs()` do?
Read the documentation.
The `labs()` function adds axis titles, plot titles, subtitles, and captions to the plot.
```
ggplot(data = mpg, mapping = aes(x = class, y = hwy)) +
geom_boxplot() +
coord_flip() +
labs(y = "Highway MPG",
x = "Class",
title = "Highway MPG by car class",
subtitle = "1999-2008",
caption = "Source: http://fueleconomy.gov")
```
The arguments to `labs()` are optional, so you can add as many or as few of these as are needed.
```
ggplot(data = mpg, mapping = aes(x = class, y = hwy)) +
geom_boxplot() +
coord_flip() +
labs(y = "Highway MPG",
x = "Class",
title = "Highway MPG by car class")
```
The `labs()` function is not the only function that adds titles to plots.
The `xlab()`, `ylab()`, and x\- and y\-scale functions can add axis titles.
The `ggtitle()` function adds plot titles.
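For example, the first plot above could be labelled piecewise with these functions instead of `labs()`; this is a sketch that should give equivalent axis titles, title, and subtitle (the caption would still require `labs()`).
```
ggplot(data = mpg, mapping = aes(x = class, y = hwy)) +
  geom_boxplot() +
  coord_flip() +
  xlab("Class") +
  ylab("Highway MPG") +
  ggtitle("Highway MPG by car class", subtitle = "1999-2008")
```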
### Exercise 3\.9\.3
What’s the difference between `coord_quickmap()` and `coord_map()`?
The `coord_map()` function uses map projections to project the three\-dimensional Earth onto a two\-dimensional plane.
By default, `coord_map()` uses the [Mercator projection](https://en.wikipedia.org/wiki/Mercator_projection).
This projection is applied to all the geoms in the plot.
The `coord_quickmap()` function uses an approximate but faster map projection.
This approximation ignores the curvature of Earth and adjusts the map for the latitude/longitude ratio.
The `coord_quickmap()` approach is faster than `coord_map()` both because its projection is computationally easier and because, unlike `coord_map()`, the coordinates of the individual geoms do not need to be transformed.
See the [coord\_map()](https://ggplot2.tidyverse.org/reference/coord_map.html) documentation for more information on these functions and some examples.
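To see the difference in practice, both coordinate systems can be applied to the same map data. This sketch assumes the **maps** package is installed, since `map_data()` relies on it.
```
nz <- map_data("nz")
# Approximation: only the aspect ratio is adjusted, so it is fast
ggplot(nz, aes(long, lat, group = group)) +
  geom_polygon(fill = "white", colour = "black") +
  coord_quickmap()
# Mercator projection: every coordinate is transformed, so it is slower
ggplot(nz, aes(long, lat, group = group)) +
  geom_polygon(fill = "white", colour = "black") +
  coord_map()
```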
### Exercise 3\.9\.4
What does the plot below tell you about the relationship between city and highway mpg?
Why is `coord_fixed()` important?
What does `geom_abline()` do?
The function `coord_fixed()` ensures that the line produced by `geom_abline()` is at a 45\-degree angle.
A 45\-degree line makes it easy to compare the highway and city mileage to the case in which city and highway MPG were equal.
```
p <- ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
geom_point() +
geom_abline()
p + coord_fixed()
```
If we didn’t include `coord_fixed()`, then the line would no longer have an angle of 45 degrees.
```
p
```
On average, humans are best able to perceive differences in angles relative to 45 degrees.
See Cleveland ([1993](#ref-Cleveland1993)[b](#ref-Cleveland1993)), Cleveland ([1994](#ref-Cleveland1994)), Cleveland ([1993](#ref-Cleveland1993a)[a](#ref-Cleveland1993a)), Cleveland, McGill, and McGill ([1988](#ref-ClevelandMcGillMcGill1988)), and Heer and Agrawala ([2006](#ref-HeerAgrawala2006)) for discussion on how the aspect ratio of a plot affects perception of the values it encodes, evidence that 45\-degrees is generally the optimal aspect ratio, and methods to calculate the optimal aspect ratio of a plot.
The function `ggthemes::bank_slopes()` will calculate the optimal aspect ratio to bank slopes to 45\-degrees.
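Note also that `coord_fixed()` has a `ratio` argument (expressed as y/x and defaulting to 1), so the aspect ratio can be set to something other than 1:1 if that aids perception; a brief sketch using the plot `p` defined above:
```
# one unit on the y-axis is drawn twice as long as one unit on the x-axis
p + coord_fixed(ratio = 2)
```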
3\.10 The layered grammar of graphics
-------------------------------------
No exercises
4 Workflow: basics
==================
```
library("tidyverse")
```
Exercise 4\.1
-------------
Why does this code not work?
```
my_variable <- 10
my_varıable
#> Error in eval(expr, envir, enclos): object 'my_varıable' not found
```
The variable being printed is `my_varıable`, not `my_variable`:
the seventh character is “ı” (“[LATIN SMALL LETTER DOTLESS I](https://en.wikipedia.org/wiki/Dotted_and_dotless_I)”), not “i”.
While it wouldn’t have helped much in this case, the importance of
distinguishing characters in code is one reason why fonts that clearly
distinguish similar characters are preferred in programming.
It is especially important to distinguish between two sets of similar looking characters:
* the numeral zero (0\), the Latin small letter O (o), and the Latin capital letter O (O),
* the numeral one (1\), the Latin small letter I (i), the Latin capital letter I (I), and Latin small letter L (l).
In such programming fonts, zero and the Latin letter O are often distinguished by giving zero a glyph with either a dot in the interior or a slash through it.
Some examples of fonts with dotted or slashed zero glyphs are Consolas, Deja Vu Sans Mono, Monaco, Menlo, [Source Sans Pro](https://adobe-fonts.github.io/source-sans-pro/), and FiraCode.
Error messages of the form `"object '...' not found"` mean exactly what they say.
R cannot find an object with that name.
Unfortunately, the error does not tell you why that object cannot be found, because R does not know the reason that the object does not exist.
The most common scenarios in which I encounter this error message are
1. I forgot to create the object, or an error prevented the object from being created.
2. I made a typo in the object’s name, either when using it or when I created it (as in the example above), or I forgot what I had originally named it.
If you find yourself often writing the wrong name for an object,
it is a good indication that the original name was not a good one.
3. I forgot to load the package that contains the object using `library()`.
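For the first two scenarios, a quick way to check is base R's `exists()`, which reports whether an object with a given name can be found; a minimal sketch using the objects from this exercise:
```
exists("my_variable")
#> [1] TRUE
exists("my_varıable")
#> [1] FALSE
```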
Exercise 4\.2
-------------
Tweak each of the following R commands so that they run correctly:
```
ggplot(dota = mpg) +
geom_point(mapping = aes(x = displ, y = hwy))
fliter(mpg, cyl = 8)
filter(diamond, carat > 3)
```
```
ggplot(dota = mpg) +
geom_point(mapping = aes(x = displ, y = hwy))
#> Error in FUN(X[[i]], ...): object 'displ' not found
```
This error is the result of a typo: `dota` instead of `data`.
Because the `data` argument is misspelled, ggplot2 has no data in which to look up the variables in the aesthetic mapping, which produces the `object 'displ' not found` error shown above.
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy))
```
```
fliter(mpg, cyl = 8)
#> Error in fliter(mpg, cyl = 8): could not find function "fliter"
```
R could not find the function `fliter()` because we made a typo: `fliter` instead of `filter`.
```
filter(mpg, cyl = 8)
#> Error: Problem with `filter()` input `..1`.
#> ✖ Input `..1` is named.
#> ℹ This usually means that you've used `=` instead of `==`.
#> ℹ Did you mean `cyl == 8`?
```
We aren’t done yet. But the error message gives a suggestion. Let’s follow it.
```
filter(mpg, cyl == 8)
#> # A tibble: 70 x 11
#> manufacturer model displ year cyl trans drv cty hwy fl class
#> <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
#> 1 audi a6 quattro 4.2 2008 8 auto(… 4 16 23 p mids…
#> 2 chevrolet c1500 sub… 5.3 2008 8 auto(… r 14 20 r suv
#> 3 chevrolet c1500 sub… 5.3 2008 8 auto(… r 11 15 e suv
#> 4 chevrolet c1500 sub… 5.3 2008 8 auto(… r 14 20 r suv
#> 5 chevrolet c1500 sub… 5.7 1999 8 auto(… r 13 17 r suv
#> 6 chevrolet c1500 sub… 6 2008 8 auto(… r 12 17 r suv
#> # … with 64 more rows
```
```
filter(diamond, carat > 3)
#> Error in filter(diamond, carat > 3): object 'diamond' not found
```
R says it can’t find the object `diamond`.
This is a typo; the data frame is named `diamonds`.
```
filter(diamonds, carat > 3)
#> # A tibble: 32 x 10
#> carat cut color clarity depth table price x y z
#> <dbl> <ord> <ord> <ord> <dbl> <dbl> <int> <dbl> <dbl> <dbl>
#> 1 3.01 Premium I I1 62.7 58 8040 9.1 8.97 5.67
#> 2 3.11 Fair J I1 65.9 57 9823 9.15 9.02 5.98
#> 3 3.01 Premium F I1 62.2 56 9925 9.24 9.13 5.73
#> 4 3.05 Premium E I1 60.9 58 10453 9.26 9.25 5.66
#> 5 3.02 Fair I I1 65.2 56 10577 9.11 9.02 5.91
#> 6 3.01 Fair H I1 56.1 62 10761 9.54 9.38 5.31
#> # … with 26 more rows
```
How did I know? I started typing in `diamond` and RStudio completed it to `diamonds`.
Since `diamonds` includes the variable `carat` and the code works, that appears to have been the problem.
Exercise 4\.3
-------------
Press *Alt \+ Shift \+ K*. What happens? How can you get to the same place using the menus?
This gives a menu with keyboard shortcuts. This can be found in the menu under `Tools -> Keyboard Shortcuts Help`.
5 Data transformation
=====================
5\.1 Introduction
-----------------
```
library("nycflights13")
library("tidyverse")
```
5\.2 Filter rows with `filter()`
--------------------------------
### Exercise 5\.2\.1
Find all flights that
1. Had an arrival delay of two or more hours
2. Flew to Houston (IAH or HOU)
3. Were operated by United, American, or Delta
4. Departed in summer (July, August, and September)
5. Arrived more than two hours late, but didn’t leave late
6. Were delayed by at least an hour, but made up over 30 minutes in flight
7. Departed between midnight and 6 am (inclusive)
The answer to each part follows.
1. Since the `arr_delay` variable is measured in minutes, find
flights with an arrival delay of 120 or more minutes.
```
filter(flights, arr_delay >= 120)
#> # A tibble: 10,200 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 811 630 101 1047 830
#> 2 2013 1 1 848 1835 853 1001 1950
#> 3 2013 1 1 957 733 144 1056 853
#> 4 2013 1 1 1114 900 134 1447 1222
#> 5 2013 1 1 1505 1310 115 1638 1431
#> 6 2013 1 1 1525 1340 105 1831 1626
#> # … with 10,194 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
2. The flights that flew to Houston are those flights where the
destination (`dest`) is either “IAH” or “HOU”.
```
filter(flights, dest == "IAH" | dest == "HOU")
#> # A tibble: 9,313 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 623 627 -4 933 932
#> 4 2013 1 1 728 732 -4 1041 1038
#> 5 2013 1 1 739 739 0 1104 1038
#> 6 2013 1 1 908 908 0 1228 1219
#> # … with 9,307 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
However, using `%in%` is more compact and would scale to cases where
there were more than two airports we were interested in.
```
filter(flights, dest %in% c("IAH", "HOU"))
#> # A tibble: 9,313 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 623 627 -4 933 932
#> 4 2013 1 1 728 732 -4 1041 1038
#> 5 2013 1 1 739 739 0 1104 1038
#> 6 2013 1 1 908 908 0 1228 1219
#> # … with 9,307 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
3. In the `flights` dataset, the column `carrier` indicates the airline, but it uses two\-character carrier codes.
We can find the carrier codes for the airlines in the `airlines` dataset.
Since the carrier code dataset only has 16 rows, and the names
of the airlines in that dataset are not exactly “United”, “American”, or “Delta”,
it is easiest to manually look up their carrier codes in that data (an alternative that matches the airline names programmatically is sketched after this list).
```
airlines
#> # A tibble: 16 x 2
#> carrier name
#> <chr> <chr>
#> 1 9E Endeavor Air Inc.
#> 2 AA American Airlines Inc.
#> 3 AS Alaska Airlines Inc.
#> 4 B6 JetBlue Airways
#> 5 DL Delta Air Lines Inc.
#> 6 EV ExpressJet Airlines Inc.
#> # … with 10 more rows
```
The carrier code for Delta is `"DL"`, for American is `"AA"`, and for United is `"UA"`.
Using these carrier codes, we check whether `carrier` is one of those.
```
filter(flights, carrier %in% c("AA", "DL", "UA"))
#> # A tibble: 139,504 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 542 540 2 923 850
#> 4 2013 1 1 554 600 -6 812 837
#> 5 2013 1 1 554 558 -4 740 728
#> 6 2013 1 1 558 600 -2 753 745
#> # … with 139,498 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
4. The variable `month` has the month, and it is numeric.
So, the summer flights are those that departed in months 7 (July), 8 (August), and 9 (September).
```
filter(flights, month >= 7, month <= 9)
#> # A tibble: 86,326 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 7 1 1 2029 212 236 2359
#> 2 2013 7 1 2 2359 3 344 344
#> 3 2013 7 1 29 2245 104 151 1
#> 4 2013 7 1 43 2130 193 322 14
#> 5 2013 7 1 44 2150 174 300 100
#> 6 2013 7 1 46 2051 235 304 2358
#> # … with 86,320 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
The `%in%` operator is an alternative. If the `:` operator is used to specify
the integer range, the expression is readable and compact.
```
filter(flights, month %in% 7:9)
#> # A tibble: 86,326 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 7 1 1 2029 212 236 2359
#> 2 2013 7 1 2 2359 3 344 344
#> 3 2013 7 1 29 2245 104 151 1
#> 4 2013 7 1 43 2130 193 322 14
#> 5 2013 7 1 44 2150 174 300 100
#> 6 2013 7 1 46 2051 235 304 2358
#> # … with 86,320 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
We could also use the `|` operator. However, the `|` does not scale to
many choices.
Even with only three choices, it is quite verbose.
```
filter(flights, month == 7 | month == 8 | month == 9)
#> # A tibble: 86,326 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 7 1 1 2029 212 236 2359
#> 2 2013 7 1 2 2359 3 344 344
#> 3 2013 7 1 29 2245 104 151 1
#> 4 2013 7 1 43 2130 193 322 14
#> 5 2013 7 1 44 2150 174 300 100
#> 6 2013 7 1 46 2051 235 304 2358
#> # … with 86,320 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
We can also use the `between()` function as shown in [Exercise 5\.2\.2](transform.html#exercise-5.2.2).
5. Flights that arrived more than two hours late, but didn’t leave late will
have an arrival delay of more than 120 minutes (`arr_delay > 120`) and
a non\-positive departure delay (`dep_delay <= 0`).
```
filter(flights, arr_delay > 120, dep_delay <= 0)
#> # A tibble: 29 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 27 1419 1420 -1 1754 1550
#> 2 2013 10 7 1350 1350 0 1736 1526
#> 3 2013 10 7 1357 1359 -2 1858 1654
#> 4 2013 10 16 657 700 -3 1258 1056
#> 5 2013 11 1 658 700 -2 1329 1015
#> 6 2013 3 18 1844 1847 -3 39 2219
#> # … with 23 more rows, and 11 more variables: arr_delay <dbl>, carrier <chr>,
#> # flight <int>, tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>,
#> # distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
6. Were delayed by at least an hour, but made up over 30 minutes in flight.
If a flight was delayed by at least an hour, then `dep_delay >= 60`.
If the flight didn’t make up any time in the air, then its arrival would be delayed by the same amount as its departure, meaning `dep_delay == arr_delay`, or alternatively, `dep_delay - arr_delay == 0`.
If it makes up over 30 minutes in the air, then the arrival delay must be at least 30 minutes less than the departure delay, which is stated as `dep_delay - arr_delay > 30`.
```
filter(flights, dep_delay >= 60, dep_delay - arr_delay > 30)
#> # A tibble: 1,844 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 2205 1720 285 46 2040
#> 2 2013 1 1 2326 2130 116 131 18
#> 3 2013 1 3 1503 1221 162 1803 1555
#> 4 2013 1 3 1839 1700 99 2056 1950
#> 5 2013 1 3 1850 1745 65 2148 2120
#> 6 2013 1 3 1941 1759 102 2246 2139
#> # … with 1,838 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
7. Finding flights that departed between midnight and 6 a.m. is complicated by
the way in which times are represented in the data.
In `dep_time`, midnight is represented by `2400`, not `0`.
You can verify this by checking the minimum and maximum of `dep_time`.
```
summary(flights$dep_time)
#> Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
#> 1 907 1401 1349 1744 2400 8255
```
This is an example of why it is always good to check the summary statistics of your data.
Unfortunately, this means we cannot simply check that `dep_time < 600`, because we also have
to consider the special case of midnight.
```
filter(flights, dep_time <= 600 | dep_time == 2400)
#> # A tibble: 9,373 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 542 540 2 923 850
#> 4 2013 1 1 544 545 -1 1004 1022
#> 5 2013 1 1 554 600 -6 812 837
#> 6 2013 1 1 554 558 -4 740 728
#> # … with 9,367 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
Alternatively, we could use the [modulo operator](https://en.wikipedia.org/wiki/Modulo_operation), `%%`.
The modulo operator returns the remainder of division.
Let’s see how this affects our times.
```
c(600, 1200, 2400) %% 2400
#> [1] 600 1200 0
```
Since `2400 %% 2400 == 0` and all other times are left unchanged,
we can compare the result of the modulo operation to `600`,
```
filter(flights, dep_time %% 2400 <= 600)
#> # A tibble: 9,373 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 542 540 2 923 850
#> 4 2013 1 1 544 545 -1 1004 1022
#> 5 2013 1 1 554 600 -6 812 837
#> 6 2013 1 1 554 558 -4 740 728
#> # … with 9,367 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
This filter expression is more compact, but its readability depends on the
familiarity of the reader with modular arithmetic.
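Returning to part 3 of this list, instead of hard-coding the carrier codes we can match the airline names programmatically and keep only the flights whose carrier appears in that subset. This is only a sketch: it assumes that the relevant names in `airlines` contain the words “United”, “American”, and “Delta”, and it uses `semi_join()`, which is covered later in the book.
```
# str_detect() comes from stringr, which library("tidyverse") attaches
big_three <- filter(airlines, str_detect(name, "United|American|Delta"))
# keep flights whose carrier code appears in big_three
semi_join(flights, big_three, by = "carrier")
```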
### Exercise 5\.2\.2
Another useful dplyr filtering helper is `between()`. What does it do? Can you use it to simplify the code needed to answer the previous challenges?
The expression `between(x, left, right)` is equivalent to `x >= left & x <= right`.
Of the answers in the previous question, we could simplify the statement of *departed in summer* (`month >= 7 & month <= 9`) using the `between()` function.
```
filter(flights, between(month, 7, 9))
#> # A tibble: 86,326 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 7 1 1 2029 212 236 2359
#> 2 2013 7 1 2 2359 3 344 344
#> 3 2013 7 1 29 2245 104 151 1
#> 4 2013 7 1 43 2130 193 322 14
#> 5 2013 7 1 44 2150 174 300 100
#> 6 2013 7 1 46 2051 235 304 2358
#> # … with 86,320 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
### Exercise 5\.2\.3
How many flights have a missing `dep_time`? What other variables are missing? What might these rows represent?
Find the rows of flights with a missing departure time (`dep_time`) using the `is.na()` function.
```
filter(flights, is.na(dep_time))
#> # A tibble: 8,255 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 NA 1630 NA NA 1815
#> 2 2013 1 1 NA 1935 NA NA 2240
#> 3 2013 1 1 NA 1500 NA NA 1825
#> 4 2013 1 1 NA 600 NA NA 901
#> 5 2013 1 2 NA 1540 NA NA 1747
#> 6 2013 1 2 NA 1620 NA NA 1746
#> # … with 8,249 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
Notably, the arrival time (`arr_time`) is also missing for these rows. These seem to be cancelled flights.
The output of the function `summary()` includes the number of missing values for all non\-character variables.
```
summary(flights)
#> year month day dep_time sched_dep_time
#> Min. :2013 Min. : 1.00 Min. : 1.0 Min. : 1 Min. : 106
#> 1st Qu.:2013 1st Qu.: 4.00 1st Qu.: 8.0 1st Qu.: 907 1st Qu.: 906
#> Median :2013 Median : 7.00 Median :16.0 Median :1401 Median :1359
#> Mean :2013 Mean : 6.55 Mean :15.7 Mean :1349 Mean :1344
#> 3rd Qu.:2013 3rd Qu.:10.00 3rd Qu.:23.0 3rd Qu.:1744 3rd Qu.:1729
#> Max. :2013 Max. :12.00 Max. :31.0 Max. :2400 Max. :2359
#> NA's :8255
#> dep_delay arr_time sched_arr_time arr_delay carrier
#> Min. : -43 Min. : 1 Min. : 1 Min. : -86 Length:336776
#> 1st Qu.: -5 1st Qu.:1104 1st Qu.:1124 1st Qu.: -17 Class :character
#> Median : -2 Median :1535 Median :1556 Median : -5 Mode :character
#> Mean : 13 Mean :1502 Mean :1536 Mean : 7
#> 3rd Qu.: 11 3rd Qu.:1940 3rd Qu.:1945 3rd Qu.: 14
#> Max. :1301 Max. :2400 Max. :2359 Max. :1272
#> NA's :8255 NA's :8713 NA's :9430
#> flight tailnum origin dest
#> Min. : 1 Length:336776 Length:336776 Length:336776
#> 1st Qu.: 553 Class :character Class :character Class :character
#> Median :1496 Mode :character Mode :character Mode :character
#> Mean :1972
#> 3rd Qu.:3465
#> Max. :8500
#>
#> air_time distance hour minute
#> Min. : 20 Min. : 17 Min. : 1.0 Min. : 0.0
#> 1st Qu.: 82 1st Qu.: 502 1st Qu.: 9.0 1st Qu.: 8.0
#> Median :129 Median : 872 Median :13.0 Median :29.0
#> Mean :151 Mean :1040 Mean :13.2 Mean :26.2
#> 3rd Qu.:192 3rd Qu.:1389 3rd Qu.:17.0 3rd Qu.:44.0
#> Max. :695 Max. :4983 Max. :23.0 Max. :59.0
#> NA's :9430
#> time_hour
#> Min. :2013-01-01 05:00:00
#> 1st Qu.:2013-04-04 13:00:00
#> Median :2013-07-03 10:00:00
#> Mean :2013-07-03 05:22:54
#> 3rd Qu.:2013-10-01 07:00:00
#> Max. :2013-12-31 23:00:00
#>
```
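To answer the “how many” part of the question directly, the missing departure times can simply be counted; a minimal sketch:
```
sum(is.na(flights$dep_time))
#> [1] 8255
```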
### Exercise 5\.2\.4
Why is `NA ^ 0` not missing? Why is `NA | TRUE` not missing?
Why is `FALSE & NA` not missing? Can you figure out the general rule?
(`NA * 0` is a tricky counterexample!)
```
NA ^ 0
#> [1] 1
```
`NA ^ 0 == 1` since for all numeric values \\(x ^ 0 \= 1\\).
```
NA | TRUE
#> [1] TRUE
```
`NA | TRUE` is `TRUE` because anything **or** `TRUE` is `TRUE`.
If the missing value were `TRUE`, then `TRUE | TRUE == TRUE`,
and if the missing value was `FALSE`, then `FALSE | TRUE == TRUE`.
```
NA & FALSE
#> [1] FALSE
```
The value of `NA & FALSE` is `FALSE` because anything **and** `FALSE` is always `FALSE`.
If the missing value were `TRUE`, then `TRUE & FALSE == FALSE`,
and if the missing value was `FALSE`, then `FALSE & FALSE == FALSE`.
```
NA | FALSE
#> [1] NA
```
For `NA | FALSE`, the value is unknown since `TRUE | FALSE == TRUE`, but `FALSE | FALSE == FALSE`.
```
NA & TRUE
#> [1] NA
```
For `NA & TRUE`, the value is unknown since `FALSE & TRUE == FALSE`, but `TRUE & TRUE == TRUE`.
```
NA * 0
#> [1] NA
```
Since \\(x \* 0 \= 0\\) for all finite numbers we might expect `NA * 0 == 0`, but that’s not the case.
The reason that `NA * 0 != 0` is that \\(0 \\times \\infty\\) and \\(0 \\times \-\\infty\\) are undefined.
R represents undefined results as `NaN`, which is an abbreviation of “[not a number](https://en.wikipedia.org/wiki/NaN)”.
```
Inf * 0
#> [1] NaN
-Inf * 0
#> [1] NaN
```
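The relationship between `NA` and `NaN` can be checked directly: `is.na()` treats `NaN` as missing, while `is.nan()` does not treat `NA` as `NaN`. A small sketch:
```
is.na(NaN)
#> [1] TRUE
is.nan(NA)
#> [1] FALSE
```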
5\.3 Arrange rows with `arrange()`
----------------------------------
### Exercise 5\.3\.1
How could you use `arrange()` to sort all missing values to the start? (Hint: use `is.na()`).
The `arrange()` function puts `NA` values last.
```
arrange(flights, dep_time) %>%
tail()
#> # A tibble: 6 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 9 30 NA 1842 NA NA 2019
#> 2 2013 9 30 NA 1455 NA NA 1634
#> 3 2013 9 30 NA 2200 NA NA 2312
#> 4 2013 9 30 NA 1210 NA NA 1330
#> 5 2013 9 30 NA 1159 NA NA 1344
#> 6 2013 9 30 NA 840 NA NA 1020
#> # … with 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
#> # tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
#> # hour <dbl>, minute <dbl>, time_hour <dttm>
```
Using `desc()` does not change that.
```
arrange(flights, desc(dep_time))
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 10 30 2400 2359 1 327 337
#> 2 2013 11 27 2400 2359 1 515 445
#> 3 2013 12 5 2400 2359 1 427 440
#> 4 2013 12 9 2400 2359 1 432 440
#> 5 2013 12 9 2400 2250 70 59 2356
#> 6 2013 12 13 2400 2359 1 432 440
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
To put `NA` values first, we can add an indicator of whether the column has a missing value.
Then we sort by the missing indicator column and the column of interest.
For example, to sort the data frame by departure time (`dep_time`) in ascending order but `NA` values first, run the following.
```
arrange(flights, desc(is.na(dep_time)), dep_time)
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 NA 1630 NA NA 1815
#> 2 2013 1 1 NA 1935 NA NA 2240
#> 3 2013 1 1 NA 1500 NA NA 1825
#> 4 2013 1 1 NA 600 NA NA 901
#> 5 2013 1 2 NA 1540 NA NA 1747
#> 6 2013 1 2 NA 1620 NA NA 1746
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
The `flights` will first be sorted by `desc(is.na(dep_time))`.
Since `is.na(dep_time)` is `TRUE` when `dep_time` is missing and `FALSE` when it is not, sorting by `desc(is.na(dep_time))` puts the rows with missing `dep_time` first, because `TRUE > FALSE`.
### Exercise 5\.3\.2
Sort flights to find the most delayed flights. Find the flights that left earliest.
Find the most delayed flights by sorting the table by departure delay, `dep_delay`, in descending order.
```
arrange(flights, desc(dep_delay))
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 9 641 900 1301 1242 1530
#> 2 2013 6 15 1432 1935 1137 1607 2120
#> 3 2013 1 10 1121 1635 1126 1239 1810
#> 4 2013 9 20 1139 1845 1014 1457 2210
#> 5 2013 7 22 845 1600 1005 1044 1815
#> 6 2013 4 10 1100 1900 960 1342 2211
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
The most delayed flight was HA 51, JFK to HNL, which was scheduled to leave on January 09, 2013 09:00\.
Note that the departure time is given as 641, which seems to be less than the scheduled departure time.
But the departure was delayed 1,301 minutes, which is 21 hours, 41 minutes.
The departure time is the day after the scheduled departure time.
Be happy that you weren’t on that flight, and if you happened to have been on that flight and are reading this, I’m sorry for you.
Similarly, the earliest departing flight can be found by sorting `dep_delay` in ascending order.
```
arrange(flights, dep_delay)
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 12 7 2040 2123 -43 40 2352
#> 2 2013 2 3 2022 2055 -33 2240 2338
#> 3 2013 11 10 1408 1440 -32 1549 1559
#> 4 2013 1 11 1900 1930 -30 2233 2243
#> 5 2013 1 29 1703 1730 -27 1947 1957
#> 6 2013 8 9 729 755 -26 1002 955
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
Flight B6 97 (JFK to DEN) scheduled to depart on December 07, 2013 at 21:23
departed 43 minutes early.
### Exercise 5\.3\.3
Sort flights to find the fastest flights.
There are actually two ways to interpret this question: one that can be solved by using `arrange()`, and a more complex interpretation that requires creation of a new variable using `mutate()`, which we haven’t seen demonstrated before.
The colloquial interpretation of “fastest” flight can be understood to mean “the flight with the shortest flight time”. We can use arrange to sort our data by the `air_time` variable to find the shortest flights:
```
head(arrange(flights, air_time))
#> # A tibble: 6 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 16 1355 1315 40 1442 1411
#> 2 2013 4 13 537 527 10 622 628
#> 3 2013 12 6 922 851 31 1021 954
#> 4 2013 2 3 2153 2129 24 2247 2224
#> 5 2013 2 5 1303 1315 -12 1342 1411
#> 6 2013 2 12 2123 2130 -7 2211 2225
#> # … with 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
#> # tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
#> # hour <dbl>, minute <dbl>, time_hour <dttm>
```
Another definition of the “fastest flight” is the flight with the highest average [ground speed](https://en.wikipedia.org/wiki/Ground_speed).
The ground speed is not included in the data, but it can be calculated from the `distance` and `air_time` of the flight.
```
head(arrange(flights, desc(distance / air_time)))
#> # A tibble: 6 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 5 25 1709 1700 9 1923 1937
#> 2 2013 7 2 1558 1513 45 1745 1719
#> 3 2013 5 13 2040 2025 15 2225 2226
#> 4 2013 3 23 1914 1910 4 2045 2043
#> 5 2013 1 12 1559 1600 -1 1849 1917
#> 6 2013 11 17 650 655 -5 1059 1150
#> # … with 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
#> # tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
#> # hour <dbl>, minute <dbl>, time_hour <dttm>
```
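The same ground-speed ranking reads more clearly if the speed is created as an explicit variable with `mutate()`, which is introduced later in this chapter. A sketch, using miles per hour since `distance` is in miles and `air_time` is in minutes:
```
flights %>%
  # convert minutes to hours to get miles per hour
  mutate(mph = distance / air_time * 60) %>%
  arrange(desc(mph)) %>%
  select(mph, distance, air_time, carrier, flight)
```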
### Exercise 5\.3\.4
Which flights traveled the longest?
Which traveled the shortest?
To find the longest flight, sort the flights by the `distance` column in descending order.
```
arrange(flights, desc(distance))
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 857 900 -3 1516 1530
#> 2 2013 1 2 909 900 9 1525 1530
#> 3 2013 1 3 914 900 14 1504 1530
#> 4 2013 1 4 900 900 0 1516 1530
#> 5 2013 1 5 858 900 -2 1519 1530
#> 6 2013 1 6 1019 900 79 1558 1530
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
The longest flight is HA 51, JFK to HNL, which is 4,983 miles.
To find the shortest flight, sort the flights by the `distance` in ascending order, which is the default sort order.
```
arrange(flights, distance)
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 7 27 NA 106 NA NA 245
#> 2 2013 1 3 2127 2129 -2 2222 2224
#> 3 2013 1 4 1240 1200 40 1333 1306
#> 4 2013 1 4 1829 1615 134 1937 1721
#> 5 2013 1 4 2128 2129 -1 2218 2224
#> 6 2013 1 5 1155 1200 -5 1241 1306
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
The shortest flight is US 1632, EWR to LGA, which is only 17 miles.
This is a flight between two of the New York area airports.
However, this flight is missing a departure time, so it either did not actually fly or there is a problem with the data.
The terms “longest” and “shortest” could also refer to the time of the flight instead of the distance.
Now the longest and shortest flights can be found by sorting by the `air_time` column.
The longest flights by airtime are the following.
```
arrange(flights, desc(air_time))
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 3 17 1337 1335 2 1937 1836
#> 2 2013 2 6 853 900 -7 1542 1540
#> 3 2013 3 15 1001 1000 1 1551 1530
#> 4 2013 3 17 1006 1000 6 1607 1530
#> 5 2013 3 16 1001 1000 1 1544 1530
#> 6 2013 2 5 900 900 0 1555 1540
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
The shortest flights by airtime are the following.
```
arrange(flights, air_time)
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 16 1355 1315 40 1442 1411
#> 2 2013 4 13 537 527 10 622 628
#> 3 2013 12 6 922 851 31 1021 954
#> 4 2013 2 3 2153 2129 24 2247 2224
#> 5 2013 2 5 1303 1315 -12 1342 1411
#> 6 2013 2 12 2123 2130 -7 2211 2225
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
5\.4 Select columns with `select()`
-----------------------------------
### Exercise 5\.4\.1
Brainstorm as many ways as possible to select `dep_time`, `dep_delay`, `arr_time`, and `arr_delay` from flights.
These are a few ways to select columns.
* Specify columns names as unquoted variable names.
```
select(flights, dep_time, dep_delay, arr_time, arr_delay)
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
* Specify column names as strings.
```
select(flights, "dep_time", "dep_delay", "arr_time", "arr_delay")
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
* Specify the column numbers of the variables.
```
select(flights, 4, 6, 7, 9)
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
This works, but is not good practice for two reasons.
First, the column location of variables may change, resulting in code that
may continue to run without error, but produce the wrong answer.
Second, the code is obfuscated, since it is not clear from the code which
variables are being selected. What variable does column 6 correspond to?
I just wrote the code, and I’ve already forgotten.
* Specify the names of the variables with a character vector and `any_of()` or `all_of()`
```
select(flights, all_of(c("dep_time", "dep_delay", "arr_time", "arr_delay")))
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
```
select(flights, any_of(c("dep_time", "dep_delay", "arr_time", "arr_delay")))
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
This is useful because the names of the variables can be stored in a
variable and passed to `all_of()` or `any_of()`.
```
variables <- c("dep_time", "dep_delay", "arr_time", "arr_delay")
select(flights, all_of(variables))
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
These two functions replace the deprecated function `one_of()`.
* Selecting the variables by matching the start of their names using `starts_with()`.
```
select(flights, starts_with("dep_"), starts_with("arr_"))
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
* Selecting the variables using regular expressions with `matches()`.
Regular expressions provide a flexible way to match string patterns
and are discussed in the [Strings](https://r4ds.had.co.nz/strings.html) chapter.
```
select(flights, matches("^(dep|arr)_(time|delay)$"))
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
* Specify the names of the variables with a character vector and use the bang\-bang operator (`!!`).
```
variables <- c("dep_time", "dep_delay", "arr_time", "arr_delay")
select(flights, !!variables)
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
This and the following answers use the features of **tidy evaluation** not covered in R4DS but covered in the [Programming with dplyr](https://dplyr.tidyverse.org/articles/programming.html) vignette.
* Specify the names of the variables in a character vector or list and use the bang\-bang\-bang operator (`!!!`).
```
variables <- c("dep_time", "dep_delay", "arr_time", "arr_delay")
select(flights, !!!variables)
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
* Specify the unquoted names of the variables in a list using `syms()` and use the bang\-bang\-bang operator.
```
variables <- syms(c("dep_time", "dep_delay", "arr_time", "arr_delay"))
select(flights, !!!variables)
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
Some things that **don’t** work are:
* Matching the ends of their names using `ends_with()` since this will incorrectly
include other variables. For example,
```
select(flights, ends_with("arr_time"), ends_with("dep_time"))
#> # A tibble: 336,776 x 4
#> arr_time sched_arr_time dep_time sched_dep_time
#> <int> <int> <int> <int>
#> 1 830 819 517 515
#> 2 850 830 533 529
#> 3 923 850 542 540
#> 4 1004 1022 544 545
#> 5 812 837 554 600
#> 6 740 728 554 558
#> # … with 336,770 more rows
```
* Matching the names using `contains()` since there is not a pattern that can
include all these variables without incorrectly including others.
```
select(flights, contains("_time"), contains("arr_"))
#> # A tibble: 336,776 x 6
#> dep_time sched_dep_time arr_time sched_arr_time air_time arr_delay
#> <int> <int> <int> <int> <dbl> <dbl>
#> 1 517 515 830 819 227 11
#> 2 533 529 850 830 227 20
#> 3 542 540 923 850 160 33
#> 4 544 545 1004 1022 183 -18
#> 5 554 600 812 837 116 -25
#> 6 554 558 740 728 150 12
#> # … with 336,770 more rows
```
### Exercise 5\.4\.2
What happens if you include the name of a variable multiple times in a `select()` call?
The `select()` call ignores the duplication. Any duplicated variables are only included once, in the first location they appear. The `select()` function does not raise an error or warning or print any message if there are duplicated variables.
```
select(flights, year, month, day, year, year)
#> # A tibble: 336,776 x 3
#> year month day
#> <int> <int> <int>
#> 1 2013 1 1
#> 2 2013 1 1
#> 3 2013 1 1
#> 4 2013 1 1
#> 5 2013 1 1
#> 6 2013 1 1
#> # … with 336,770 more rows
```
This behavior is useful because it means that we can use `select()` with `everything()`
in order to easily change the order of columns without having to specify the names
of all the columns.
```
select(flights, arr_delay, everything())
#> # A tibble: 336,776 x 19
#> arr_delay year month day dep_time sched_dep_time dep_delay arr_time
#> <dbl> <int> <int> <int> <int> <int> <dbl> <int>
#> 1 11 2013 1 1 517 515 2 830
#> 2 20 2013 1 1 533 529 4 850
#> 3 33 2013 1 1 542 540 2 923
#> 4 -18 2013 1 1 544 545 -1 1004
#> 5 -25 2013 1 1 554 600 -6 812
#> 6 12 2013 1 1 554 558 -4 740
#> # … with 336,770 more rows, and 11 more variables: sched_arr_time <int>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
### Exercise 5\.4\.3
What does the `one_of()` function do? Why might it be helpful in conjunction with this vector?
The `one_of()` function selects variables with a character vector rather than unquoted variable name arguments.
This function is useful because it is easier to programmatically generate character vectors with variable names than to generate unquoted variable names, which are easier to type.
```
vars <- c("year", "month", "day", "dep_delay", "arr_delay")
select(flights, one_of(vars))
#> # A tibble: 336,776 x 5
#> year month day dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 2013 1 1 2 11
#> 2 2013 1 1 4 20
#> 3 2013 1 1 2 33
#> 4 2013 1 1 -1 -18
#> 5 2013 1 1 -6 -25
#> 6 2013 1 1 -4 12
#> # … with 336,770 more rows
```
In the most recent versions of **dplyr**, `one_of` has been deprecated in favor of two functions: `all_of()` and `any_of()`.
These functions behave similarly if all variables are present in the data frame.
```
select(flights, any_of(vars))
#> # A tibble: 336,776 x 5
#> year month day dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 2013 1 1 2 11
#> 2 2013 1 1 4 20
#> 3 2013 1 1 2 33
#> 4 2013 1 1 -1 -18
#> 5 2013 1 1 -6 -25
#> 6 2013 1 1 -4 12
#> # … with 336,770 more rows
```
```
select(flights, all_of(vars))
#> # A tibble: 336,776 x 5
#> year month day dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 2013 1 1 2 11
#> 2 2013 1 1 4 20
#> 3 2013 1 1 2 33
#> 4 2013 1 1 -1 -18
#> 5 2013 1 1 -6 -25
#> 6 2013 1 1 -4 12
#> # … with 336,770 more rows
```
These functions differ in their strictness.
The function `all_of()` will raise an error if one of the variable names is not present, while `any_of()` will ignore it.
```
vars2 <- c("year", "month", "day", "variable_not_in_the_dataframe")
select(flights, all_of(vars2))
#> Error: Can't subset columns that don't exist.
#> ✖ Column `variable_not_in_the_dataframe` doesn't exist.
```
```
select(flights, any_of(vars2))
#> # A tibble: 336,776 x 3
#> year month day
#> <int> <int> <int>
#> 1 2013 1 1
#> 2 2013 1 1
#> 3 2013 1 1
#> 4 2013 1 1
#> 5 2013 1 1
#> 6 2013 1 1
#> # … with 336,770 more rows
```
The deprecated function `one_of()` will raise a warning if an unknown column is encountered.
```
select(flights, one_of(vars2))
#> Warning: Unknown columns: `variable_not_in_the_dataframe`
#> # A tibble: 336,776 x 3
#> year month day
#> <int> <int> <int>
#> 1 2013 1 1
#> 2 2013 1 1
#> 3 2013 1 1
#> 4 2013 1 1
#> 5 2013 1 1
#> 6 2013 1 1
#> # … with 336,770 more rows
```
In the most recent versions of **dplyr**, the `one_of()` function is less necessary due to new behavior in the selection functions.
The `select()` function can now accept the name of a vector containing the variable names you wish to select:
```
select(flights, vars)
#> Note: Using an external vector in selections is ambiguous.
#> ℹ Use `all_of(vars)` instead of `vars` to silence this message.
#> ℹ See <https://tidyselect.r-lib.org/reference/faq-external-vector.html>.
#> This message is displayed once per session.
#> # A tibble: 336,776 x 5
#> year month day dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 2013 1 1 2 11
#> 2 2013 1 1 4 20
#> 3 2013 1 1 2 33
#> 4 2013 1 1 -1 -18
#> 5 2013 1 1 -6 -25
#> 6 2013 1 1 -4 12
#> # … with 336,770 more rows
```
However, there is a problem with the previous code.
The name `vars` could refer to a column named `vars` in `flights` or to a separate variable named `vars` defined in the calling environment.
What the code does will depend on whether or not `vars` is a column in `flights`.
If `vars` were a column in `flights`, then that code would only select the `vars` column.
For example:
```
flights <- mutate(flights, vars = 1)
select(flights, vars)
#> # A tibble: 336,776 x 1
#> vars
#> <dbl>
#> 1 1
#> 2 1
#> 3 1
#> 4 1
#> 5 1
#> 6 1
#> # … with 336,770 more rows
```
However, if `vars` is not a column in `flights`, as in the original data, then `select()` will use the value of the `vars` vector defined earlier and select those columns.
To avoid this ambiguity, or to ensure that an external vector will not conflict with the names of the columns in the data frame, use the `!!!` (bang\-bang\-bang) operator.
```
select(flights, !!!vars)
#> # A tibble: 336,776 x 5
#> year month day dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 2013 1 1 2 11
#> 2 2013 1 1 4 20
#> 3 2013 1 1 2 33
#> 4 2013 1 1 -1 -18
#> 5 2013 1 1 -6 -25
#> 6 2013 1 1 -4 12
#> # … with 336,770 more rows
```
This behavior, which is used by many **tidyverse** functions, is an example of what is called non\-standard evaluation (NSE) in R. See the **dplyr** vignette, [Programming with dplyr](https://dplyr.tidyverse.org/articles/programming.html), for more information on this topic.
### Exercise 5\.4\.4
Does the result of running the following code surprise you? How do the select helpers deal with case by default? How can you change that default?
```
select(flights, contains("TIME"))
#> # A tibble: 336,776 x 6
#> dep_time sched_dep_time arr_time sched_arr_time air_time time_hour
#> <int> <int> <int> <int> <dbl> <dttm>
#> 1 517 515 830 819 227 2013-01-01 05:00:00
#> 2 533 529 850 830 227 2013-01-01 05:00:00
#> 3 542 540 923 850 160 2013-01-01 05:00:00
#> 4 544 545 1004 1022 183 2013-01-01 05:00:00
#> 5 554 600 812 837 116 2013-01-01 06:00:00
#> 6 554 558 740 728 150 2013-01-01 05:00:00
#> # … with 336,770 more rows
```
The default behavior for `contains()` is to ignore case.
This may or may not surprise you.
If this behavior does not surprise you, that could be why it is the default.
Users searching for variable names probably have a better sense of the letters
in the variable than their capitalization.
A second, technical reason is that dplyr works with more than just R data frames.
It can also work with a variety of [databases](https://db.rstudio.com/dplyr/).
Some of these database engines have case insensitive column names, so making functions that match variable names
case insensitive by default will make the behavior of
`select()` consistent regardless of whether the table is
stored as an R data frame or in a database.
To change this behavior, add the argument `ignore.case = FALSE`.
```
select(flights, contains("TIME", ignore.case = FALSE))
#> # A tibble: 336,776 x 0
```
5\.5 Add new variables with `mutate()`
--------------------------------------
### Exercise 5\.5\.1
Currently `dep_time` and `sched_dep_time` are convenient to look at, but hard to compute with because they’re not really continuous numbers. Convert them to a more convenient representation of number of minutes since midnight.
To get the departure times in minutes since midnight, divide `dep_time` by 100 to get the hours since midnight, multiply that by 60, and add the remainder of `dep_time` divided by 100\.
For example, `1504` represents 15:04 (or 3:04 PM), which is 904 minutes after midnight.
To generalize this approach, we need a way to split out the hour\-digits from the minute\-digits.
Dividing by 100 and discarding the remainder using the integer division operator, `%/%` gives us the following.
```
1504 %/% 100
#> [1] 15
```
Instead of `%/%`, we could also use `/` along with `trunc()` or `floor()`, but `round()` would not work.
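For example, with a time such as `1550` (15:50), truncating the quotient gives the correct hour while rounding does not:
```
trunc(1550 / 100)
#> [1] 15
round(1550 / 100)
#> [1] 16
```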
To get the minutes, instead of discarding the remainder of the division by `100`,
we only want the remainder.
So we use the modulo operator, `%%`, discussed in the [Other Useful Functions](https://r4ds.had.co.nz/transform.html#select) section.
```
1504 %% 100
#> [1] 4
```
Now, we can combine the hours (multiplied by 60 to convert them to minutes) and
minutes to get the number of minutes after midnight.
```
1504 %/% 100 * 60 + 1504 %% 100
#> [1] 904
```
There is one remaining issue. Midnight is represented by `2400`, which would
correspond to `1440` minutes since midnight, but it should correspond to `0`.
After converting all the times to minutes after midnight, `x %% 1440` will convert
`1440` to zero while keeping all the other times the same.
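For example, `2400` converts to 1,440 minutes, which the final `%% 1440` then maps to zero:
```
2400 %/% 100 * 60 + 2400 %% 100
#> [1] 1440
(2400 %/% 100 * 60 + 2400 %% 100) %% 1440
#> [1] 0
```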
Now we will put it all together.
The following code creates a new data frame `flights_times` with columns `dep_time_mins` and `sched_dep_time_mins`.
These columns convert `dep_time` and `sched_dep_time`, respectively, to minutes since midnight.
```
flights_times <- mutate(flights,
dep_time_mins = (dep_time %/% 100 * 60 + dep_time %% 100) %% 1440,
sched_dep_time_mins = (sched_dep_time %/% 100 * 60 +
sched_dep_time %% 100) %% 1440
)
# view only relevant columns
select(
flights_times, dep_time, dep_time_mins, sched_dep_time,
sched_dep_time_mins
)
#> # A tibble: 336,776 x 4
#> dep_time dep_time_mins sched_dep_time sched_dep_time_mins
#> <int> <dbl> <int> <dbl>
#> 1 517 317 515 315
#> 2 533 333 529 329
#> 3 542 342 540 340
#> 4 544 344 545 345
#> 5 554 354 600 360
#> 6 554 354 558 358
#> # … with 336,770 more rows
```
Looking ahead to the [Functions](https://r4ds.had.co.nz/functions.html) chapter,
this is precisely the sort of situation in which it would make sense to write
a function to avoid copying and pasting code.
We could define a function `time2mins()`, which converts a vector of times in
from the format used in `flights` to minutes since midnight.
```
time2mins <- function(x) {
(x %/% 100 * 60 + x %% 100) %% 1440
}
```
Using `time2mins`, the previous code simplifies to the following.
```
flights_times <- mutate(flights,
dep_time_mins = time2mins(dep_time),
sched_dep_time_mins = time2mins(sched_dep_time)
)
# show only the relevant columns
select(
flights_times, dep_time, dep_time_mins, sched_dep_time,
sched_dep_time_mins
)
#> # A tibble: 336,776 x 4
#> dep_time dep_time_mins sched_dep_time sched_dep_time_mins
#> <int> <dbl> <int> <dbl>
#> 1 517 317 515 315
#> 2 533 333 529 329
#> 3 542 342 540 340
#> 4 544 344 545 345
#> 5 554 354 600 360
#> 6 554 354 558 358
#> # … with 336,770 more rows
```
### Exercise 5\.5\.2
Compare `air_time` with `arr_time - dep_time`.
What do you expect to see?
What do you see?
What do you need to do to fix it?
I expect that `air_time` is the difference between the arrival (`arr_time`) and departure times (`dep_time`).
In other words, `air_time = arr_time - dep_time`.
To check this relationship, I’ll first need to convert the times to a form more amenable to arithmetic operations using the same calculations as the [previous exercise](transform.html#exercise-5.5.1).
```
flights_airtime <-
mutate(flights,
dep_time = (dep_time %/% 100 * 60 + dep_time %% 100) %% 1440,
arr_time = (arr_time %/% 100 * 60 + arr_time %% 100) %% 1440,
air_time_diff = air_time - arr_time + dep_time
)
```
So, does `air_time = arr_time - dep_time`?
If so, there should be no flights with non\-zero values of `air_time_diff`.
```
nrow(filter(flights_airtime, air_time_diff != 0))
#> [1] 327150
```
It turns out that there are many flights for which `air_time != arr_time - dep_time`.
Other than data errors, I can think of two reasons why `air_time` would not equal `arr_time - dep_time`.
1. The flight passes midnight, so `arr_time < dep_time`.
In these cases, the difference in air time will be off by 24 hours (1,440 minutes).
2. The flight crosses time zones, and the total air time will be off by whole hours (multiples of 60 minutes).
All flights in `flights` departed from New York City and are domestic flights in the US.
This means that flights will all be to the same or more westerly time zones.
Given the time\-zones in the US, the differences due to time\-zone should be 60 minutes (Central),
120 minutes (Mountain), 180 minutes (Pacific), 240 minutes (Alaska), or 300 minutes (Hawaii).
Both of these explanations have clear patterns that I would expect to see if they
were true.
In particular, in both cases, since time\-zones and crossing midnight only affect the hour part of the time, all values of `air_time_diff` should be divisible by 60\.
I’ll visually check this hypothesis by plotting the distribution of `air_time_diff`.
If those two explanations are correct, distribution of `air_time_diff` should comprise only spikes at multiples of 60\.
```
ggplot(flights_airtime, aes(x = air_time_diff)) +
geom_histogram(binwidth = 1)
#> Warning: Removed 9430 rows containing non-finite values (stat_bin).
```
This is not the case.
While the distribution of `air_time_diff` has modes at multiples of 60, as hypothesized,
it also shows that there are many flights for which the difference between air time and the local arrival and departure times is not divisible by 60\.
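The same point can be checked numerically with a quick count (a sketch; output omitted):
```
# Sketch: count differences that are not whole multiples of 60 minutes.
flights_airtime %>%
  filter(!is.na(air_time_diff)) %>%
  summarise(not_multiple_of_60 = sum(air_time_diff %% 60 != 0))
```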
Let’s also look at flights with Los Angeles as a destination.
The discrepancy should be 180 minutes.
```
ggplot(filter(flights_airtime, dest == "LAX"), aes(x = air_time_diff)) +
geom_histogram(binwidth = 1)
#> Warning: Removed 148 rows containing non-finite values (stat_bin).
```
To fix these time\-zone issues, I would want to convert all the times to a date\-time to handle overnight flights, and from local time to a common time zone, most likely [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time), to handle flights crossing time\-zones.
The `tzone` column of `nycflights13::airports` gives the time\-zone of each airport.
See the [“Dates and Times”](https://r4ds.had.co.nz/dates-and-times.html) chapter for an introduction to working with date and time data.
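As a rough sketch of that approach (not the author's solution; the column names `dest_tz` and `arr_dt_local` are made up for illustration), one could join the airport time zones and build proper date\-times before shifting everything to a common zone:
```
library(dplyr)
library(lubridate)
library(nycflights13)

# Sketch: attach each destination's time zone and build a local arrival
# date-time from the HHMM integer. Shifting each arrival to UTC using the
# per-airport `dest_tz` value, and adding a day for overnight flights,
# would be the remaining steps (omitted here).
flights_tz <- flights %>%
  left_join(select(airports, faa, dest_tz = tzone), by = c("dest" = "faa")) %>%
  mutate(
    arr_dt_local = make_datetime(
      year, month, day,
      arr_time %/% 100, arr_time %% 100
    )
  )
```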
But that still leaves the other differences unexplained.
So what else might be going on?
There seem to be too many discrepancies for these to be data entry problems, so I’m probably missing something.
So, I’ll reread the documentation to make sure that I understand the definitions of `arr_time`, `dep_time`, and
`air_time`.
The documentation contains a link to the source of the `flights` data, [https://www.transtats.bts.gov/DL\_SelectFields.asp?Table\_ID\=236](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236).
This documentation shows that the `flights` data does not contain the variables `TaxiIn`, `TaxiOut`, `WheelsOn`, and `WheelsOff`.
It appears that the `air_time` variable refers to flight time, which is defined as the time between wheels\-off (take\-off) and wheels\-on (landing).
But the flight time does not include time spent on the runway taxiing to and from gates.
With this new understanding of the data, I now know that the relationship between `air_time`, `arr_time`, and `dep_time` is `air_time <= arr_time - dep_time`, supposing that the time zones of `arr_time` and `dep_time` are in the same time zone.
### Exercise 5\.5\.3
Compare `dep_time`, `sched_dep_time`, and `dep_delay`. How would you expect those three numbers to be related?
I would expect the departure delay (`dep_delay`) to be equal to the difference between scheduled departure time (`sched_dep_time`), and actual departure time (`dep_time`),
`dep_time - sched_dep_time = dep_delay`.
As with the previous question, the first step is to convert all times to the
number of minutes since midnight.
The column, `dep_delay_diff`, is the difference between the column, `dep_delay`, and
departure delay calculated directly from the scheduled and actual departure times.
```
flights_deptime <-
mutate(flights,
dep_time_min = (dep_time %/% 100 * 60 + dep_time %% 100) %% 1440,
sched_dep_time_min = (sched_dep_time %/% 100 * 60 +
sched_dep_time %% 100) %% 1440,
dep_delay_diff = dep_delay - dep_time_min + sched_dep_time_min
)
```
Does `dep_delay_diff` equal zero for all rows?
```
filter(flights_deptime, dep_delay_diff != 0)
#> # A tibble: 1,236 x 22
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 848 1835 853 1001 1950
#> 2 2013 1 2 42 2359 43 518 442
#> 3 2013 1 2 126 2250 156 233 2359
#> 4 2013 1 3 32 2359 33 504 442
#> 5 2013 1 3 50 2145 185 203 2311
#> 6 2013 1 3 235 2359 156 700 437
#> # … with 1,230 more rows, and 14 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>,
#> # dep_time_min <dbl>, sched_dep_time_min <dbl>, dep_delay_diff <dbl>
```
No. Unlike the last question, time zones are not an issue since we are only
considering departure times.[3](#fn3)
However, the discrepancies could be because a flight was scheduled to depart
before midnight, but was delayed after midnight.
All of these discrepancies are exactly equal to 1440 (24 hours), and the flights with these discrepancies were scheduled to depart later in the day.
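Before plotting, a quick tabulation (a sketch; output omitted) can confirm that the nonzero discrepancies all equal 1440:
```
# Sketch: tabulate the distinct nonzero values of dep_delay_diff.
flights_deptime %>%
  filter(dep_delay_diff != 0) %>%
  count(dep_delay_diff)
```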
```
ggplot(
filter(flights_deptime, dep_delay_diff > 0),
aes(y = sched_dep_time_min, x = dep_delay_diff)
) +
geom_point()
```
Thus, the only cases in which the departure delay is not equal to the difference
between the scheduled and actual departure times are due to a quirk in how these
columns were stored.
### Exercise 5\.5\.4
Find the 10 most delayed flights using a ranking function.
How do you want to handle ties?
Carefully read the documentation for `min_rank()`.
The **dplyr** package provides multiple functions for ranking, which differ in how they handle tied values: `row_number()`, `min_rank()`, `dense_rank()`.
To see how they work, let’s create a data frame with duplicate values in a vector and see how ranking functions handle ties.
```
rankme <- tibble(
x = c(10, 5, 1, 5, 5)
)
```
```
rankme <- mutate(rankme,
x_row_number = row_number(x),
x_min_rank = min_rank(x),
x_dense_rank = dense_rank(x)
)
arrange(rankme, x)
#> # A tibble: 5 x 4
#> x x_row_number x_min_rank x_dense_rank
#> <dbl> <int> <int> <int>
#> 1 1 1 1 1
#> 2 5 2 2 2
#> 3 5 3 2 2
#> 4 5 4 2 2
#> 5 10 5 5 3
```
The function `row_number()` assigns each element a unique value.
The result is equivalent to the index (or row) number of each element after sorting the vector, hence its name.
The `min_rank()` and `dense_rank()` functions assign tied values the same rank, but differ in how they assign values to the next rank.
For each set of tied values the `min_rank()` function assigns a rank equal to the number of values less than that tied value plus one.
In contrast, the `dense_rank()` function assigns a rank equal to the number of distinct values less than that tied value plus one.
To see the difference between `dense_rank()` and `min_rank()` compare the value of `rankme$x_min_rank` and `rankme$x_dense_rank` for `x = 10`.
If I had to choose one for presenting rankings to someone else, I would use `min_rank()` since its results correspond to the most common usage of rankings in sports or other competitions.
In the code below, I use all three functions, but since there are no ties in the top 10 flights, the results don’t differ.
```
flights_delayed <- mutate(flights,
dep_delay_min_rank = min_rank(desc(dep_delay)),
dep_delay_row_number = row_number(desc(dep_delay)),
dep_delay_dense_rank = dense_rank(desc(dep_delay))
)
flights_delayed <- filter(flights_delayed,
!(dep_delay_min_rank > 10 | dep_delay_row_number > 10 |
dep_delay_dense_rank > 10))
flights_delayed <- arrange(flights_delayed, dep_delay_min_rank)
print(select(flights_delayed, month, day, carrier, flight, dep_delay,
dep_delay_min_rank, dep_delay_row_number, dep_delay_dense_rank),
n = Inf)
#> # A tibble: 10 x 8
#> month day carrier flight dep_delay dep_delay_min_r… dep_delay_row_n…
#> <int> <int> <chr> <int> <dbl> <int> <int>
#> 1 1 9 HA 51 1301 1 1
#> 2 6 15 MQ 3535 1137 2 2
#> 3 1 10 MQ 3695 1126 3 3
#> 4 9 20 AA 177 1014 4 4
#> 5 7 22 MQ 3075 1005 5 5
#> 6 4 10 DL 2391 960 6 6
#> 7 3 17 DL 2119 911 7 7
#> 8 6 27 DL 2007 899 8 8
#> 9 7 22 DL 2047 898 9 9
#> 10 12 5 AA 172 896 10 10
#> # … with 1 more variable: dep_delay_dense_rank <int>
```
In addition to the functions covered here, the `rank()` function provides several more ways of ranking elements.
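As a brief illustration (not part of the original answer), `rank()` exposes several tie\-handling methods through its `ties.method` argument, using the same values as the `rankme` example above:
```
x <- c(10, 5, 1, 5, 5)
rank(x, ties.method = "average")
#> [1] 5 3 1 3 3
rank(x, ties.method = "min")
#> [1] 5 2 1 2 2
rank(x, ties.method = "first")
#> [1] 5 2 1 3 4
```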
There are other ways to solve this problem that do not use ranking functions.
To select the top 10, sort values with `arrange()` and select the top values with `slice`:
```
flights_delayed2 <- arrange(flights, desc(dep_delay))
flights_delayed2 <- slice(flights_delayed2, 1:10)
select(flights_delayed2, month, day, carrier, flight, dep_delay)
#> # A tibble: 10 x 5
#> month day carrier flight dep_delay
#> <int> <int> <chr> <int> <dbl>
#> 1 1 9 HA 51 1301
#> 2 6 15 MQ 3535 1137
#> 3 1 10 MQ 3695 1126
#> 4 9 20 AA 177 1014
#> 5 7 22 MQ 3075 1005
#> 6 4 10 DL 2391 960
#> # … with 4 more rows
```
Alternatively, we could use the `top_n()` function.
```
flights_delayed3 <- top_n(flights, 10, dep_delay)
flights_delayed3 <- arrange(flights_delayed3, desc(dep_delay))
select(flights_delayed3, month, day, carrier, flight, dep_delay)
#> # A tibble: 10 x 5
#> month day carrier flight dep_delay
#> <int> <int> <chr> <int> <dbl>
#> 1 1 9 HA 51 1301
#> 2 6 15 MQ 3535 1137
#> 3 1 10 MQ 3695 1126
#> 4 9 20 AA 177 1014
#> 5 7 22 MQ 3075 1005
#> 6 4 10 DL 2391 960
#> # … with 4 more rows
```
The `slice()` approach will always select exactly 10 rows, while `top_n()` keeps all rows tied at the cutoff and so may return more than 10 rows.
Ranking functions provide the most control over how tied values are handled.
For example, they can return every row whose value is among the 10 largest values of `dep_delay`, as shown below.
If there are no ties, these approaches are equivalent.
If there are ties, then which is more appropriate depends on the use.
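For instance, to keep every flight whose delay is among the 10 largest values, including any rows tied at the cutoff, `min_rank()` can be used directly inside `filter()` (a sketch; output omitted):
```
flights %>%
  filter(min_rank(desc(dep_delay)) <= 10) %>%
  arrange(desc(dep_delay)) %>%
  select(month, day, carrier, flight, dep_delay)
```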
### Exercise 5\.5\.5
What does `1:3 + 1:10` return? Why?
The code given in the question returns the following.
```
1:3 + 1:10
#> Warning in 1:3 + 1:10: longer object length is not a multiple of shorter object
#> length
#> [1] 2 4 6 5 7 9 8 10 12 11
```
This is equivalent to the following.
```
c(1 + 1, 2 + 2, 3 + 3, 1 + 4, 2 + 5, 3 + 6, 1 + 7, 2 + 8, 3 + 9, 1 + 10)
#> [1] 2 4 6 5 7 9 8 10 12 11
```
When adding two vectors, R recycles the shorter vector’s values to create a vector of the same length as the longer vector.
The code also raises a warning that the shorter vector is not a multiple of the longer vector.
A warning is raised since when this occurs, it is often unintended and may be a bug.
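When the length of the longer vector is a multiple of the length of the shorter vector, recycling happens silently, with no warning:
```
1:2 + 1:10
#> [1]  2  4  4  6  6  8  8 10 10 12
```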
### Exercise 5\.5\.6
What trigonometric functions does R provide?
The trigonometric functions are all described in a single help page named `Trig`.
You can open the documentation for these functions with `?Trig` or by using `?` with any of the functions, for example, `?sin`.
R provides functions for the three primary trigonometric functions: sine (`sin()`), cosine (`cos()`), and tangent (`tan()`).
The input angles to all these functions are in [radians](https://en.wikipedia.org/wiki/Radian).
```
x <- seq(-3, 7, by = 1 / 2)
sin(pi * x)
#> [1] -3.67e-16 -1.00e+00 2.45e-16 1.00e+00 -1.22e-16 -1.00e+00 0.00e+00
#> [8] 1.00e+00 1.22e-16 -1.00e+00 -2.45e-16 1.00e+00 3.67e-16 -1.00e+00
#> [15] -4.90e-16 1.00e+00 6.12e-16 -1.00e+00 -7.35e-16 1.00e+00 8.57e-16
cos(pi * x)
#> [1] -1.00e+00 3.06e-16 1.00e+00 -1.84e-16 -1.00e+00 6.12e-17 1.00e+00
#> [8] 6.12e-17 -1.00e+00 -1.84e-16 1.00e+00 3.06e-16 -1.00e+00 -4.29e-16
#> [15] 1.00e+00 5.51e-16 -1.00e+00 -2.45e-15 1.00e+00 -9.80e-16 -1.00e+00
tan(pi * x)
#> [1] 3.67e-16 -3.27e+15 2.45e-16 -5.44e+15 1.22e-16 -1.63e+16 0.00e+00
#> [8] 1.63e+16 -1.22e-16 5.44e+15 -2.45e-16 3.27e+15 -3.67e-16 2.33e+15
#> [15] -4.90e-16 1.81e+15 -6.12e-16 4.08e+14 -7.35e-16 -1.02e+15 -8.57e-16
```
In the previous code, I used the variable `pi`.
R provides the variable `pi`, which is set to the value of the mathematical constant \\(\\pi\\).[4](#fn4)
```
pi
#> [1] 3.14
```
Although R provides the `pi` variable, there is nothing preventing a user from changing its value.
For example, I could redefine `pi` to [3\.14](https://en.wikipedia.org/wiki/Indiana_Pi_Bill) or
any other value.
```
pi <- 3.14
pi
#> [1] 3.14
pi <- "Apple"
pi
#> [1] "Apple"
```
For that reason, if you are using the built\-in `pi` variable in computations and are paranoid, you may want to always reference it as `base::pi`.
```
base::pi
#> [1] 3.14
```
In the previous code block, since the angles were in radians, I wrote them as \\(\\pi\\) times some number.
Since it is often easier to write radians as multiples of \\(\\pi\\), R provides some convenience functions for that.
The function `sinpi(x)` is equivalent to `sin(pi * x)`.
The functions `cospi()` and `tanpi()` are similarly defined for the cosine and tangent functions, respectively.
```
sinpi(x)
#> [1] 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0
cospi(x)
#> [1] -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1
tanpi(x)
#> Warning in tanpi(x): NaNs produced
#> [1] 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0
#> [20] NaN 0
```
R provides the inverse trigonometric functions arc\-cosine (`acos()`), arc\-sine (`asin()`), and arc\-tangent (`atan()`).
```
x <- seq(-1, 1, by = 1 / 4)
acos(x)
#> [1] 3.142 2.419 2.094 1.823 1.571 1.318 1.047 0.723 0.000
asin(x)
#> [1] -1.571 -0.848 -0.524 -0.253 0.000 0.253 0.524 0.848 1.571
atan(x)
#> [1] -0.785 -0.644 -0.464 -0.245 0.000 0.245 0.464 0.644 0.785
```
Finally, R provides the function `atan2()`.
Calling `atan2(y, x)` returns the angle between the x\-axis and the vector from `(0,0)` to `(x, y)`.
```
atan2(c(1, 0, -1, 0), c(0, 1, 0, -1))
#> [1] 1.57 0.00 -1.57 3.14
```
5\.6 Grouped summaries with `summarise()`
-----------------------------------------
### Exercise 5\.6\.1
Brainstorm at least 5 different ways to assess the typical delay characteristics of a group of flights.
Consider the following scenarios:
* A flight is 15 minutes early 50% of the time, and 15 minutes late 50% of the time.
* A flight is always 10 minutes late.
* A flight is 30 minutes early 50% of the time, and 30 minutes late 50% of the time.
* 99% of the time a flight is on time. 1% of the time it’s 2 hours late.
Which is more important: arrival delay or departure delay?
What this question gets at is a fundamental question of data analysis: the cost function.
As analysts, we are interested in flight delay because it is costly to passengers.
But it is worth thinking carefully about how it is costly and using that information in ranking and measuring these scenarios.
In many scenarios, arrival delay is more important.
In most cases, arriving late is more costly to the passenger, since it could disrupt the next stages of their travel, such as connecting flights or scheduled meetings.
If a departure is delayed without affecting the arrival time, this delay will not disrupt those plans, nor does it affect the total time spent traveling.
This delay could be beneficial, if less time is spent in the cramped confines of the airplane itself, or a negative, if that delayed time is still spent in the cramped confines of the airplane on the runway.
Variation in arrival time is worse than consistency.
If a flight is always 30 minutes late and that delay is known, then it is as if the arrival time is that delayed time.
The traveler could easily plan for this.
But higher variation in flight times makes it harder to plan.
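To make these ideas concrete, here is one possible set of summary statistics (a sketch, not the author's answer; the particular metrics and the 15\-minute cutoff are illustrative choices, and the output is omitted):
```
# Sketch: several ways to summarize "typical" arrival delay for each
# flight number (carrier + flight).
flights %>%
  filter(!is.na(arr_delay)) %>%
  group_by(carrier, flight) %>%
  summarise(
    mean_delay = mean(arr_delay),         # average delay
    median_delay = median(arr_delay),     # typical delay, robust to outliers
    sd_delay = sd(arr_delay),             # variability (consistency)
    prop_late_15 = mean(arr_delay > 15),  # share of flights more than 15 min late
    p90_delay = quantile(arr_delay, 0.9), # how bad the worst 10% of flights are
    n = n()
  )
```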
### Exercise 5\.6\.2
Come up with another approach that will give you the same output as `not_cancelled %>% count(dest)` and `not_cancelled %>% count(tailnum, wt = distance)` (without using `count()`).
```
not_cancelled <- flights %>%
filter(!is.na(dep_delay), !is.na(arr_delay))
```
The first expression is the following.
```
not_cancelled %>%
count(dest)
#> # A tibble: 104 x 2
#> dest n
#> <chr> <int>
#> 1 ABQ 254
#> 2 ACK 264
#> 3 ALB 418
#> 4 ANC 8
#> 5 ATL 16837
#> 6 AUS 2411
#> # … with 98 more rows
```
The `count()` function counts the number of instances within each group of variables.
Instead of using the `count()` function, we can combine the `group_by()` and `summarise()` verbs.
```
not_cancelled %>%
group_by(dest) %>%
summarise(n = length(dest))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 104 x 2
#> dest n
#> <chr> <int>
#> 1 ABQ 254
#> 2 ACK 264
#> 3 ALB 418
#> 4 ANC 8
#> 5 ATL 16837
#> 6 AUS 2411
#> # … with 98 more rows
```
An alternative method for getting the number of observations in a data frame is the function `n()`.
```
not_cancelled %>%
group_by(dest) %>%
summarise(n = n())
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 104 x 2
#> dest n
#> <chr> <int>
#> 1 ABQ 254
#> 2 ACK 264
#> 3 ALB 418
#> 4 ANC 8
#> 5 ATL 16837
#> 6 AUS 2411
#> # … with 98 more rows
```
Another alternative to `count()` is to use `group_by()` followed by `tally()`.
In fact, `count()` is effectively a short\-cut for `group_by()` followed by `tally()`.
```
not_cancelled %>%
group_by(tailnum) %>%
tally()
#> # A tibble: 4,037 x 2
#> tailnum n
#> <chr> <int>
#> 1 D942DN 4
#> 2 N0EGMQ 352
#> 3 N10156 145
#> 4 N102UW 48
#> 5 N103US 46
#> 6 N104UW 46
#> # … with 4,031 more rows
```
The second expression also uses the `count()` function, but adds a `wt` argument.
```
not_cancelled %>%
count(tailnum, wt = distance)
#> # A tibble: 4,037 x 2
#> tailnum n
#> <chr> <dbl>
#> 1 D942DN 3418
#> 2 N0EGMQ 239143
#> 3 N10156 109664
#> 4 N102UW 25722
#> 5 N103US 24619
#> 6 N104UW 24616
#> # … with 4,031 more rows
```
As before, we can replicate `count()` by combining the `group_by()` and `summarise()` verbs.
But this time instead of using `length()`, we will use `sum()` with the weighting variable.
```
not_cancelled %>%
group_by(tailnum) %>%
summarise(n = sum(distance))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 4,037 x 2
#> tailnum n
#> <chr> <dbl>
#> 1 D942DN 3418
#> 2 N0EGMQ 239143
#> 3 N10156 109664
#> 4 N102UW 25722
#> 5 N103US 24619
#> 6 N104UW 24616
#> # … with 4,031 more rows
```
Like the previous example, we can also use the combination `group_by()` and `tally()`.
Any arguments to `tally()` are summed.
```
not_cancelled %>%
group_by(tailnum) %>%
tally(distance)
#> # A tibble: 4,037 x 2
#> tailnum n
#> <chr> <dbl>
#> 1 D942DN 3418
#> 2 N0EGMQ 239143
#> 3 N10156 109664
#> 4 N102UW 25722
#> 5 N103US 24619
#> 6 N104UW 24616
#> # … with 4,031 more rows
```
### Exercise 5\.6\.3
Our definition of cancelled flights `(is.na(dep_delay) | is.na(arr_delay))` is slightly suboptimal.
Why?
Which is the most important column?
If a flight never departs, then it won’t arrive.
A flight could also depart and not arrive if it crashes, or if it is redirected and lands in an airport other than its intended destination.
So the most important column is `arr_delay`, which indicates the amount of delay in arrival.
```
filter(flights, !is.na(dep_delay), is.na(arr_delay)) %>%
select(dep_time, arr_time, sched_arr_time, dep_delay, arr_delay)
#> # A tibble: 1,175 x 5
#> dep_time arr_time sched_arr_time dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 1525 1934 1805 -5 NA
#> 2 1528 2002 1647 29 NA
#> 3 1740 2158 2020 -5 NA
#> 4 1807 2251 2103 29 NA
#> 5 1939 29 2151 59 NA
#> 6 1952 2358 2207 22 NA
#> # … with 1,169 more rows
```
In this data, `dep_time` and `arr_time` can be non\-missing while `arr_delay` is missing.
Some further [research](https://hyp.is/TsdRpofJEeqzs6-vUOfVBg/jrnold.github.io/r4ds-exercise-solutions/transform.html) found that these rows correspond to diverted flights.
The [BTS](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236) database that is the source for the `flights` table contains additional information for diverted flights that is not included in the nycflights13 data.
The source contains a column `DivArrDelay` with the description:
> Difference in minutes between scheduled and actual arrival time for a diverted flight reaching scheduled destination.
> The `ArrDelay` column remains `NULL` for all diverted flights.
### Exercise 5\.6\.4
Look at the number of cancelled flights per day.
Is there a pattern?
Is the proportion of cancelled flights related to the average delay?
One pattern in cancelled flights per day is that the number of cancelled flights increases with the total number of flights per day.
The proportion of cancelled flights increases with the average delay of flights.
To answer these questions, use the definition of cancelled flights from [Section 5\.6\.3](https://r4ds.had.co.nz/transform.html#counts) of the chapter,
and note that `!(is.na(arr_delay) & is.na(dep_delay))` is equivalent to
`!is.na(arr_delay) | !is.na(dep_delay)` by [De Morgan’s law](https://en.wikipedia.org/wiki/De_Morgan%27s_laws).
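As a quick check of that equivalence (an illustration, not part of the original answer):
```
# De Morgan's law: the two expressions flag exactly the same flights.
identical(
  !(is.na(flights$arr_delay) & is.na(flights$dep_delay)),
  !is.na(flights$arr_delay) | !is.na(flights$dep_delay)
)
#> [1] TRUE
```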
The first part of the question asks for any pattern in the number of cancelled flights per day.
I’ll look at the relationship between the number of cancelled flights per day and the total number of flights in a day.
There should be an increasing relationship for two reasons.
First, if all flights are equally likely to be cancelled, then days with more flights should have a higher number of cancellations.
Second, it is likely that days with more flights would have a higher probability of cancellations because congestion itself can cause delays and any delay would affect more flights, and large delays can lead to cancellations.
```
cancelled_per_day <-
flights %>%
mutate(cancelled = (is.na(arr_delay) | is.na(dep_delay))) %>%
group_by(year, month, day) %>%
summarise(
cancelled_num = sum(cancelled),
flights_num = n(),
)
#> `summarise()` regrouping output by 'year', 'month' (override with `.groups` argument)
```
Plotting `flights_num` against `cancelled_num` shows that the number of flights
cancelled increases with the total number of flights.
```
ggplot(cancelled_per_day) +
geom_point(aes(x = flights_num, y = cancelled_num))
```
The second part of the question asks whether there is a relationship between the proportion of flights cancelled and the average departure delay.
I implied this in my answer to the first part of the question, when I noted that increasing delays could result in increased cancellations.
The question does not specify which delay, so I will show the relationship for both.
```
cancelled_and_delays <-
flights %>%
mutate(cancelled = (is.na(arr_delay) | is.na(dep_delay))) %>%
group_by(year, month, day) %>%
summarise(
cancelled_prop = mean(cancelled),
avg_dep_delay = mean(dep_delay, na.rm = TRUE),
avg_arr_delay = mean(arr_delay, na.rm = TRUE)
) %>%
ungroup()
#> `summarise()` regrouping output by 'year', 'month' (override with `.groups` argument)
```
There is a strong increasing relationship between the proportion of cancelled flights and both the average departure delay and the average arrival delay.
```
ggplot(cancelled_and_delays) +
geom_point(aes(x = avg_dep_delay, y = cancelled_prop))
```
```
ggplot(cancelled_and_delays) +
geom_point(aes(x = avg_arr_delay, y = cancelled_prop))
```
### Exercise 5\.6\.5
Which carrier has the worst delays?
Challenge: can you disentangle the effects of bad airports vs. bad carriers?
Why/why not?
(Hint: think about `flights %>% group_by(carrier, dest) %>% summarise(n())`)
```
flights %>%
group_by(carrier) %>%
summarise(arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
arrange(desc(arr_delay))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 16 x 2
#> carrier arr_delay
#> <chr> <dbl>
#> 1 F9 21.9
#> 2 FL 20.1
#> 3 EV 15.8
#> 4 YV 15.6
#> 5 OO 11.9
#> 6 MQ 10.8
#> # … with 10 more rows
```
What airline corresponds to the `"F9"` carrier code?
```
filter(airlines, carrier == "F9")
#> # A tibble: 1 x 2
#> carrier name
#> <chr> <chr>
#> 1 F9 Frontier Airlines Inc.
```
You can get part of the way to disentangling the effects of airports versus bad carriers by comparing the average delay of each carrier to the average delay of flights within a route (flights from the same origin to the same destination).
Comparing delays between carriers and within each route disentangles the effect of carriers and airports.
A better analysis would compare the average delay of a carrier’s flights to the average delay of *all other* carrier’s flights within a route.
```
flights %>%
filter(!is.na(arr_delay)) %>%
# Total delay by carrier within each origin, dest
group_by(origin, dest, carrier) %>%
summarise(
arr_delay = sum(arr_delay),
flights = n()
) %>%
# Total delay within each origin dest
group_by(origin, dest) %>%
mutate(
arr_delay_total = sum(arr_delay),
flights_total = sum(flights)
) %>%
# average delay of each carrier - average delay of other carriers
ungroup() %>%
mutate(
arr_delay_others = (arr_delay_total - arr_delay) /
(flights_total - flights),
arr_delay_mean = arr_delay / flights,
arr_delay_diff = arr_delay_mean - arr_delay_others
) %>%
# remove NaN values (when there is only one carrier)
filter(is.finite(arr_delay_diff)) %>%
# average over all airports it flies to
group_by(carrier) %>%
summarise(arr_delay_diff = mean(arr_delay_diff)) %>%
arrange(desc(arr_delay_diff))
#> `summarise()` regrouping output by 'origin', 'dest' (override with `.groups` argument)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 15 x 2
#> carrier arr_delay_diff
#> <chr> <dbl>
#> 1 OO 27.3
#> 2 F9 17.3
#> 3 EV 11.0
#> 4 B6 6.41
#> 5 FL 2.57
#> 6 VX -0.202
#> # … with 9 more rows
```
There are more sophisticated ways to do this analysis; however, comparing the delay of flights within each route goes a long way toward disentangling airport and carrier effects.
To see a more complete example of this analysis, see this FiveThirtyEight [piece](https://fivethirtyeight.com/features/the-best-and-worst-airlines-airports-and-flights-summer-2015-update/).
### Exercise 5\.6\.6
What does the sort argument to `count()` do?
When might you use it?
The sort argument to `count()` sorts the results in order of `n`.
You could use this anytime you would run `count()` followed by `arrange()`.
For example, the following expression counts the number of flights to a destination and sorts the returned data from highest to lowest.
```
flights %>%
count(dest, sort = TRUE)
#> # A tibble: 105 x 2
#> dest n
#> <chr> <int>
#> 1 ORD 17283
#> 2 ATL 17215
#> 3 LAX 16174
#> 4 BOS 15508
#> 5 MCO 14082
#> 6 CLT 14064
#> # … with 99 more rows
```
5\.7 Grouped mutates (and filters)
----------------------------------
### Exercise 5\.7\.1
Refer back to the lists of useful mutate and filtering functions.
Describe how each operation changes when you combine it with grouping.
Summary functions (e.g., `mean()`), offset functions (`lead()`, `lag()`), and ranking functions (`min_rank()`, `row_number()`) operate within each group when used with `group_by()` in
`mutate()` or `filter()`.
Arithmetic operators (`+`, `-`), logical operators (`<`, `==`), modular arithmetic operators (`%%`, `%/%`), and logarithmic functions (`log()`) are not affected by `group_by()`.
Summary functions like `mean()`, `median()`, `sum()`, `sd()`, and others covered
in the section [Useful Summary Functions](https://r4ds.had.co.nz/transform.html#summarise-funs)
calculate their values within each group when used with `mutate()` or `filter()` and `group_by()`.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(x_mean = mean(x)) %>%
group_by(group) %>%
mutate(x_mean_2 = mean(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group x_mean x_mean_2
#> <int> <chr> <dbl> <dbl>
#> 1 1 a 5 2
#> 2 2 a 5 2
#> 3 3 a 5 2
#> 4 4 b 5 5
#> 5 5 b 5 5
#> 6 6 b 5 5
#> # … with 3 more rows
```
Arithmetic operators `+`, `-`, `*`, `/`, `^` are not affected by `group_by()`.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(y = x + 2) %>%
group_by(group) %>%
mutate(z = x + 2)
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group y z
#> <int> <chr> <dbl> <dbl>
#> 1 1 a 3 3
#> 2 2 a 4 4
#> 3 3 a 5 5
#> 4 4 b 6 6
#> 5 5 b 7 7
#> 6 6 b 8 8
#> # … with 3 more rows
```
The modular arithmetic operators `%/%` and `%%` are not affected by `group_by()`
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(y = x %% 2) %>%
group_by(group) %>%
mutate(z = x %% 2)
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group y z
#> <int> <chr> <dbl> <dbl>
#> 1 1 a 1 1
#> 2 2 a 0 0
#> 3 3 a 1 1
#> 4 4 b 0 0
#> 5 5 b 1 1
#> 6 6 b 0 0
#> # … with 3 more rows
```
The logarithmic functions `log()`, `log2()`, and `log10()` are not affected by
`group_by()`.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(y = log(x)) %>%
group_by(group) %>%
mutate(z = log(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group y z
#> <int> <chr> <dbl> <dbl>
#> 1 1 a 0 0
#> 2 2 a 0.693 0.693
#> 3 3 a 1.10 1.10
#> 4 4 b 1.39 1.39
#> 5 5 b 1.61 1.61
#> 6 6 b 1.79 1.79
#> # … with 3 more rows
```
The offset functions `lead()` and `lag()` respect the groupings in `group_by()`.
The functions `lag()` and `lead()` will only return values within each group.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
group_by(group) %>%
mutate(lag_x = lag(x),
lead_x = lead(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group lag_x lead_x
#> <int> <chr> <int> <int>
#> 1 1 a NA 2
#> 2 2 a 1 3
#> 3 3 a 2 NA
#> 4 4 b NA 5
#> 5 5 b 4 6
#> 6 6 b 5 NA
#> # … with 3 more rows
```
The cumulative and rolling aggregate functions `cumsum()`, `cumprod()`, `cummin()`, `cummax()`, and `cummean()` calculate values within each group.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(x_cumsum = cumsum(x)) %>%
group_by(group) %>%
mutate(x_cumsum_2 = cumsum(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group x_cumsum x_cumsum_2
#> <int> <chr> <int> <int>
#> 1 1 a 1 1
#> 2 2 a 3 3
#> 3 3 a 6 6
#> 4 4 b 10 4
#> 5 5 b 15 9
#> 6 6 b 21 15
#> # … with 3 more rows
```
Logical comparisons, `<`, `<=`, `>`, `>=`, `!=`, and `==` are not affected by `group_by()`.
```
tibble(x = 1:9,
y = 9:1,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(x_lte_y = x <= y) %>%
group_by(group) %>%
mutate(x_lte_y_2 = x <= y)
#> # A tibble: 9 x 5
#> # Groups: group [3]
#> x y group x_lte_y x_lte_y_2
#> <int> <int> <chr> <lgl> <lgl>
#> 1 1 9 a TRUE TRUE
#> 2 2 8 a TRUE TRUE
#> 3 3 7 a TRUE TRUE
#> 4 4 6 b TRUE TRUE
#> 5 5 5 b TRUE TRUE
#> 6 6 4 b FALSE FALSE
#> # … with 3 more rows
```
Ranking functions like `min_rank()` work within each group when used with `group_by()`.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(rnk = min_rank(x)) %>%
group_by(group) %>%
mutate(rnk2 = min_rank(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group rnk rnk2
#> <int> <chr> <int> <int>
#> 1 1 a 1 1
#> 2 2 a 2 2
#> 3 3 a 3 3
#> 4 4 b 4 1
#> 5 5 b 5 2
#> 6 6 b 6 3
#> # … with 3 more rows
```
Though not asked in the question, note that `arrange()` ignores groups when sorting values.
```
tibble(x = runif(9),
group = rep(c("a", "b", "c"), each = 3)) %>%
group_by(group) %>%
arrange(x)
#> # A tibble: 9 x 2
#> # Groups: group [3]
#> x group
#> <dbl> <chr>
#> 1 0.00740 b
#> 2 0.0808 a
#> 3 0.157 b
#> 4 0.290 c
#> 5 0.466 b
#> 6 0.498 c
#> # … with 3 more rows
```
However, the order of values from `arrange()` can interact with groups when
used with functions that rely on the ordering of elements, such as `lead()`, `lag()`,
or `cumsum()`.
```
tibble(group = rep(c("a", "b", "c"), each = 3),
x = runif(9)) %>%
group_by(group) %>%
arrange(x) %>%
mutate(lag_x = lag(x))
#> # A tibble: 9 x 3
#> # Groups: group [3]
#> group x lag_x
#> <chr> <dbl> <dbl>
#> 1 b 0.0342 NA
#> 2 c 0.0637 NA
#> 3 a 0.175 NA
#> 4 c 0.196 0.0637
#> 5 b 0.320 0.0342
#> 6 b 0.402 0.320
#> # … with 3 more rows
```
### Exercise 5\.7\.2
Which plane (`tailnum`) has the worst on\-time record?
The question does not define a way to measure on\-time record, so I will consider two metrics:
1. proportion of flights not delayed or cancelled, and
2. mean arrival delay.
The first metric is the proportion of not\-cancelled and on\-time flights.
I use the presence of an arrival time as an indicator that a flight was not cancelled.
However, there are many planes that have never flown an on\-time flight.
Additionally, many of the planes that have the lowest proportion of on\-time flights have only flown a small number of flights.
```
flights %>%
filter(!is.na(tailnum)) %>%
mutate(on_time = !is.na(arr_time) & (arr_delay <= 0)) %>%
group_by(tailnum) %>%
summarise(on_time = mean(on_time), n = n()) %>%
filter(min_rank(on_time) == 1)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 110 x 3
#> tailnum on_time n
#> <chr> <dbl> <int>
#> 1 N121DE 0 2
#> 2 N136DL 0 1
#> 3 N143DA 0 1
#> 4 N17627 0 2
#> 5 N240AT 0 5
#> 6 N26906 0 1
#> # … with 104 more rows
```
So, I will only consider planes that flew at least 20 flights.
The threshold of 20 was chosen because it is a round number near the first quartile of the number of flights by plane.[5](#fn5)[6](#fn6)
```
quantile(count(flights, tailnum)$n)
#> 0% 25% 50% 75% 100%
#> 1 23 54 110 2512
```
The plane with the worst on time record that flew at least 20 flights is:
```
flights %>%
filter(!is.na(tailnum), is.na(arr_time) | !is.na(arr_delay)) %>%
mutate(on_time = !is.na(arr_time) & (arr_delay <= 0)) %>%
group_by(tailnum) %>%
summarise(on_time = mean(on_time), n = n()) %>%
filter(n >= 20) %>%
filter(min_rank(on_time) == 1)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 1 x 3
#> tailnum on_time n
#> <chr> <dbl> <int>
#> 1 N988AT 0.189 37
```
There are cases where `arr_delay` is missing but `arr_time` is not missing.
I have not debugged the cause of this bad data, so these rows are dropped for
the purposes of this exercise.
The second metric is the mean minutes delayed.
As with the previous metric, I will only consider planes which flew at least 20 flights.
A different plane has the worst on\-time record when measured as average minutes delayed.
```
flights %>%
filter(!is.na(arr_delay)) %>%
group_by(tailnum) %>%
summarise(arr_delay = mean(arr_delay), n = n()) %>%
filter(n >= 20) %>%
filter(min_rank(desc(arr_delay)) == 1)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 1 x 3
#> tailnum arr_delay n
#> <chr> <dbl> <int>
#> 1 N203FR 59.1 41
```
### Exercise 5\.7\.3
What time of day should you fly if you want to avoid delays as much as possible?
Let’s group by the hour of the flight.
The earlier the flight is scheduled, the lower its expected delay.
This is intuitive as delays will affect later flights.
Morning flights have fewer (if any) previous flights that can delay them.
```
flights %>%
group_by(hour) %>%
summarise(arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
arrange(arr_delay)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 20 x 2
#> hour arr_delay
#> <dbl> <dbl>
#> 1 7 -5.30
#> 2 5 -4.80
#> 3 6 -3.38
#> 4 9 -1.45
#> 5 8 -1.11
#> 6 10 0.954
#> # … with 14 more rows
```
### Exercise 5\.7\.4
For each destination, compute the total minutes of delay.
For each flight, compute the proportion of the total delay for its destination.
The key to answering this question is to only include delayed flights when calculating the total delay and proportion of delay.
```
flights %>%
filter(arr_delay > 0) %>%
group_by(dest) %>%
mutate(
arr_delay_total = sum(arr_delay),
arr_delay_prop = arr_delay / arr_delay_total
) %>%
select(dest, month, day, dep_time, carrier, flight,
arr_delay, arr_delay_prop) %>%
arrange(dest, desc(arr_delay_prop))
#> # A tibble: 133,004 x 8
#> # Groups: dest [103]
#> dest month day dep_time carrier flight arr_delay arr_delay_prop
#> <chr> <int> <int> <int> <chr> <int> <dbl> <dbl>
#> 1 ABQ 7 22 2145 B6 1505 153 0.0341
#> 2 ABQ 12 14 2223 B6 65 149 0.0332
#> 3 ABQ 10 15 2146 B6 65 138 0.0308
#> 4 ABQ 7 23 2206 B6 1505 137 0.0305
#> 5 ABQ 12 17 2220 B6 65 136 0.0303
#> 6 ABQ 7 10 2025 B6 1505 126 0.0281
#> # … with 132,998 more rows
```
There is some ambiguity in the meaning of the term *flights* in the question.
The first example defined a flight as a row in the `flights` table, which is a trip by an aircraft from an airport at a particular date and time.
However, *flight* could also refer to the [flight number](https://en.wikipedia.org/wiki/Flight_number), which is the code a carrier uses for an airline service of a route.
For example, `AA1` is the flight number of the 09:00 American Airlines flight between JFK and LAX.
The flight number is contained in the `flights$flight` column, though what is called a “flight” is a combination of the `flights$carrier` and `flights$flight` columns.
```
flights %>%
filter(arr_delay > 0) %>%
group_by(dest, origin, carrier, flight) %>%
summarise(arr_delay = sum(arr_delay)) %>%
group_by(dest) %>%
mutate(
arr_delay_prop = arr_delay / sum(arr_delay)
) %>%
arrange(dest, desc(arr_delay_prop)) %>%
select(carrier, flight, origin, dest, arr_delay_prop)
#> `summarise()` regrouping output by 'dest', 'origin', 'carrier' (override with `.groups` argument)
#> # A tibble: 8,834 x 5
#> # Groups: dest [103]
#> carrier flight origin dest arr_delay_prop
#> <chr> <int> <chr> <chr> <dbl>
#> 1 B6 1505 JFK ABQ 0.567
#> 2 B6 65 JFK ABQ 0.433
#> 3 B6 1191 JFK ACK 0.475
#> 4 B6 1491 JFK ACK 0.414
#> 5 B6 1291 JFK ACK 0.0898
#> 6 B6 1195 JFK ACK 0.0208
#> # … with 8,828 more rows
```
### Exercise 5\.7\.5
Delays are typically temporally correlated: even once the problem that caused the initial delay has been resolved, later flights are delayed to allow earlier flights to leave. Using `lag()` explore how the delay of a flight is related to the delay of the immediately preceding flight.
This calculates the departure delay of the preceding flight from the same airport.
```
lagged_delays <- flights %>%
arrange(origin, month, day, dep_time) %>%
group_by(origin) %>%
mutate(dep_delay_lag = lag(dep_delay)) %>%
filter(!is.na(dep_delay), !is.na(dep_delay_lag))
```
This plots the relationship between the mean delay of a flight for all values of the previous flight.
For delays less than two hours, the relationship between the delay of the preceding flight and the current flight is nearly a line.
After that the relationship becomes more variable, as long\-delayed flights are interspersed with flights leaving on\-time.
After about 8\-hours, a delayed flight is likely to be followed by a flight leaving on time.
```
lagged_delays %>%
group_by(dep_delay_lag) %>%
summarise(dep_delay_mean = mean(dep_delay)) %>%
ggplot(aes(y = dep_delay_mean, x = dep_delay_lag)) +
geom_point() +
scale_x_continuous(breaks = seq(0, 1500, by = 120)) +
labs(y = "Departure Delay", x = "Previous Departure Delay")
```
The overall relationship looks similar in all three origin airports.
```
lagged_delays %>%
group_by(origin, dep_delay_lag) %>%
summarise(dep_delay_mean = mean(dep_delay)) %>%
ggplot(aes(y = dep_delay_mean, x = dep_delay_lag)) +
geom_point() +
facet_wrap(~ origin, ncol=1) +
labs(y = "Departure Delay", x = "Previous Departure Delay")
#> `summarise()` regrouping output by 'origin' (override with `.groups` argument)
```
### Exercise 5\.7\.6
Look at each destination. Can you find flights that are suspiciously fast?
(i.e. flights that represent a potential data entry error).
Compute the air time of a flight relative to the shortest flight to that destination.
Which flights were most delayed in the air?
When calculating this answer we should only compare flights within the same (origin, destination) pair.
To find unusual observations, we need to first put them on the same scale.
I will [standardize](https://en.wikipedia.org/wiki/Standard_score)
values by subtracting the mean from each and then dividing each by the standard deviation.
\\\[
\\mathsf{standardized}(x) \= \\frac{x \- \\mathsf{mean}(x)}{\\mathsf{sd}(x)} .
\\]
A standardized variable is often called a \\(z\\)\-score.
The units of the standardized variable are standard deviations from the mean.
This will put the flight times from different routes on the same scale.
The larger the magnitude of the standardized variable for an observation, the more unusual the observation is.
Flights with negative values of the standardized variable are faster than the
mean flight for that route, while those with positive values are slower than
the mean flight for that route.
```
standardized_flights <- flights %>%
filter(!is.na(air_time)) %>%
group_by(dest, origin) %>%
mutate(
air_time_mean = mean(air_time),
air_time_sd = sd(air_time),
n = n()
) %>%
ungroup() %>%
mutate(air_time_standard = (air_time - air_time_mean) / (air_time_sd + 1))
```
I add 1 to the denominator to avoid dividing by zero when a route’s standard deviation is zero.
Note that the `ungroup()` here is not necessary. However, I will be using
this data frame later. Through experience, I have found that I have fewer bugs
when I keep a data frame grouped for only those verbs that need it.
If I did not `ungroup()` this data frame, the `arrange()` used later would
not work as expected. It is better to err on the side of using `ungroup()`
when unnecessary.
The distribution of the standardized air times has a long right tail.
```
ggplot(standardized_flights, aes(x = air_time_standard)) +
geom_density()
#> Warning: Removed 4 rows containing non-finite values (stat_density).
```
Unusually fast flights are those flights with the smallest standardized values.
```
standardized_flights %>%
arrange(air_time_standard) %>%
select(
carrier, flight, origin, dest, month, day,
air_time, air_time_mean, air_time_standard
) %>%
head(10) %>%
print(width = Inf)
#> # A tibble: 10 x 9
#> carrier flight origin dest month day air_time air_time_mean
#> <chr> <int> <chr> <chr> <int> <int> <dbl> <dbl>
#> 1 DL 1499 LGA ATL 5 25 65 114.
#> 2 EV 4667 EWR MSP 7 2 93 151.
#> 3 EV 4292 EWR GSP 5 13 55 93.2
#> 4 EV 3805 EWR BNA 3 23 70 115.
#> 5 EV 4687 EWR CVG 9 29 62 96.1
#> 6 B6 2002 JFK BUF 11 10 38 57.1
#> air_time_standard
#> <dbl>
#> 1 -4.56
#> 2 -4.46
#> 3 -4.20
#> 4 -3.73
#> 5 -3.60
#> 6 -3.38
#> # … with 4 more rows
```
I used `width = Inf` to ensure that all columns will be printed.
The fastest flight is DL1499 from LGA to
ATL which departed on
2013\-05\-25 at 17:09\.
It has an air time of 65 minutes, compared to an average
flight time of 114 minutes for its route.
This is 4\.6 standard deviations below
the average flight on its route.
It is important to note that this does not necessarily imply that there was a data entry error.
We should check these flights to see whether there was some reason for the difference.
It may be that we are missing some piece of information that explains these unusual times.
A potential issue with the way that we standardized the flights is that the mean and standard deviation used in the calculation are themselves sensitive to outliers, and outliers are exactly what we are looking for.
Instead of standardizing with the mean and standard deviation, we could use the median
as a measure of central tendency and the interquartile range (IQR) as a measure of spread.
The median and IQR are more [resistant to outliers](https://en.wikipedia.org/wiki/Robust_statistics) than the mean and standard deviation,
so a standardization based on them is less likely to be distorted by the very flights we are trying to find.
```
standardized_flights2 <- flights %>%
filter(!is.na(air_time)) %>%
group_by(dest, origin) %>%
mutate(
air_time_median = median(air_time),
air_time_iqr = IQR(air_time),
n = n(),
air_time_standard = (air_time - air_time_median) / air_time_iqr)
```
The distribution of the standardized air times using this new definition
also has a long right tail of slow flights.
```
ggplot(standardized_flights2, aes(x = air_time_standard)) +
geom_density()
#> Warning: Removed 4 rows containing non-finite values (stat_density).
```
Unusually fast flights are those flights with the smallest standardized values.
```
standardized_flights2 %>%
arrange(air_time_standard) %>%
select(
carrier, flight, origin, dest, month, day, air_time,
air_time_median, air_time_standard
) %>%
head(10) %>%
print(width = Inf)
#> # A tibble: 10 x 9
#> # Groups: dest, origin [10]
#> carrier flight origin dest month day air_time air_time_median
#> <chr> <int> <chr> <chr> <int> <int> <dbl> <dbl>
#> 1 EV 4667 EWR MSP 7 2 93 149
#> 2 DL 1499 LGA ATL 5 25 65 112
#> 3 US 2132 LGA BOS 3 2 21 37
#> 4 B6 30 JFK ROC 3 25 35 51
#> 5 B6 2002 JFK BUF 11 10 38 57
#> 6 EV 4292 EWR GSP 5 13 55 92
#> air_time_standard
#> <dbl>
#> 1 -3.5
#> 2 -3.36
#> 3 -3.2
#> 4 -3.2
#> 5 -3.17
#> 6 -3.08
#> # … with 4 more rows
```
All of these answers have relied only on using a distribution of comparable observations to find unusual observations.
In this case, the comparable observations were flights from the same origin to the same destination.
Apart from our knowledge that flights from the same origin to the same destination should have similar air times, we have not used any other domain\-specific knowledge.
But we know much more about this problem.
The most obvious is that flights cannot travel back in time, so there should never be a flight with a negative air time.
But we also know that aircraft have maximum speeds.
While different aircraft have different [cruising speeds](https://en.wikipedia.org/wiki/Cruise_(aeronautics)), commercial airliners
typically cruise at air speeds around 547–575 mph.
Calculating the ground speed of aircraft is complicated by wind, especially jet streams, which can change the ground speed of a flight considerably.
A strong tailwind can increase the ground speed of an aircraft by [200 mph](https://www.wired.com/story/norwegian-air-transatlantic-speed-record/).
Apart from the retired [Concorde](https://en.wikipedia.org/wiki/Concorde), no commercial airliner is designed to fly faster than the speed of sound, so unusually high ground speeds are almost always the work of a tailwind.
For example, in 2018, [a transatlantic flight](https://www.wired.com/story/norwegian-air-transatlantic-speed-record/)
traveled at a ground speed of 770 mph due to a strong jet stream tailwind.
This means that any flight traveling at speeds greater than 800 mph is implausible,
and it may be worth checking flights traveling at greater than 600 or 700 mph.
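As a quick check (a sketch of my own, not part of the original solution), we can count how many flights imply a ground speed above these thresholds; ground speed in mph is `distance / (air_time / 60)`, since `distance` is in miles and `air_time` is in minutes.

```
# Count flights whose implied ground speed exceeds a few plausibility thresholds.
# distance is in miles and air_time is in minutes, so mph = distance / (air_time / 60).
flights %>%
  mutate(mph = distance / (air_time / 60)) %>%
  summarise(
    over_500_mph = sum(mph > 500, na.rm = TRUE),
    over_600_mph = sum(mph > 600, na.rm = TRUE),
    over_700_mph = sum(mph > 700, na.rm = TRUE)
  )
```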
Ground speed could also be used to identify aircraft flying implausibly slow.
Joining the flights data with the aircraft type in the `planes` table and getting
information about typical or top speeds of those aircraft could provide a more
detailed way to identify implausibly fast or slow flights.
Additional data on high altitude wind speeds at the time of the flight would further help.
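A minimal sketch of that join follows (my own illustration, not from the original text), assuming the `speed` column of `planes` is a listed cruising speed in mph; it is missing for most tail numbers, so the comparison covers only a subset of flights.

```
# Join each flight to its aircraft record and compare the implied ground speed
# to the plane's listed cruising speed (planes$speed, which is mostly NA).
flights %>%
  mutate(mph = distance / (air_time / 60)) %>%
  left_join(select(planes, tailnum, plane_speed = speed), by = "tailnum") %>%
  filter(!is.na(mph), !is.na(plane_speed)) %>%
  mutate(speed_ratio = mph / plane_speed) %>%
  arrange(desc(speed_ratio)) %>%
  select(carrier, flight, origin, dest, mph, plane_speed, speed_ratio)
```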
Knowing the substance of the data analysis at hand is one of the most important
tools of a data scientist. The tools of statistics are a complement, not a
substitute, for that knowledge.
With that in mind, let's plot the distribution of the ground speed of flights.
The modal flight in this data has a ground speed between 400 and 500 mph.
The distribution of ground speeds has a long left tail of slower flights below 400 mph,
and there are very few flights with a ground speed over 500 mph.
```
flights %>%
mutate(mph = distance / (air_time / 60)) %>%
ggplot(aes(x = mph)) +
geom_histogram(binwidth = 10)
#> Warning: Removed 9430 rows containing non-finite values (stat_bin).
```
The fastest flight is the same one identified as the largest outlier earlier.
Its ground speed was 703 mph.
This is fast for a commercial jet, but not impossible.
```
flights %>%
mutate(mph = distance / (air_time / 60)) %>%
arrange(desc(mph)) %>%
  select(mph, flight, carrier, month, day, dep_time) %>%
head(5)
#> # A tibble: 5 x 6
#> mph flight carrier month day dep_time
#> <dbl> <int> <chr> <int> <int> <int>
#> 1 703. 1499 DL 5 25 1709
#> 2 650. 4667 EV 7 2 1558
#> 3 648 4292 EV 5 13 2040
#> 4 641. 3805 EV 3 23 1914
#> 5 591. 1902 DL 1 12 1559
```
One explanation for unusually fast flights is that they are “making up time” in the air by flying faster.
Commercial aircraft do not fly at their top speed since the airlines are also concerned about fuel consumption.
But, if a flight is delayed on the ground, it may fly faster than usual in order to avoid a late arrival.
So, I would expect that some of the unusually fast flights were delayed on departure.
```
flights %>%
mutate(mph = distance / (air_time / 60)) %>%
arrange(desc(mph)) %>%
select(
origin, dest, mph, year, month, day, dep_time, flight, carrier,
dep_delay, arr_delay
)
#> # A tibble: 336,776 x 11
#> origin dest mph year month day dep_time flight carrier dep_delay
#> <chr> <chr> <dbl> <int> <int> <int> <int> <int> <chr> <dbl>
#> 1 LGA ATL 703. 2013 5 25 1709 1499 DL 9
#> 2 EWR MSP 650. 2013 7 2 1558 4667 EV 45
#> 3 EWR GSP 648 2013 5 13 2040 4292 EV 15
#> 4 EWR BNA 641. 2013 3 23 1914 3805 EV 4
#> 5 LGA PBI 591. 2013 1 12 1559 1902 DL -1
#> 6 JFK SJU 564 2013 11 17 650 315 DL -5
#> # … with 336,770 more rows, and 1 more variable: arr_delay <dbl>
```
Five of the top ten flights had departure delays, and three of those were
able to make up that time in the air and arrive ahead of schedule.
Overall, there were a few flights that seemed unusually fast, but they all
fall into the realm of plausibility and likely are not data entry problems.
\[Ed. Please correct me if I am missing something]
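To check the counts in the preceding paragraph programmatically, here is a minimal sketch of my own (not part of the original solution) that tallies departure delays among the ten fastest flights by ground speed.

```
# Among the ten fastest flights by ground speed, count how many left late and
# how many of those still arrived ahead of schedule.
flights %>%
  mutate(mph = distance / (air_time / 60)) %>%
  arrange(desc(mph)) %>%
  head(10) %>%
  summarise(
    delayed_departure = sum(dep_delay > 0, na.rm = TRUE),
    made_up_time = sum(dep_delay > 0 & arr_delay < 0, na.rm = TRUE)
  )
```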
The second part of the question asks us to compare flights to the fastest flight
on a route to find the flights most delayed in the air. I will calculate the
amount a flight is delayed in air in two ways.
The first is the absolute delay, defined as the number of minutes longer than the fastest flight on that route,`air_time - min(air_time)`.
The second is the relative delay, which is the percentage increase in air time relative to the time of the fastest flight
along that route, `(air_time - min(air_time)) / min(air_time) * 100`.
```
air_time_delayed <-
flights %>%
group_by(origin, dest) %>%
mutate(
air_time_min = min(air_time, na.rm = TRUE),
air_time_delay = air_time - air_time_min,
air_time_delay_pct = air_time_delay / air_time_min * 100
)
#> Warning in min(air_time, na.rm = TRUE): no non-missing arguments to min;
#> returning Inf
```
The most delayed flight in air in minutes was DL841
from JFK to SFO which departed on
2013\-07\-28 at 17:27\. It took
189 minutes longer than the flight with the shortest
air time on its route.
```
air_time_delayed %>%
arrange(desc(air_time_delay)) %>%
select(
air_time_delay, carrier, flight,
origin, dest, year, month, day, dep_time,
air_time, air_time_min
) %>%
head() %>%
print(width = Inf)
#> # A tibble: 6 x 11
#> # Groups: origin, dest [5]
#> air_time_delay carrier flight origin dest year month day dep_time air_time
#> <dbl> <chr> <int> <chr> <chr> <int> <int> <int> <int> <dbl>
#> 1 189 DL 841 JFK SFO 2013 7 28 1727 490
#> 2 165 DL 426 JFK LAX 2013 11 22 1812 440
#> 3 163 AA 575 JFK EGE 2013 1 28 1806 382
#> 4 147 DL 17 JFK LAX 2013 7 10 1814 422
#> 5 145 UA 745 LGA DEN 2013 9 10 1513 331
#> 6 143 UA 587 EWR LAS 2013 11 22 2142 399
#> air_time_min
#> <dbl>
#> 1 301
#> 2 275
#> 3 219
#> 4 275
#> 5 186
#> 6 256
```
The most delayed flight in air as a percentage of the fastest flight along that
route was US2136
from LGA to BOS departing on 2013\-06\-17 at 16:52\.
It took 410% longer than the
flight with the shortest air time on its route.
```
air_time_delayed %>%
  arrange(desc(air_time_delay_pct)) %>%
select(
air_time_delay_pct, carrier, flight,
origin, dest, year, month, day, dep_time,
air_time, air_time_min
) %>%
head() %>%
print(width = Inf)
#> # A tibble: 6 x 11
#> # Groups: origin, dest [5]
#> air_time_delay_pct carrier flight origin dest year month day dep_time
#> <dbl> <chr> <int> <chr> <chr> <int> <int> <int> <int>
#> 1 62.8 DL 841 JFK SFO 2013 7 28 1727
#> 2 60 DL 426 JFK LAX 2013 11 22 1812
#> 3 74.4 AA 575 JFK EGE 2013 1 28 1806
#> 4 53.5 DL 17 JFK LAX 2013 7 10 1814
#> 5 78.0 UA 745 LGA DEN 2013 9 10 1513
#> 6 55.9 UA 587 EWR LAS 2013 11 22 2142
#> air_time air_time_min
#> <dbl> <dbl>
#> 1 490 301
#> 2 440 275
#> 3 382 219
#> 4 422 275
#> 5 331 186
#> 6 399 256
```
### Exercise 5\.7\.7
Find all destinations that are flown by at least two carriers.
Use that information to rank the carriers.
To restate this question, we are asked to rank airlines by the number of destinations that they fly to, considering only those airports that are flown to by two or more airlines.
There are two steps to calculating this ranking.
First, find all airports serviced by two or more carriers.
Then, rank carriers by the number of those destinations that they service.
```
flights %>%
# find all airports with > 1 carrier
group_by(dest) %>%
mutate(n_carriers = n_distinct(carrier)) %>%
filter(n_carriers > 1) %>%
  # rank carriers by number of destinations
group_by(carrier) %>%
summarize(n_dest = n_distinct(dest)) %>%
arrange(desc(n_dest))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 16 x 2
#> carrier n_dest
#> <chr> <int>
#> 1 EV 51
#> 2 9E 48
#> 3 UA 42
#> 4 DL 39
#> 5 B6 35
#> 6 AA 19
#> # … with 10 more rows
```
The carrier `"EV"` flies to the most destinations, considering only airports flown to by two or more carriers. What airline does the `"EV"` carrier code correspond to?
```
filter(airlines, carrier == "EV")
#> # A tibble: 1 x 2
#> carrier name
#> <chr> <chr>
#> 1 EV ExpressJet Airlines Inc.
```
Unless you know the airplane industry, it is likely that you don’t recognize [ExpressJet](https://en.wikipedia.org/wiki/ExpressJet); I certainly didn’t.
It is a regional airline that partners with major airlines to fly from hubs (larger airports) to smaller airports.
This means that many of the shorter flights of major carriers are operated by ExpressJet.
This business model explains why ExpressJet services the most destinations.
Among the airlines that fly to only one destination from New York are Alaska Airlines,
Frontier Airlines, and Hawaiian Airlines.
```
filter(airlines, carrier %in% c("AS", "F9", "HA"))
#> # A tibble: 3 x 2
#> carrier name
#> <chr> <chr>
#> 1 AS Alaska Airlines Inc.
#> 2 F9 Frontier Airlines Inc.
#> 3 HA Hawaiian Airlines Inc.
```
### Exercise 5\.7\.8
For each plane, count the number of flights before the first delay of greater than 1 hour.
The question does not specify arrival or departure delay.
I consider `dep_delay` in this answer, though similar code could be used for `arr_delay`.
```
flights %>%
  select(tailnum, year, month, day, dep_delay) %>%
  filter(!is.na(dep_delay)) %>%
  # sort each plane's flights in date order
  arrange(tailnum, year, month, day) %>%
group_by(tailnum) %>%
# cumulative number of flights delayed over one hour
mutate(cumulative_hr_delays = cumsum(dep_delay > 60)) %>%
  # count the flights that occur before the first delay of more than an hour
summarise(total_flights = sum(cumulative_hr_delays < 1)) %>%
arrange(total_flights)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 4,037 x 2
#> tailnum total_flights
#> <chr> <int>
#> 1 D942DN 0
#> 2 N10575 0
#> 3 N11106 0
#> 4 N11109 0
#> 5 N11187 0
#> 6 N11199 0
#> # … with 4,031 more rows
```
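An equivalent way to express this count (a sketch of my own, not part of the original solution) uses `dplyr::cumall()`, which stays `TRUE` only until the first flight with a departure delay of more than an hour.

```
# For each plane, count the flights that occur before its first departure delay
# of more than 60 minutes, using cumall() instead of the cumsum() trick above.
flights %>%
  filter(!is.na(dep_delay)) %>%
  arrange(tailnum, year, month, day) %>%
  group_by(tailnum) %>%
  summarise(total_flights = sum(cumall(dep_delay <= 60))) %>%
  arrange(total_flights)
```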
5\.1 Introduction
-----------------
```
library("nycflights13")
library("tidyverse")
```
5\.2 Filter rows with `filter()`
--------------------------------
### Exercise 5\.2\.1
Find all flights that
1. Had an arrival delay of two or more hours
2. Flew to Houston (IAH or HOU)
3. Were operated by United, American, or Delta
4. Departed in summer (July, August, and September)
5. Arrived more than two hours late, but didn’t leave late
6. Were delayed by at least an hour, but made up over 30 minutes in flight
7. Departed between midnight and 6 am (inclusive)
The answer to each part follows.
1. Since the `arr_delay` variable is measured in minutes, find
flights with an arrival delay of 120 or more minutes.
```
filter(flights, arr_delay >= 120)
#> # A tibble: 10,200 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 811 630 101 1047 830
#> 2 2013 1 1 848 1835 853 1001 1950
#> 3 2013 1 1 957 733 144 1056 853
#> 4 2013 1 1 1114 900 134 1447 1222
#> 5 2013 1 1 1505 1310 115 1638 1431
#> 6 2013 1 1 1525 1340 105 1831 1626
#> # … with 10,194 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
2. The flights that flew to Houston are those flights where the
destination (`dest`) is either “IAH” or “HOU”.
```
filter(flights, dest == "IAH" | dest == "HOU")
#> # A tibble: 9,313 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 623 627 -4 933 932
#> 4 2013 1 1 728 732 -4 1041 1038
#> 5 2013 1 1 739 739 0 1104 1038
#> 6 2013 1 1 908 908 0 1228 1219
#> # … with 9,307 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
However, using `%in%` is more compact and would scale to cases where
there were more than two airports we were interested in.
```
filter(flights, dest %in% c("IAH", "HOU"))
#> # A tibble: 9,313 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 623 627 -4 933 932
#> 4 2013 1 1 728 732 -4 1041 1038
#> 5 2013 1 1 739 739 0 1104 1038
#> 6 2013 1 1 908 908 0 1228 1219
#> # … with 9,307 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
3. In the `flights` dataset, the column `carrier` indicates the airline, but it uses two\-character carrier codes.
We can find the carrier codes for the airlines in the `airlines` dataset.
Since the carrier code dataset only has 16 rows, and the names
of the airlines in that dataset are not exactly “United”, “American”, or “Delta”,
it is easiest to manually look up their carrier codes in that data.
```
airlines
#> # A tibble: 16 x 2
#> carrier name
#> <chr> <chr>
#> 1 9E Endeavor Air Inc.
#> 2 AA American Airlines Inc.
#> 3 AS Alaska Airlines Inc.
#> 4 B6 JetBlue Airways
#> 5 DL Delta Air Lines Inc.
#> 6 EV ExpressJet Airlines Inc.
#> # … with 10 more rows
```
The carrier code for Delta is `"DL"`, for American is `"AA"`, and for United is `"UA"`.
Using these carrier codes, we check whether `carrier` is one of them.
```
filter(flights, carrier %in% c("AA", "DL", "UA"))
#> # A tibble: 139,504 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 542 540 2 923 850
#> 4 2013 1 1 554 600 -6 812 837
#> 5 2013 1 1 554 558 -4 740 728
#> 6 2013 1 1 558 600 -2 753 745
#> # … with 139,498 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
4. The variable `month` has the month, and it is numeric.
So, the summer flights are those that departed in months 7 (July), 8 (August), and 9 (September).
```
filter(flights, month >= 7, month <= 9)
#> # A tibble: 86,326 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 7 1 1 2029 212 236 2359
#> 2 2013 7 1 2 2359 3 344 344
#> 3 2013 7 1 29 2245 104 151 1
#> 4 2013 7 1 43 2130 193 322 14
#> 5 2013 7 1 44 2150 174 300 100
#> 6 2013 7 1 46 2051 235 304 2358
#> # … with 86,320 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
The `%in%` operator is an alternative. If the `:` operator is used to specify
the integer range, the expression is readable and compact.
```
filter(flights, month %in% 7:9)
#> # A tibble: 86,326 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 7 1 1 2029 212 236 2359
#> 2 2013 7 1 2 2359 3 344 344
#> 3 2013 7 1 29 2245 104 151 1
#> 4 2013 7 1 43 2130 193 322 14
#> 5 2013 7 1 44 2150 174 300 100
#> 6 2013 7 1 46 2051 235 304 2358
#> # … with 86,320 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
We could also use the `|` operator. However, the `|` does not scale to
many choices.
Even with only three choices, it is quite verbose.
```
filter(flights, month == 7 | month == 8 | month == 9)
#> # A tibble: 86,326 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 7 1 1 2029 212 236 2359
#> 2 2013 7 1 2 2359 3 344 344
#> 3 2013 7 1 29 2245 104 151 1
#> 4 2013 7 1 43 2130 193 322 14
#> 5 2013 7 1 44 2150 174 300 100
#> 6 2013 7 1 46 2051 235 304 2358
#> # … with 86,320 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
We can also use the `between()` function as shown in [Exercise 5\.2\.2](transform.html#exercise-5.2.2).
5. Flights that arrived more than two hours late, but didn’t leave late will
have an arrival delay of more than 120 minutes (`arr_delay > 120`) and
a non\-positive departure delay (`dep_delay <= 0`).
```
filter(flights, arr_delay > 120, dep_delay <= 0)
#> # A tibble: 29 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 27 1419 1420 -1 1754 1550
#> 2 2013 10 7 1350 1350 0 1736 1526
#> 3 2013 10 7 1357 1359 -2 1858 1654
#> 4 2013 10 16 657 700 -3 1258 1056
#> 5 2013 11 1 658 700 -2 1329 1015
#> 6 2013 3 18 1844 1847 -3 39 2219
#> # … with 23 more rows, and 11 more variables: arr_delay <dbl>, carrier <chr>,
#> # flight <int>, tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>,
#> # distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
6. Were delayed by at least an hour, but made up over 30 minutes in flight.
If a flight was delayed by at least an hour, then `dep_delay >= 60`.
If the flight didn’t make up any time in the air, then its arrival would be delayed by the same amount as its departure, meaning `dep_delay == arr_delay`, or alternatively, `dep_delay - arr_delay == 0`.
If it makes up over 30 minutes in the air, then the arrival delay must be at least 30 minutes less than the departure delay, which is stated as `dep_delay - arr_delay > 30`.
```
filter(flights, dep_delay >= 60, dep_delay - arr_delay > 30)
#> # A tibble: 1,844 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 2205 1720 285 46 2040
#> 2 2013 1 1 2326 2130 116 131 18
#> 3 2013 1 3 1503 1221 162 1803 1555
#> 4 2013 1 3 1839 1700 99 2056 1950
#> 5 2013 1 3 1850 1745 65 2148 2120
#> 6 2013 1 3 1941 1759 102 2246 2139
#> # … with 1,838 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
7. Finding flights that departed between midnight and 6 a.m. is complicated by
the way in which times are represented in the data.
In `dep_time`, midnight is represented by `2400`, not `0`.
You can verify this by checking the minimum and maximum of `dep_time`.
```
summary(flights$dep_time)
#> Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
#> 1 907 1401 1349 1744 2400 8255
```
This is an example of why it is always good to check the summary statistics of your data.
Unfortunately, this means we cannot simply check that `dep_time < 600`, because we also have
to consider the special case of midnight.
```
filter(flights, dep_time <= 600 | dep_time == 2400)
#> # A tibble: 9,373 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 542 540 2 923 850
#> 4 2013 1 1 544 545 -1 1004 1022
#> 5 2013 1 1 554 600 -6 812 837
#> 6 2013 1 1 554 558 -4 740 728
#> # … with 9,367 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
Alternatively, we could use the [modulo operator](https://en.wikipedia.org/wiki/Modulo_operation), `%%`.
The modulo operator returns the remainder of division.
Let’s see how this affects our times.
```
c(600, 1200, 2400) %% 2400
#> [1] 600 1200 0
```
Since `2400 %% 2400 == 0` and all other times are left unchanged,
we can compare the result of the modulo operation to `600`,
```
filter(flights, dep_time %% 2400 <= 600)
#> # A tibble: 9,373 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 542 540 2 923 850
#> 4 2013 1 1 544 545 -1 1004 1022
#> 5 2013 1 1 554 600 -6 812 837
#> 6 2013 1 1 554 558 -4 740 728
#> # … with 9,367 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
This filter expression is more compact, but its readability depends on the
familiarity of the reader with modular arithmetic.
### Exercise 5\.2\.2
Another useful dplyr filtering helper is `between()`. What does it do? Can you use it to simplify the code needed to answer the previous challenges?
The expression `between(x, left, right)` is equivalent to `x >= left & x <= right`.
Of the answers in the previous question, we could simplify the statement of *departed in summer* (`month >= 7 & month <= 9`) using the `between()` function.
```
filter(flights, between(month, 7, 9))
#> # A tibble: 86,326 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 7 1 1 2029 212 236 2359
#> 2 2013 7 1 2 2359 3 344 344
#> 3 2013 7 1 29 2245 104 151 1
#> 4 2013 7 1 43 2130 193 322 14
#> 5 2013 7 1 44 2150 174 300 100
#> 6 2013 7 1 46 2051 235 304 2358
#> # … with 86,320 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
### Exercise 5\.2\.3
How many flights have a missing `dep_time`? What other variables are missing? What might these rows represent?
Find the rows of flights with a missing departure time (`dep_time`) using the `is.na()` function.
```
filter(flights, is.na(dep_time))
#> # A tibble: 8,255 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 NA 1630 NA NA 1815
#> 2 2013 1 1 NA 1935 NA NA 2240
#> 3 2013 1 1 NA 1500 NA NA 1825
#> 4 2013 1 1 NA 600 NA NA 901
#> 5 2013 1 2 NA 1540 NA NA 1747
#> 6 2013 1 2 NA 1620 NA NA 1746
#> # … with 8,249 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
Notably, the arrival time (`arr_time`) is also missing for these rows. These seem to be cancelled flights.
The output of the function `summary()` includes the number of missing values for all non\-character variables.
```
summary(flights)
#> year month day dep_time sched_dep_time
#> Min. :2013 Min. : 1.00 Min. : 1.0 Min. : 1 Min. : 106
#> 1st Qu.:2013 1st Qu.: 4.00 1st Qu.: 8.0 1st Qu.: 907 1st Qu.: 906
#> Median :2013 Median : 7.00 Median :16.0 Median :1401 Median :1359
#> Mean :2013 Mean : 6.55 Mean :15.7 Mean :1349 Mean :1344
#> 3rd Qu.:2013 3rd Qu.:10.00 3rd Qu.:23.0 3rd Qu.:1744 3rd Qu.:1729
#> Max. :2013 Max. :12.00 Max. :31.0 Max. :2400 Max. :2359
#> NA's :8255
#> dep_delay arr_time sched_arr_time arr_delay carrier
#> Min. : -43 Min. : 1 Min. : 1 Min. : -86 Length:336776
#> 1st Qu.: -5 1st Qu.:1104 1st Qu.:1124 1st Qu.: -17 Class :character
#> Median : -2 Median :1535 Median :1556 Median : -5 Mode :character
#> Mean : 13 Mean :1502 Mean :1536 Mean : 7
#> 3rd Qu.: 11 3rd Qu.:1940 3rd Qu.:1945 3rd Qu.: 14
#> Max. :1301 Max. :2400 Max. :2359 Max. :1272
#> NA's :8255 NA's :8713 NA's :9430
#> flight tailnum origin dest
#> Min. : 1 Length:336776 Length:336776 Length:336776
#> 1st Qu.: 553 Class :character Class :character Class :character
#> Median :1496 Mode :character Mode :character Mode :character
#> Mean :1972
#> 3rd Qu.:3465
#> Max. :8500
#>
#> air_time distance hour minute
#> Min. : 20 Min. : 17 Min. : 1.0 Min. : 0.0
#> 1st Qu.: 82 1st Qu.: 502 1st Qu.: 9.0 1st Qu.: 8.0
#> Median :129 Median : 872 Median :13.0 Median :29.0
#> Mean :151 Mean :1040 Mean :13.2 Mean :26.2
#> 3rd Qu.:192 3rd Qu.:1389 3rd Qu.:17.0 3rd Qu.:44.0
#> Max. :695 Max. :4983 Max. :23.0 Max. :59.0
#> NA's :9430
#> time_hour
#> Min. :2013-01-01 05:00:00
#> 1st Qu.:2013-04-04 13:00:00
#> Median :2013-07-03 10:00:00
#> Mean :2013-07-03 05:22:54
#> 3rd Qu.:2013-10-01 07:00:00
#> Max. :2013-12-31 23:00:00
#>
```
### Exercise 5\.2\.4
Why is `NA ^ 0` not missing? Why is `NA | TRUE` not missing?
Why is `FALSE & NA` not missing? Can you figure out the general rule?
(`NA * 0` is a tricky counterexample!)
```
NA ^ 0
#> [1] 1
```
`NA ^ 0 == 1` since for all numeric values \\(x ^ 0 \= 1\\); whatever the missing value is, raising it to the power of zero gives 1.
```
NA | TRUE
#> [1] TRUE
```
`NA | TRUE` is `TRUE` because anything **or** `TRUE` is `TRUE`.
If the missing value were `TRUE`, then `TRUE | TRUE == TRUE`,
and if the missing value was `FALSE`, then `FALSE | TRUE == TRUE`.
```
NA & FALSE
#> [1] FALSE
```
The value of `NA & FALSE` is `FALSE` because anything **and** `FALSE` is always `FALSE`.
If the missing value were `TRUE`, then `TRUE & FALSE == FALSE`,
and if the missing value was `FALSE`, then `FALSE & FALSE == FALSE`.
```
NA | FALSE
#> [1] NA
```
For `NA | FALSE`, the value is unknown since `TRUE | FALSE == TRUE`, but `FALSE | FALSE == FALSE`.
```
NA & TRUE
#> [1] NA
```
For `NA & TRUE`, the value is unknown since `FALSE & TRUE == FALSE`, but `TRUE & TRUE == TRUE`.
```
NA * 0
#> [1] NA
```
Since \\(x \* 0 \= 0\\) for all finite numbers we might expect `NA * 0 == 0`, but that’s not the case.
The reason that `NA * 0 != 0` is that \\(0 \\times \\infty\\) and \\(0 \\times \-\\infty\\) are undefined.
R represents undefined results as `NaN`, which is an abbreviation of “[not a number](https://en.wikipedia.org/wiki/NaN)”.
```
Inf * 0
#> [1] NaN
-Inf * 0
#> [1] NaN
```
5\.3 Arrange rows with `arrange()`
----------------------------------
### Exercise 5\.3\.1
How could you use `arrange()` to sort all missing values to the start? (Hint: use `is.na()`).
The `arrange()` function puts `NA` values last.
```
arrange(flights, dep_time) %>%
tail()
#> # A tibble: 6 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 9 30 NA 1842 NA NA 2019
#> 2 2013 9 30 NA 1455 NA NA 1634
#> 3 2013 9 30 NA 2200 NA NA 2312
#> 4 2013 9 30 NA 1210 NA NA 1330
#> 5 2013 9 30 NA 1159 NA NA 1344
#> 6 2013 9 30 NA 840 NA NA 1020
#> # … with 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
#> # tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
#> # hour <dbl>, minute <dbl>, time_hour <dttm>
```
Using `desc()` does not change that.
```
arrange(flights, desc(dep_time))
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 10 30 2400 2359 1 327 337
#> 2 2013 11 27 2400 2359 1 515 445
#> 3 2013 12 5 2400 2359 1 427 440
#> 4 2013 12 9 2400 2359 1 432 440
#> 5 2013 12 9 2400 2250 70 59 2356
#> 6 2013 12 13 2400 2359 1 432 440
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
To put `NA` values first, we can add an indicator of whether the column has a missing value.
Then we sort by the missing indicator column and the column of interest.
For example, to sort the data frame by departure time (`dep_time`) in ascending order but `NA` values first, run the following.
```
arrange(flights, desc(is.na(dep_time)), dep_time)
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 NA 1630 NA NA 1815
#> 2 2013 1 1 NA 1935 NA NA 2240
#> 3 2013 1 1 NA 1500 NA NA 1825
#> 4 2013 1 1 NA 600 NA NA 901
#> 5 2013 1 2 NA 1540 NA NA 1747
#> 6 2013 1 2 NA 1620 NA NA 1746
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
The `flights` will first be sorted by `desc(is.na(dep_time))`.
Since `desc(is.na(dep_time))` is either `TRUE` when `dep_time` is missing, or `FALSE`, when it is not, the rows with missing values of `dep_time` will come first, since `TRUE > FALSE`.
### Exercise 5\.3\.2
Sort flights to find the most delayed flights. Find the flights that left earliest.
Find the most delayed flights by sorting the table by departure delay, `dep_delay`, in descending order.
```
arrange(flights, desc(dep_delay))
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 9 641 900 1301 1242 1530
#> 2 2013 6 15 1432 1935 1137 1607 2120
#> 3 2013 1 10 1121 1635 1126 1239 1810
#> 4 2013 9 20 1139 1845 1014 1457 2210
#> 5 2013 7 22 845 1600 1005 1044 1815
#> 6 2013 4 10 1100 1900 960 1342 2211
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
The most delayed flight was HA 51, JFK to HNL, which was scheduled to leave on January 09, 2013 09:00\.
Note that the departure time is given as 641, which seems to be less than the scheduled departure time.
But the departure was delayed 1,301 minutes, which is 21 hours, 41 minutes.
The departure time is the day after the scheduled departure time.
Be happy that you weren’t on that flight, and if you happened to have been on that flight and are reading this, I’m sorry for you.
Similarly, the earliest departing flight can be found by sorting `dep_delay` in ascending order.
```
arrange(flights, dep_delay)
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 12 7 2040 2123 -43 40 2352
#> 2 2013 2 3 2022 2055 -33 2240 2338
#> 3 2013 11 10 1408 1440 -32 1549 1559
#> 4 2013 1 11 1900 1930 -30 2233 2243
#> 5 2013 1 29 1703 1730 -27 1947 1957
#> 6 2013 8 9 729 755 -26 1002 955
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
Flight B6 97 (JFK to DEN), scheduled to depart on December 07, 2013 at 21:23,
departed 43 minutes early.
### Exercise 5\.3\.3
Sort flights to find the fastest flights.
There are actually two ways to interpret this question: one that can be solved by using `arrange()`, and a more complex interpretation that requires creation of a new variable using `mutate()`, which we haven’t seen demonstrated before.
The colloquial interpretation of “fastest” flight can be understood to mean “the flight with the shortest flight time”. We can use arrange to sort our data by the `air_time` variable to find the shortest flights:
```
head(arrange(flights, air_time))
#> # A tibble: 6 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 16 1355 1315 40 1442 1411
#> 2 2013 4 13 537 527 10 622 628
#> 3 2013 12 6 922 851 31 1021 954
#> 4 2013 2 3 2153 2129 24 2247 2224
#> 5 2013 2 5 1303 1315 -12 1342 1411
#> 6 2013 2 12 2123 2130 -7 2211 2225
#> # … with 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
#> # tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
#> # hour <dbl>, minute <dbl>, time_hour <dttm>
```
Another definition of the “fastest flight” is the flight with the highest average [ground speed](https://en.wikipedia.org/wiki/Ground_speed).
The ground speed is not included in the data, but it can be calculated from the `distance` and `air_time` of the flight.
```
head(arrange(flights, desc(distance / air_time)))
#> # A tibble: 6 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 5 25 1709 1700 9 1923 1937
#> 2 2013 7 2 1558 1513 45 1745 1719
#> 3 2013 5 13 2040 2025 15 2225 2226
#> 4 2013 3 23 1914 1910 4 2045 2043
#> 5 2013 1 12 1559 1600 -1 1849 1917
#> 6 2013 11 17 650 655 -5 1059 1150
#> # … with 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
#> # tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
#> # hour <dbl>, minute <dbl>, time_hour <dttm>
```
### Exercise 5\.3\.4
Which flights traveled the longest?
Which traveled the shortest?
To find the longest flight, sort the flights by the `distance` column in descending order.
```
arrange(flights, desc(distance))
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 857 900 -3 1516 1530
#> 2 2013 1 2 909 900 9 1525 1530
#> 3 2013 1 3 914 900 14 1504 1530
#> 4 2013 1 4 900 900 0 1516 1530
#> 5 2013 1 5 858 900 -2 1519 1530
#> 6 2013 1 6 1019 900 79 1558 1530
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
The longest flight is HA 51, JFK to HNL, which is 4,983 miles.
To find the shortest flight, sort the flights by the `distance` in ascending order, which is the default sort order.
```
arrange(flights, distance)
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 7 27 NA 106 NA NA 245
#> 2 2013 1 3 2127 2129 -2 2222 2224
#> 3 2013 1 4 1240 1200 40 1333 1306
#> 4 2013 1 4 1829 1615 134 1937 1721
#> 5 2013 1 4 2128 2129 -1 2218 2224
#> 6 2013 1 5 1155 1200 -5 1241 1306
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
The shortest flight is US 1632, EWR to LGA, which is only 17 miles.
This is a flight between two of the New York area airports.
However, this flight is missing a departure time, so either it did not actually fly or there is a problem with the data.
The terms “longest” and “shortest” could also refer to the time of the flight instead of the distance.
In that case, the longest and shortest flights can be found by sorting on the `air_time` column.
The longest flights by airtime are the following.
```
arrange(flights, desc(air_time))
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 3 17 1337 1335 2 1937 1836
#> 2 2013 2 6 853 900 -7 1542 1540
#> 3 2013 3 15 1001 1000 1 1551 1530
#> 4 2013 3 17 1006 1000 6 1607 1530
#> 5 2013 3 16 1001 1000 1 1544 1530
#> 6 2013 2 5 900 900 0 1555 1540
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
The shortest flights by airtime are the following.
```
arrange(flights, air_time)
#> # A tibble: 336,776 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 16 1355 1315 40 1442 1411
#> 2 2013 4 13 537 527 10 622 628
#> 3 2013 12 6 922 851 31 1021 954
#> 4 2013 2 3 2153 2129 24 2247 2224
#> 5 2013 2 5 1303 1315 -12 1342 1411
#> 6 2013 2 12 2123 2130 -7 2211 2225
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
5\.4 Select columns with `select()`
-----------------------------------
### Exercise 5\.4\.1
Brainstorm as many ways as possible to select `dep_time`, `dep_delay`, `arr_time`, and `arr_delay` from flights.
These are a few ways to select columns.
* Specify columns names as unquoted variable names.
```
select(flights, dep_time, dep_delay, arr_time, arr_delay)
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
* Specify column names as strings.
```
select(flights, "dep_time", "dep_delay", "arr_time", "arr_delay")
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
* Specify the column numbers of the variables.
```
select(flights, 4, 6, 7, 9)
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
This works, but is not good practice for two reasons.
First, the column location of variables may change, resulting in code that
may continue to run without error, but produce the wrong answer.
Second, the code is obfuscated, since it is not clear from the code which
variables are being selected. What variable does column 6 correspond to?
I just wrote the code, and I’ve already forgotten.
* Specify the names of the variables with a character vector and `any_of()` or `all_of()`.
```
select(flights, all_of(c("dep_time", "dep_delay", "arr_time", "arr_delay")))
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
```
select(flights, any_of(c("dep_time", "dep_delay", "arr_time", "arr_delay")))
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
This is useful because the names of the variables can be stored in a
variable and passed to `all_of()` or `any_of()`.
```
variables <- c("dep_time", "dep_delay", "arr_time", "arr_delay")
select(flights, all_of(variables))
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
These two functions replace the deprecated function `one_of()`.
* Selecting the variables by matching the start of their names using `starts_with()`.
```
select(flights, starts_with("dep_"), starts_with("arr_"))
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
* Selecting the variables using regular expressions with `matches()`.
Regular expressions provide a flexible way to match string patterns
and are discussed in the [Strings](https://r4ds.had.co.nz/strings.html) chapter.
```
select(flights, matches("^(dep|arr)_(time|delay)$"))
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
* Specify the names of the variables with a character vector and use the bang\-bang operator (`!!`).
```
variables <- c("dep_time", "dep_delay", "arr_time", "arr_delay")
select(flights, !!variables)
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
This and the following answers use the features of **tidy evaluation** not covered in R4DS but covered in the [Programming with dplyr](https://dplyr.tidyverse.org/articles/programming.html) vignette.
* Specify the names of the variables in a character or list vector and use the bang\-bang\-bang operator.
```
variables <- c("dep_time", "dep_delay", "arr_time", "arr_delay")
select(flights, !!!variables)
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
* Specify the unquoted names of the variables in a list using `syms()` and use the bang\-bang\-bang operator.
```
variables <- syms(c("dep_time", "dep_delay", "arr_time", "arr_delay"))
select(flights, !!!variables)
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2 830 11
#> 2 533 4 850 20
#> 3 542 2 923 33
#> 4 544 -1 1004 -18
#> 5 554 -6 812 -25
#> 6 554 -4 740 12
#> # … with 336,770 more rows
```
Some things that **don’t** work are:
* Matching the ends of their names using `ends_with()` since this will incorrectly
include other variables. For example,
```
select(flights, ends_with("arr_time"), ends_with("dep_time"))
#> # A tibble: 336,776 x 4
#> arr_time sched_arr_time dep_time sched_dep_time
#> <int> <int> <int> <int>
#> 1 830 819 517 515
#> 2 850 830 533 529
#> 3 923 850 542 540
#> 4 1004 1022 544 545
#> 5 812 837 554 600
#> 6 740 728 554 558
#> # … with 336,770 more rows
```
* Matching the names using `contains()` since there is not a pattern that can
include all these variables without incorrectly including others.
```
select(flights, contains("_time"), contains("arr_"))
#> # A tibble: 336,776 x 6
#> dep_time sched_dep_time arr_time sched_arr_time air_time arr_delay
#> <int> <int> <int> <int> <dbl> <dbl>
#> 1 517 515 830 819 227 11
#> 2 533 529 850 830 227 20
#> 3 542 540 923 850 160 33
#> 4 544 545 1004 1022 183 -18
#> 5 554 600 812 837 116 -25
#> 6 554 558 740 728 150 12
#> # … with 336,770 more rows
```
### Exercise 5\.4\.2
What happens if you include the name of a variable multiple times in a `select()` call?
The `select()` call ignores the duplication. Any duplicated variables are only included once, in the first location they appear. The `select()` function does not raise an error or warning or print any message if there are duplicated variables.
```
select(flights, year, month, day, year, year)
#> # A tibble: 336,776 x 3
#> year month day
#> <int> <int> <int>
#> 1 2013 1 1
#> 2 2013 1 1
#> 3 2013 1 1
#> 4 2013 1 1
#> 5 2013 1 1
#> 6 2013 1 1
#> # … with 336,770 more rows
```
This behavior is useful because it means that we can use `select()` with `everything()`
in order to easily change the order of columns without having to specify the names
of all the columns.
```
select(flights, arr_delay, everything())
#> # A tibble: 336,776 x 19
#> arr_delay year month day dep_time sched_dep_time dep_delay arr_time
#> <dbl> <int> <int> <int> <int> <int> <dbl> <int>
#> 1 11 2013 1 1 517 515 2 830
#> 2 20 2013 1 1 533 529 4 850
#> 3 33 2013 1 1 542 540 2 923
#> 4 -18 2013 1 1 544 545 -1 1004
#> 5 -25 2013 1 1 554 600 -6 812
#> 6 12 2013 1 1 554 558 -4 740
#> # … with 336,770 more rows, and 11 more variables: sched_arr_time <int>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
### Exercise 5\.4\.3
What does the `one_of()` function do? Why might it be helpful in conjunction with this vector?
The `one_of()` function selects variables with a character vector rather than unquoted variable name arguments.
This function is useful because it is easier to programmatically generate character vectors with variable names than to generate unquoted variable names, which are easier to type.
```
vars <- c("year", "month", "day", "dep_delay", "arr_delay")
select(flights, one_of(vars))
#> # A tibble: 336,776 x 5
#> year month day dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 2013 1 1 2 11
#> 2 2013 1 1 4 20
#> 3 2013 1 1 2 33
#> 4 2013 1 1 -1 -18
#> 5 2013 1 1 -6 -25
#> 6 2013 1 1 -4 12
#> # … with 336,770 more rows
```
In the most recent versions of **dplyr**, `one_of` has been deprecated in favor of two functions: `all_of()` and `any_of()`.
These functions behave similarly if all variables are present in the data frame.
```
select(flights, any_of(vars))
#> # A tibble: 336,776 x 5
#> year month day dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 2013 1 1 2 11
#> 2 2013 1 1 4 20
#> 3 2013 1 1 2 33
#> 4 2013 1 1 -1 -18
#> 5 2013 1 1 -6 -25
#> 6 2013 1 1 -4 12
#> # … with 336,770 more rows
```
```
select(flights, all_of(vars))
#> # A tibble: 336,776 x 5
#> year month day dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 2013 1 1 2 11
#> 2 2013 1 1 4 20
#> 3 2013 1 1 2 33
#> 4 2013 1 1 -1 -18
#> 5 2013 1 1 -6 -25
#> 6 2013 1 1 -4 12
#> # … with 336,770 more rows
```
These functions differ in their strictness.
The function `all_of()` will raise an error if one of the variable names is not present, while `any_of()` will ignore it.
```
vars2 <- c("year", "month", "day", "variable_not_in_the_dataframe")
select(flights, all_of(vars2))
#> Error: Can't subset columns that don't exist.
#> ✖ Column `variable_not_in_the_dataframe` doesn't exist.
```
```
select(flights, any_of(vars2))
#> # A tibble: 336,776 x 3
#> year month day
#> <int> <int> <int>
#> 1 2013 1 1
#> 2 2013 1 1
#> 3 2013 1 1
#> 4 2013 1 1
#> 5 2013 1 1
#> 6 2013 1 1
#> # … with 336,770 more rows
```
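Because it silently skips missing names, `any_of()` is also convenient for dropping columns that may or may not exist. A small sketch:

```
# Drop the listed columns if present; unknown names are silently ignored.
select(flights, -any_of(vars2))
```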
The deprecated function `one_of()` will raise a warning if an unknown column is encountered.
```
select(flights, one_of(vars2))
#> Warning: Unknown columns: `variable_not_in_the_dataframe`
#> # A tibble: 336,776 x 3
#> year month day
#> <int> <int> <int>
#> 1 2013 1 1
#> 2 2013 1 1
#> 3 2013 1 1
#> 4 2013 1 1
#> 5 2013 1 1
#> 6 2013 1 1
#> # … with 336,770 more rows
```
In the most recent versions of **dplyr**, the `one_of()` function is less necessary due to new behavior in the selection functions.
The `select()` function can now accept the name of a vector containing the variable names you wish to select:
```
select(flights, vars)
#> Note: Using an external vector in selections is ambiguous.
#> ℹ Use `all_of(vars)` instead of `vars` to silence this message.
#> ℹ See <https://tidyselect.r-lib.org/reference/faq-external-vector.html>.
#> This message is displayed once per session.
#> # A tibble: 336,776 x 5
#> year month day dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 2013 1 1 2 11
#> 2 2013 1 1 4 20
#> 3 2013 1 1 2 33
#> 4 2013 1 1 -1 -18
#> 5 2013 1 1 -6 -25
#> 6 2013 1 1 -4 12
#> # … with 336,770 more rows
```
However, there is a problem with the previous code.
The name `vars` could refer either to a column named `vars` in `flights` or to a different variable named `vars`.
What the code does will depend on whether or not `vars` is a column in `flights`.
If `vars` were a column in `flights`, then that code would only select the `vars` column.
For example:
```
flights <- mutate(flights, vars = 1)
select(flights, vars)
#> # A tibble: 336,776 x 1
#> vars
#> <dbl>
#> 1 1
#> 2 1
#> 3 1
#> 4 1
#> 5 1
#> 6 1
#> # … with 336,770 more rows
```
However, if `vars` is not a column in `flights`, as was originally the case, then `select()` will use the value of the external `vars` vector and select those columns.
If a column has the same name, or to ensure that an external vector can never conflict with the names of the columns in the data frame, use the `!!!` (bang\-bang\-bang) operator.
```
select(flights, !!!vars)
#> # A tibble: 336,776 x 5
#> year month day dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 2013 1 1 2 11
#> 2 2013 1 1 4 20
#> 3 2013 1 1 2 33
#> 4 2013 1 1 -1 -18
#> 5 2013 1 1 -6 -25
#> 6 2013 1 1 -4 12
#> # … with 336,770 more rows
```
This behavior, which is used by many **tidyverse** functions, is an example of what is called non\-standard evaluation (NSE) in R. See the **dplyr** vignette, [Programming with dplyr](https://dplyr.tidyverse.org/articles/programming.html), for more information on this topic.
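As a small illustration of the safer pattern described in that vignette, here is a sketch of a helper (the name `select_cols()` is our own, not part of dplyr) that always treats its input as an external character vector, so it can never be captured as a column name:

```
# A hypothetical wrapper: all_of() forces `cols` to be treated as an external
# character vector rather than a possible column name.
select_cols <- function(df, cols) {
  dplyr::select(df, dplyr::all_of(cols))
}
select_cols(flights, c("year", "month", "day"))
```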
### Exercise 5\.4\.4
Does the result of running the following code surprise you? How do the select helpers deal with case by default? How can you change that default?
```
select(flights, contains("TIME"))
#> # A tibble: 336,776 x 6
#> dep_time sched_dep_time arr_time sched_arr_time air_time time_hour
#> <int> <int> <int> <int> <dbl> <dttm>
#> 1 517 515 830 819 227 2013-01-01 05:00:00
#> 2 533 529 850 830 227 2013-01-01 05:00:00
#> 3 542 540 923 850 160 2013-01-01 05:00:00
#> 4 544 545 1004 1022 183 2013-01-01 05:00:00
#> 5 554 600 812 837 116 2013-01-01 06:00:00
#> 6 554 558 740 728 150 2013-01-01 05:00:00
#> # … with 336,770 more rows
```
The default behavior for `contains()` is to ignore case.
This may or may not surprise you.
If this behavior does not surprise you, that could be why it is the default.
Users searching for variable names probably have a better sense of the letters
in the variable than their capitalization.
A second, technical, reason is that dplyr works with more than R data frames.
It can also work with a variety of [databases](https://db.rstudio.com/dplyr/).
Some of these database engines have case insensitive column names, so making functions that match variable names
case insensitive by default will make the behavior of
`select()` consistent regardless of whether the table is
stored as an R data frame or in a database.
To change this behavior, add the argument `ignore.case = FALSE`.
```
select(flights, contains("TIME", ignore.case = FALSE))
#> # A tibble: 336,776 x 0
```
5\.5 Add new variables with `mutate()`
--------------------------------------
### Exercise 5\.5\.1
Currently `dep_time` and `sched_dep_time` are convenient to look at, but hard to compute with because they’re not really continuous numbers. Convert them to a more convenient representation of number of minutes since midnight.
To get the departure times in minutes since midnight, divide `dep_time` by 100 to get the hours since midnight, multiply that by 60, and add the remainder of `dep_time` divided by 100\.
For example, `1504` represents 15:04 (or 3:04 PM), which is 904 minutes after midnight.
To generalize this approach, we need a way to split out the hour\-digits from the minute\-digits.
Dividing by 100 and discarding the remainder using the integer division operator, `%/%`, gives us the following.
```
1504 %/% 100
#> [1] 15
```
Instead of `%/%`, we could also use `/` along with `trunc()` or `floor()`, but `round()` would not work.
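For example, a quick sketch of the equivalent calculation with ordinary division:

```
# Both drop the minute digits of 1504, leaving the hour.
trunc(1504 / 100)
#> [1] 15
floor(1504 / 100)
#> [1] 15
```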
To get the minutes, instead of discarding the remainder of the division by `100`,
we only want the remainder.
So we use the modulo operator, `%%`, discussed in the [Other Useful Functions](https://r4ds.had.co.nz/transform.html#select) section.
```
1504 %% 100
#> [1] 4
```
Now, we can combine the hours (multiplied by 60 to convert them to minutes) and
minutes to get the number of minutes after midnight.
```
1504 %/% 100 * 60 + 1504 %% 100
#> [1] 904
```
There is one remaining issue. Midnight is represented by `2400`, which would
correspond to `1440` minutes since midnight, but it should correspond to `0`.
After converting all the times to minutes after midnight, `x %% 1440` will convert
`1440` to zero while keeping all the other times the same.
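A quick check of this edge case:

```
# 2400 maps to 1440 minutes after midnight, and %% 1440 wraps that back to 0.
2400 %/% 100 * 60 + 2400 %% 100
#> [1] 1440
(2400 %/% 100 * 60 + 2400 %% 100) %% 1440
#> [1] 0
```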
Now we will put it all together.
The following code creates a new data frame `flights_times` with columns `dep_time_mins` and `sched_dep_time_mins`.
These columns convert `dep_time` and `sched_dep_time`, respectively, to minutes since midnight.
```
flights_times <- mutate(flights,
dep_time_mins = (dep_time %/% 100 * 60 + dep_time %% 100) %% 1440,
sched_dep_time_mins = (sched_dep_time %/% 100 * 60 +
sched_dep_time %% 100) %% 1440
)
# view only relevant columns
select(
flights_times, dep_time, dep_time_mins, sched_dep_time,
sched_dep_time_mins
)
#> # A tibble: 336,776 x 4
#> dep_time dep_time_mins sched_dep_time sched_dep_time_mins
#> <int> <dbl> <int> <dbl>
#> 1 517 317 515 315
#> 2 533 333 529 329
#> 3 542 342 540 340
#> 4 544 344 545 345
#> 5 554 354 600 360
#> 6 554 354 558 358
#> # … with 336,770 more rows
```
Looking ahead to the [Functions](https://r4ds.had.co.nz/functions.html) chapter,
this is precisely the sort of situation in which it would make sense to write
a function to avoid copying and pasting code.
We could define a function `time2mins()`, which converts a vector of times
from the format used in `flights` to minutes since midnight.
```
time2mins <- function(x) {
(x %/% 100 * 60 + x %% 100) %% 1440
}
```
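A quick check of the helper on a couple of values (5:17 a.m. and midnight):

```
time2mins(c(517, 2400))
#> [1] 317   0
```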
Using `time2mins`, the previous code simplifies to the following.
```
flights_times <- mutate(flights,
dep_time_mins = time2mins(dep_time),
sched_dep_time_mins = time2mins(sched_dep_time)
)
# show only the relevant columns
select(
flights_times, dep_time, dep_time_mins, sched_dep_time,
sched_dep_time_mins
)
#> # A tibble: 336,776 x 4
#> dep_time dep_time_mins sched_dep_time sched_dep_time_mins
#> <int> <dbl> <int> <dbl>
#> 1 517 317 515 315
#> 2 533 333 529 329
#> 3 542 342 540 340
#> 4 544 344 545 345
#> 5 554 354 600 360
#> 6 554 354 558 358
#> # … with 336,770 more rows
```
### Exercise 5\.5\.2
Compare `air_time` with `arr_time - dep_time`.
What do you expect to see?
What do you see?
What do you need to do to fix it?
I expect that `air_time` is the difference between the arrival (`arr_time`) and departure times (`dep_time`).
In other words, `air_time = arr_time - dep_time`.
To check this relationship, I'll first need to convert the times to a form more amenable to arithmetic operations, using the same calculations as in the [previous exercise](transform.html#exercise-5.5.1).
```
flights_airtime <-
mutate(flights,
dep_time = (dep_time %/% 100 * 60 + dep_time %% 100) %% 1440,
arr_time = (arr_time %/% 100 * 60 + arr_time %% 100) %% 1440,
air_time_diff = air_time - arr_time + dep_time
)
```
So, does `air_time = arr_time - dep_time`?
If so, there should be no flights with non\-zero values of `air_time_diff`.
```
nrow(filter(flights_airtime, air_time_diff != 0))
#> [1] 327150
```
It turns out that there are many flights for which `air_time != arr_time - dep_time`.
Other than data errors, I can think of two reasons why `air_time` would not equal `arr_time - dep_time`.
1. The flight passes midnight, so `arr_time < dep_time`.
In these cases, the difference in airtime should be off by 24 hours (1,440 minutes).
2. The flight crosses time zones, and the total air time will be off by hours (multiples of 60\).
All flights in `flights` departed from New York City and are domestic flights in the US.
This means that flights will all be to the same or more westerly time zones.
Given the time\-zones in the US, the differences due to time\-zone should be 60 minutes (Central),
120 minutes (Mountain), 180 minutes (Pacific), 240 minutes (Alaska), or 300 minutes (Hawaii).
Both of these explanations have clear patterns that I would expect to see if they
were true.
In particular, in both cases, since time\-zones and crossing midnight only affect the hour part of the time, all values of `air_time_diff` should be divisible by 60\.
I’ll visually check this hypothesis by plotting the distribution of `air_time_diff`.
If those two explanations are correct, distribution of `air_time_diff` should comprise only spikes at multiples of 60\.
```
ggplot(flights_airtime, aes(x = air_time_diff)) +
geom_histogram(binwidth = 1)
#> Warning: Removed 9430 rows containing non-finite values (stat_bin).
```
This is not the case.
While the distribution of `air_time_diff` has modes at multiples of 60, as hypothesized,
it also shows that there are many flights for which the difference between air time and the local arrival and departure times is not divisible by 60\.
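A numeric version of the same check is to compute the share of flights whose `air_time_diff` is a multiple of 60 (a sketch; if the two explanations above were the whole story, this share would be 1):

```
# Proportion of non-missing air_time_diff values divisible by 60.
mean(flights_airtime$air_time_diff %% 60 == 0, na.rm = TRUE)
```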
Let’s also look at flights with Los Angeles as a destination.
The discrepancy should be 180 minutes.
```
ggplot(filter(flights_airtime, dest == "LAX"), aes(x = air_time_diff)) +
geom_histogram(binwidth = 1)
#> Warning: Removed 148 rows containing non-finite values (stat_bin).
```
To fix these time\-zone issues, I would want to convert all the times to a date\-time to handle overnight flights, and from local time to a common time zone, most likely [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time), to handle flights crossing time\-zones.
The `tzone` column of `nycflights13::airports` gives the time\-zone of each airport.
See the [“Dates and Times”](https://r4ds.had.co.nz/dates-and-times.html) for an introduction on working with date and time data.
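A minimal sketch of the first step, assuming the **nycflights13** and **dplyr** packages loaded earlier in the chapter (the column name `dest_tzone` is our own choice):

```
# Attach each destination's time zone from the airports table.
flights_tz <- flights %>%
  left_join(select(airports, faa, dest_tzone = tzone), by = c("dest" = "faa"))
```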
But that still leaves the other differences unexplained.
So what else might be going on?
There seem to be too many problems for this to be data entry problems, so I’m probably missing something.
So, I’ll reread the documentation to make sure that I understand the definitions of `arr_time`, `dep_time`, and
`air_time`.
The documentation contains a link to the source of the `flights` data, [https://www.transtats.bts.gov/DL\_SelectFields.asp?Table\_ID\=236](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236).
This documentation shows that the `flights` data does not contain the variables `TaxiIn`, `TaxiOff`, `WheelsIn`, and `WheelsOff`.
It appears that the `air_time` variable refers to flight time, which is defined as the time between wheels\-off (take\-off) and wheels\-in (landing).
But the flight time does not include time spent on the runway taxiing to and from gates.
With this new understanding of the data, I now know that the relationship between `air_time`, `arr_time`, and `dep_time` is `air_time <= arr_time - dep_time`, supposing that the time zones of `arr_time` and `dep_time` are in the same time zone.
### Exercise 5\.5\.3
Compare `dep_time`, `sched_dep_time`, and `dep_delay`. How would you expect those three numbers to be related?
I would expect the departure delay (`dep_delay`) to be equal to the difference between scheduled departure time (`sched_dep_time`), and actual departure time (`dep_time`),
`dep_time - sched_dep_time = dep_delay`.
As with the previous question, the first step is to convert all times to the
number of minutes since midnight.
The column, `dep_delay_diff`, is the difference between the column, `dep_delay`, and
departure delay calculated directly from the scheduled and actual departure times.
```
flights_deptime <-
mutate(flights,
dep_time_min = (dep_time %/% 100 * 60 + dep_time %% 100) %% 1440,
sched_dep_time_min = (sched_dep_time %/% 100 * 60 +
sched_dep_time %% 100) %% 1440,
dep_delay_diff = dep_delay - dep_time_min + sched_dep_time_min
)
```
Does `dep_delay_diff` equal zero for all rows?
```
filter(flights_deptime, dep_delay_diff != 0)
#> # A tibble: 1,236 x 22
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 848 1835 853 1001 1950
#> 2 2013 1 2 42 2359 43 518 442
#> 3 2013 1 2 126 2250 156 233 2359
#> 4 2013 1 3 32 2359 33 504 442
#> 5 2013 1 3 50 2145 185 203 2311
#> 6 2013 1 3 235 2359 156 700 437
#> # … with 1,230 more rows, and 14 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>,
#> # dep_time_min <dbl>, sched_dep_time_min <dbl>, dep_delay_diff <dbl>
```
No. Unlike the last question, time zones are not an issue since we are only
considering departure times.[3](#fn3)
However, the discrepancies could be because a flight was scheduled to depart
before midnight but was delayed until after midnight.
All of these discrepancies are exactly equal to 1440 (24 hours), and the flights with these discrepancies were scheduled to depart later in the day.
```
ggplot(
filter(flights_deptime, dep_delay_diff > 0),
aes(y = sched_dep_time_min, x = dep_delay_diff)
) +
geom_point()
```
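A direct numeric check of that claim, using the `flights_deptime` data frame built above, is to tabulate the non-zero discrepancies; only the value 1440 should appear:

```
# Count the distinct non-zero values of dep_delay_diff.
flights_deptime %>%
  filter(dep_delay_diff != 0) %>%
  count(dep_delay_diff)
```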
Thus, the only cases in which the departure delay is not equal to the difference
between the scheduled and actual departure times are due to a quirk in how these
columns were stored.
### Exercise 5\.5\.4
Find the 10 most delayed flights using a ranking function.
How do you want to handle ties?
Carefully read the documentation for `min_rank()`.
The **dplyr** package provides multiple functions for ranking, which differ in how they handle tied values: `row_number()`, `min_rank()`, `dense_rank()`.
To see how they work, let’s create a data frame with duplicate values in a vector and see how ranking functions handle ties.
```
rankme <- tibble(
x = c(10, 5, 1, 5, 5)
)
```
```
rankme <- mutate(rankme,
x_row_number = row_number(x),
x_min_rank = min_rank(x),
x_dense_rank = dense_rank(x)
)
arrange(rankme, x)
#> # A tibble: 5 x 4
#> x x_row_number x_min_rank x_dense_rank
#> <dbl> <int> <int> <int>
#> 1 1 1 1 1
#> 2 5 2 2 2
#> 3 5 3 2 2
#> 4 5 4 2 2
#> 5 10 5 5 3
```
The function `row_number()` assigns each element a unique value.
The result is equivalent to the index (or row) number of each element after sorting the vector, hence its name.
The `min_rank()` and `dense_rank()` functions assign tied values the same rank, but differ in how they assign values to the next rank.
For each set of tied values the `min_rank()` function assigns a rank equal to the number of values less than that tied value plus one.
In contrast, the `dense_rank()` function assigns a rank equal to the number of distinct values less than that tied value plus one.
To see the difference between `dense_rank()` and `min_rank()` compare the value of `rankme$x_min_rank` and `rankme$x_dense_rank` for `x = 10`.
If I had to choose one for presenting rankings to someone else, I would use `min_rank()` since its results correspond to the most common usage of rankings in sports or other competitions.
In the code below, I use all three functions, but since there are no ties in the top 10 flights, the results don’t differ.
```
flights_delayed <- mutate(flights,
dep_delay_min_rank = min_rank(desc(dep_delay)),
dep_delay_row_number = row_number(desc(dep_delay)),
dep_delay_dense_rank = dense_rank(desc(dep_delay))
)
flights_delayed <- filter(flights_delayed,
!(dep_delay_min_rank > 10 | dep_delay_row_number > 10 |
dep_delay_dense_rank > 10))
flights_delayed <- arrange(flights_delayed, dep_delay_min_rank)
print(select(flights_delayed, month, day, carrier, flight, dep_delay,
dep_delay_min_rank, dep_delay_row_number, dep_delay_dense_rank),
n = Inf)
#> # A tibble: 10 x 8
#> month day carrier flight dep_delay dep_delay_min_r… dep_delay_row_n…
#> <int> <int> <chr> <int> <dbl> <int> <int>
#> 1 1 9 HA 51 1301 1 1
#> 2 6 15 MQ 3535 1137 2 2
#> 3 1 10 MQ 3695 1126 3 3
#> 4 9 20 AA 177 1014 4 4
#> 5 7 22 MQ 3075 1005 5 5
#> 6 4 10 DL 2391 960 6 6
#> 7 3 17 DL 2119 911 7 7
#> 8 6 27 DL 2007 899 8 8
#> 9 7 22 DL 2047 898 9 9
#> 10 12 5 AA 172 896 10 10
#> # … with 1 more variable: dep_delay_dense_rank <int>
```
In addition to the functions covered here, the `rank()` function provides several more ways of ranking elements.
There are other ways to solve this problem that do not use ranking functions.
To select the top 10, sort the values with `arrange()` and select the top rows with `slice()`:
```
flights_delayed2 <- arrange(flights, desc(dep_delay))
flights_delayed2 <- slice(flights_delayed2, 1:10)
select(flights_delayed2, month, day, carrier, flight, dep_delay)
#> # A tibble: 10 x 5
#> month day carrier flight dep_delay
#> <int> <int> <chr> <int> <dbl>
#> 1 1 9 HA 51 1301
#> 2 6 15 MQ 3535 1137
#> 3 1 10 MQ 3695 1126
#> 4 9 20 AA 177 1014
#> 5 7 22 MQ 3075 1005
#> 6 4 10 DL 2391 960
#> # … with 4 more rows
```
Alternatively, we could use the `top_n()` function.
```
flights_delayed3 <- top_n(flights, 10, dep_delay)
flights_delayed3 <- arrange(flights_delayed3, desc(dep_delay))
select(flights_delayed3, month, day, carrier, flight, dep_delay)
#> # A tibble: 10 x 5
#> month day carrier flight dep_delay
#> <int> <int> <chr> <int> <dbl>
#> 1 1 9 HA 51 1301
#> 2 6 15 MQ 3535 1137
#> 3 1 10 MQ 3695 1126
#> 4 9 20 AA 177 1014
#> 5 7 22 MQ 3075 1005
#> 6 4 10 DL 2391 960
#> # … with 4 more rows
```
The `arrange()` and `slice()` approach will always select exactly 10 rows, even if there are tied values, while `top_n()` keeps every row tied with the tenth\-largest value and so may return more than 10 rows, as the sketch below shows.
Ranking functions provide the most control over how tied values are handled:
`row_number()` yields exactly 10 rows, while `min_rank()` and `dense_rank()` can return all rows that tie for the 10 largest values of `dep_delay`.
If there are no ties, these approaches are equivalent.
If there are ties, then which is more appropriate depends on the use.
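To see the difference in tie handling concretely, here is a small sketch that reuses the `rankme` tibble from above: asking `top_n()` for the top two values of `x` returns four rows because of the three\-way tie at 5.

```
# top_n() keeps every row tied with the cutoff value, so all three 5s stay.
top_n(rankme, 2, x)
```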
### Exercise 5\.5\.5
What does `1:3 + 1:10` return? Why?
The code given in the question returns the following.
```
1:3 + 1:10
#> Warning in 1:3 + 1:10: longer object length is not a multiple of shorter object
#> length
#> [1] 2 4 6 5 7 9 8 10 12 11
```
This is equivalent to the following.
```
c(1 + 1, 2 + 2, 3 + 3, 1 + 4, 2 + 5, 3 + 6, 1 + 7, 2 + 8, 3 + 9, 1 + 10)
#> [1] 2 4 6 5 7 9 8 10 12 11
```
When adding two vectors, R recycles the shorter vector’s values to create a vector of the same length as the longer vector.
The code also raises a warning that the length of the longer vector is not a multiple of the length of the shorter vector.
A warning is raised because, when this occurs, the recycling is often unintended and may indicate a bug.
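When the longer length is an exact multiple of the shorter one, the recycling happens silently, which is one reason the warning is worth heeding:

```
# No warning here: length 10 is a multiple of length 2.
1:2 + 1:10
#> [1] 2 4 4 6 6 8 8 10 10 12
```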
### Exercise 5\.5\.6
What trigonometric functions does R provide?
The trigonometric functions are all described in a single help page, named `Trig`.
You can open the documentation for these functions with `?Trig` or by using `?` with any of the individual functions, for example: `?sin`.
R provides functions for the three primary trigonometric functions: sine (`sin()`), cosine (`cos()`), and tangent (`tan()`).
The input angles to all these functions are in [radians](https://en.wikipedia.org/wiki/Radian).
```
x <- seq(-3, 7, by = 1 / 2)
sin(pi * x)
#> [1] -3.67e-16 -1.00e+00 2.45e-16 1.00e+00 -1.22e-16 -1.00e+00 0.00e+00
#> [8] 1.00e+00 1.22e-16 -1.00e+00 -2.45e-16 1.00e+00 3.67e-16 -1.00e+00
#> [15] -4.90e-16 1.00e+00 6.12e-16 -1.00e+00 -7.35e-16 1.00e+00 8.57e-16
cos(pi * x)
#> [1] -1.00e+00 3.06e-16 1.00e+00 -1.84e-16 -1.00e+00 6.12e-17 1.00e+00
#> [8] 6.12e-17 -1.00e+00 -1.84e-16 1.00e+00 3.06e-16 -1.00e+00 -4.29e-16
#> [15] 1.00e+00 5.51e-16 -1.00e+00 -2.45e-15 1.00e+00 -9.80e-16 -1.00e+00
tan(pi * x)
#> [1] 3.67e-16 -3.27e+15 2.45e-16 -5.44e+15 1.22e-16 -1.63e+16 0.00e+00
#> [8] 1.63e+16 -1.22e-16 5.44e+15 -2.45e-16 3.27e+15 -3.67e-16 2.33e+15
#> [15] -4.90e-16 1.81e+15 -6.12e-16 4.08e+14 -7.35e-16 -1.02e+15 -8.57e-16
```
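If your angles are in degrees, a small conversion helper keeps the inputs in the radians these functions expect (the name `deg2rad()` is our own, not a base R function):

```
# Convert degrees to radians before calling the trigonometric functions.
deg2rad <- function(deg) deg * pi / 180
sin(deg2rad(90))
#> [1] 1
```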
In the previous code, I used the variable `pi`.
R provides the variable `pi`, which is set to the value of the mathematical constant \\(\\pi\\).[4](#fn4)
```
pi
#> [1] 3.14
```
Although R provides the `pi` variable, there is nothing preventing a user from changing its value.
For example, I could redefine `pi` to [3\.14](https://en.wikipedia.org/wiki/Indiana_Pi_Bill) or
any other value.
```
pi <- 3.14
pi
#> [1] 3.14
pi <- "Apple"
pi
#> [1] "Apple"
```
For that reason, if you are using the built\-in `pi` variable in computations and are paranoid, you may want to always reference it as `base::pi`.
```
base::pi
#> [1] 3.14
```
In the previous code block, since the angles were in radians, I wrote them as \\(\\pi\\) times some number.
Since it is often easier to write radians as a multiple of \\(\\pi\\), R provides some convenience functions that do that.
The function `sinpi(x)` is equivalent to `sin(pi * x)`.
The functions `cospi()` and `tanpi()` are similarly defined for the cosine and tangent functions, respectively.
```
sinpi(x)
#> [1] 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0
cospi(x)
#> [1] -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1
tanpi(x)
#> Warning in tanpi(x): NaNs produced
#> [1] 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0
#> [20] NaN 0
```
R provides the inverse functions arc\-cosine (`acos()`), arc\-sine (`asin()`), and arc\-tangent (`atan()`).
```
x <- seq(-1, 1, by = 1 / 4)
acos(x)
#> [1] 3.142 2.419 2.094 1.823 1.571 1.318 1.047 0.723 0.000
asin(x)
#> [1] -1.571 -0.848 -0.524 -0.253 0.000 0.253 0.524 0.848 1.571
atan(x)
#> [1] -0.785 -0.644 -0.464 -0.245 0.000 0.245 0.464 0.644 0.785
```
Finally, R provides the function `atan2()`.
Calling `atan2(y, x)` returns the angle between the x\-axis and the vector from `(0,0)` to `(x, y)`.
```
atan2(c(1, 0, -1, 0), c(0, 1, 0, -1))
#> [1] 1.57 0.00 -1.57 3.14
```
### Exercise 5\.5\.1
Currently `dep_time` and `sched_dep_time` are convenient to look at, but hard to compute with because they’re not really continuous numbers. Convert them to a more convenient representation of number of minutes since midnight.
To get the departure times in the number of minutes, divide `dep_time` by 100 to get the hours since midnight and multiply by 60 and add the remainder of `dep_time` divided by 100\.
For example, `1504` represents 15:04 (or 3:04 PM), which is 904 minutes after midnight.
To generalize this approach, we need a way to split out the hour\-digits from the minute\-digits.
Dividing by 100 and discarding the remainder using the integer division operator, `%/%` gives us the following.
```
1504 %/% 100
#> [1] 15
```
Instead of `%/%` could also use `/` along with `trunc()` or `floor()`, but `round()` would not work.
To get the minutes, instead of discarding the remainder of the division by `100`,
we only want the remainder.
So we use the modulo operator, `%%`, discussed in the [Other Useful Functions](https://r4ds.had.co.nz/transform.html#select) section.
```
1504 %% 100
#> [1] 4
```
Now, we can combine the hours (multiplied by 60 to convert them to minutes) and
minutes to get the number of minutes after midnight.
```
1504 %/% 100 * 60 + 1504 %% 100
#> [1] 904
```
There is one remaining issue. Midnight is represented by `2400`, which would
correspond to `1440` minutes since midnight, but it should correspond to `0`.
After converting all the times to minutes after midnight, `x %% 1440` will convert
`1440` to zero while keeping all the other times the same.
Now we will put it all together.
The following code creates a new data frame `flights_times` with columns `dep_time_mins` and `sched_dep_time_mins`.
These columns convert `dep_time` and `sched_dep_time`, respectively, to minutes since midnight.
```
flights_times <- mutate(flights,
dep_time_mins = (dep_time %/% 100 * 60 + dep_time %% 100) %% 1440,
sched_dep_time_mins = (sched_dep_time %/% 100 * 60 +
sched_dep_time %% 100) %% 1440
)
# view only relevant columns
select(
flights_times, dep_time, dep_time_mins, sched_dep_time,
sched_dep_time_mins
)
#> # A tibble: 336,776 x 4
#> dep_time dep_time_mins sched_dep_time sched_dep_time_mins
#> <int> <dbl> <int> <dbl>
#> 1 517 317 515 315
#> 2 533 333 529 329
#> 3 542 342 540 340
#> 4 544 344 545 345
#> 5 554 354 600 360
#> 6 554 354 558 358
#> # … with 336,770 more rows
```
Looking ahead to the [Functions](https://r4ds.had.co.nz/functions.html) chapter,
this is precisely the sort of situation in which it would make sense to write
a function to avoid copying and pasting code.
We could define a function `time2mins()`, which converts a vector of times in
from the format used in `flights` to minutes since midnight.
```
time2mins <- function(x) {
(x %/% 100 * 60 + x %% 100) %% 1440
}
```
Using `time2mins`, the previous code simplifies to the following.
```
flights_times <- mutate(flights,
dep_time_mins = time2mins(dep_time),
sched_dep_time_mins = time2mins(sched_dep_time)
)
# show only the relevant columns
select(
flights_times, dep_time, dep_time_mins, sched_dep_time,
sched_dep_time_mins
)
#> # A tibble: 336,776 x 4
#> dep_time dep_time_mins sched_dep_time sched_dep_time_mins
#> <int> <dbl> <int> <dbl>
#> 1 517 317 515 315
#> 2 533 333 529 329
#> 3 542 342 540 340
#> 4 544 344 545 345
#> 5 554 354 600 360
#> 6 554 354 558 358
#> # … with 336,770 more rows
```
### Exercise 5\.5\.2
Compare `air_time` with `arr_time - dep_time`.
What do you expect to see?
What do you see?
What do you need to do to fix it?
I expect that `air_time` is the difference between the arrival (`arr_time`) and departure times (`dep_time`).
In other words, `air_time = arr_time - dep_time`.
To check this relationship, I’ll first need to convert the times to a form more amenable to arithmetic operations using the same calculations as the [previous exercise](transform.html#exercise-5.5.1).
```
flights_airtime <-
mutate(flights,
dep_time = (dep_time %/% 100 * 60 + dep_time %% 100) %% 1440,
arr_time = (arr_time %/% 100 * 60 + arr_time %% 100) %% 1440,
air_time_diff = air_time - arr_time + dep_time
)
```
So, does `air_time = arr_time - dep_time`?
If so, there should be no flights with non\-zero values of `air_time_diff`.
```
nrow(filter(flights_airtime, air_time_diff != 0))
#> [1] 327150
```
It turns out that there are many flights for which `air_time != arr_time - dep_time`.
Other than data errors, I can think of two reasons why `air_time` would not equal `arr_time - dep_time`.
1. The flight passes midnight, so `arr_time < dep_time`.
In these cases, the difference in air time should be off by 24 hours (1,440 minutes).
2. The flight crosses time zones, and the total air time will be off by hours (multiples of 60\).
All flights in `flights` departed from New York City and are domestic flights in the US.
This means that flights will all be to the same or more westerly time zones.
Given the time\-zones in the US, the differences due to time\-zone should be 60 minutes (Central),
120 minutes (Mountain), 180 minutes (Pacific), 240 minutes (Alaska), or 300 minutes (Hawaii).
Both of these explanations have clear patterns that I would expect to see if they
were true.
In particular, in both cases, since time\-zones and crossing midnight only affect the hour part of the time, all values of `air_time_diff` should be divisible by 60\.
I’ll visually check this hypothesis by plotting the distribution of `air_time_diff`.
If those two explanations are correct, the distribution of `air_time_diff` should comprise only spikes at multiples of 60\.
```
ggplot(flights_airtime, aes(x = air_time_diff)) +
geom_histogram(binwidth = 1)
#> Warning: Removed 9430 rows containing non-finite values (stat_bin).
```
This is not the case.
While the distribution of `air_time_diff` has modes at multiples of 60, as hypothesized,
it also shows that there are many flights for which the difference between air time and the local arrival and departure times is not divisible by 60\.
Let’s also look at flights with Los Angeles as a destination.
The discrepancy should be 180 minutes.
```
ggplot(filter(flights_airtime, dest == "LAX"), aes(x = air_time_diff)) +
geom_histogram(binwidth = 1)
#> Warning: Removed 148 rows containing non-finite values (stat_bin).
```
To fix these time\-zone issues, I would want to convert all the times to a date\-time to handle overnight flights, and from local time to a common time zone, most likely [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time), to handle flights crossing time\-zones.
The `tzone` column of `nycflights13::airports` gives the time\-zone of each airport.
See the [“Dates and Times”](https://r4ds.had.co.nz/dates-and-times.html) chapter for an introduction to working with date and time data.
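As a rough sketch of that conversion (my own illustration, not the original solution), the departure times could be turned into proper UTC date\-times along these lines; the data frame and column names (`flights_dt`, `dep_dt_local`, `dep_dt_utc`) are hypothetical:
```
library(lubridate)

flights_dt <- flights %>%
  mutate(
    # build a local (America/New_York) date-time from the HHMM integer;
    # times stored as 2400 would need to be mapped to 0 first
    dep_dt_local = make_datetime(year, month, day,
                                 dep_time %/% 100, dep_time %% 100,
                                 tz = "America/New_York"),
    # express the same instant in a common time zone
    dep_dt_utc = with_tz(dep_dt_local, tzone = "UTC")
  )
```
Arrival times would additionally need each destination’s time zone from `airports$tzone` (for example via `lubridate::force_tzs()`), which is more involved.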
But that still leaves the other differences unexplained.
So what else might be going on?
There seem to be too many problems for this to be data entry problems, so I’m probably missing something.
So, I’ll reread the documentation to make sure that I understand the definitions of `arr_time`, `dep_time`, and
`air_time`.
The documentation contains a link to the source of the `flights` data, [https://www.transtats.bts.gov/DL\_SelectFields.asp?Table\_ID\=236](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236).
This documentation shows that the `flights` data does not contain the variables `TaxiIn`, `TaxiOut`, `WheelsOn`, and `WheelsOff`.
It appears that the `air_time` variable refers to flight time, which is defined as the time between wheels\-off (take\-off) and wheels\-on (landing).
But the flight time does not include time spent on the runway taxiing to and from gates.
With this new understanding of the data, I now know that the relationship between `air_time`, `arr_time`, and `dep_time` is `air_time <= arr_time - dep_time`, assuming that `arr_time` and `dep_time` are recorded in the same time zone.
### Exercise 5\.5\.3
Compare `dep_time`, `sched_dep_time`, and `dep_delay`. How would you expect those three numbers to be related?
I would expect the departure delay (`dep_delay`) to be equal to the difference between scheduled departure time (`sched_dep_time`), and actual departure time (`dep_time`),
`dep_time - sched_dep_time = dep_delay`.
As with the previous question, the first step is to convert all times to the
number of minutes since midnight.
The column, `dep_delay_diff`, is the difference between the column, `dep_delay`, and
departure delay calculated directly from the scheduled and actual departure times.
```
flights_deptime <-
mutate(flights,
dep_time_min = (dep_time %/% 100 * 60 + dep_time %% 100) %% 1440,
sched_dep_time_min = (sched_dep_time %/% 100 * 60 +
sched_dep_time %% 100) %% 1440,
dep_delay_diff = dep_delay - dep_time_min + sched_dep_time_min
)
```
Does `dep_delay_diff` equal zero for all rows?
```
filter(flights_deptime, dep_delay_diff != 0)
#> # A tibble: 1,236 x 22
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 848 1835 853 1001 1950
#> 2 2013 1 2 42 2359 43 518 442
#> 3 2013 1 2 126 2250 156 233 2359
#> 4 2013 1 3 32 2359 33 504 442
#> 5 2013 1 3 50 2145 185 203 2311
#> 6 2013 1 3 235 2359 156 700 437
#> # … with 1,230 more rows, and 14 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>,
#> # dep_time_min <dbl>, sched_dep_time_min <dbl>, dep_delay_diff <dbl>
```
No. Unlike the last question, time zones are not an issue since we are only
considering departure times.[3](#fn3)
However, the discrepancies could be because a flight was scheduled to depart
before midnight, but was delayed after midnight.
All of these discrepancies are exactly equal to 1440 (24 hours), and the flights with these discrepancies were scheduled to depart later in the day.
```
ggplot(
filter(flights_deptime, dep_delay_diff > 0),
aes(y = sched_dep_time_min, x = dep_delay_diff)
) +
geom_point()
```
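As a quick numeric check of that claim (a sketch of my own using the `flights_deptime` data frame created above), the following should return a single row with `dep_delay_diff` equal to 1440 and `n` equal to 1,236:
```
flights_deptime %>%
  filter(dep_delay_diff != 0) %>%
  count(dep_delay_diff)
```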
Thus, the only cases in which the departure delay is not equal to the difference
between the scheduled and actual departure times are due to a quirk in how these
columns are stored.
### Exercise 5\.5\.4
Find the 10 most delayed flights using a ranking function.
How do you want to handle ties?
Carefully read the documentation for `min_rank()`.
The **dplyr** package provides multiple functions for ranking, which differ in how they handle tied values: `row_number()`, `min_rank()`, `dense_rank()`.
To see how they work, let’s create a data frame with duplicate values in a vector and see how ranking functions handle ties.
```
rankme <- tibble(
x = c(10, 5, 1, 5, 5)
)
```
```
rankme <- mutate(rankme,
x_row_number = row_number(x),
x_min_rank = min_rank(x),
x_dense_rank = dense_rank(x)
)
arrange(rankme, x)
#> # A tibble: 5 x 4
#> x x_row_number x_min_rank x_dense_rank
#> <dbl> <int> <int> <int>
#> 1 1 1 1 1
#> 2 5 2 2 2
#> 3 5 3 2 2
#> 4 5 4 2 2
#> 5 10 5 5 3
```
The function `row_number()` assigns each element a unique value.
The result is equivalent to the index (or row) number of each element after sorting the vector, hence its name.
The `min_rank()` and `dense_rank()` functions assign tied values the same rank, but differ in the ranks they assign to the values that follow the ties.
For each set of tied values the `min_rank()` function assigns a rank equal to the number of values less than that tied value plus one.
In contrast, the `dense_rank()` function assigns a rank equal to the number of distinct values less than that tied value plus one.
To see the difference between `dense_rank()` and `min_rank()` compare the value of `rankme$x_min_rank` and `rankme$x_dense_rank` for `x = 10`.
If I had to choose one for presenting rankings to someone else, I would use `min_rank()` since its results correspond to the most common usage of rankings in sports or other competitions.
In the code below, I use all three functions, but since there are no ties in the top 10 flights, the results don’t differ.
```
flights_delayed <- mutate(flights,
dep_delay_min_rank = min_rank(desc(dep_delay)),
dep_delay_row_number = row_number(desc(dep_delay)),
dep_delay_dense_rank = dense_rank(desc(dep_delay))
)
flights_delayed <- filter(flights_delayed,
!(dep_delay_min_rank > 10 | dep_delay_row_number > 10 |
dep_delay_dense_rank > 10))
flights_delayed <- arrange(flights_delayed, dep_delay_min_rank)
print(select(flights_delayed, month, day, carrier, flight, dep_delay,
dep_delay_min_rank, dep_delay_row_number, dep_delay_dense_rank),
n = Inf)
#> # A tibble: 10 x 8
#> month day carrier flight dep_delay dep_delay_min_r… dep_delay_row_n…
#> <int> <int> <chr> <int> <dbl> <int> <int>
#> 1 1 9 HA 51 1301 1 1
#> 2 6 15 MQ 3535 1137 2 2
#> 3 1 10 MQ 3695 1126 3 3
#> 4 9 20 AA 177 1014 4 4
#> 5 7 22 MQ 3075 1005 5 5
#> 6 4 10 DL 2391 960 6 6
#> 7 3 17 DL 2119 911 7 7
#> 8 6 27 DL 2007 899 8 8
#> 9 7 22 DL 2047 898 9 9
#> 10 12 5 AA 172 896 10 10
#> # … with 1 more variable: dep_delay_dense_rank <int>
```
In addition to the functions covered here, the `rank()` function provides several more ways of ranking elements.
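For illustration (an additional sketch, not part of the original answer), `rank()` exposes its tie handling through the `ties.method` argument:
```
x <- c(10, 5, 1, 5, 5)
rank(x, ties.method = "average") # default: ties share the average of their ranks
#> [1] 5 3 1 3 3
rank(x, ties.method = "min") # same behavior as dplyr::min_rank()
#> [1] 5 2 1 2 2
rank(x, ties.method = "first") # same behavior as dplyr::row_number()
#> [1] 5 2 1 3 4
```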
There are other ways to solve this problem that do not use ranking functions.
To select the top 10, sort values with `arrange()` and select the top values with `slice`:
```
flights_delayed2 <- arrange(flights, desc(dep_delay))
flights_delayed2 <- slice(flights_delayed2, 1:10)
select(flights_delayed2, month, day, carrier, flight, dep_delay)
#> # A tibble: 10 x 5
#> month day carrier flight dep_delay
#> <int> <int> <chr> <int> <dbl>
#> 1 1 9 HA 51 1301
#> 2 6 15 MQ 3535 1137
#> 3 1 10 MQ 3695 1126
#> 4 9 20 AA 177 1014
#> 5 7 22 MQ 3075 1005
#> 6 4 10 DL 2391 960
#> # … with 4 more rows
```
Alternatively, we could use `top_n()`.
```
flights_delayed3 <- top_n(flights, 10, dep_delay)
flights_delayed3 <- arrange(flights_delayed3, desc(dep_delay))
select(flights_delayed3, month, day, carrier, flight, dep_delay)
#> # A tibble: 10 x 5
#> month day carrier flight dep_delay
#> <int> <int> <chr> <int> <dbl>
#> 1 1 9 HA 51 1301
#> 2 6 15 MQ 3535 1137
#> 3 1 10 MQ 3695 1126
#> 4 9 20 AA 177 1014
#> 5 7 22 MQ 3075 1005
#> 6 4 10 DL 2391 960
#> # … with 4 more rows
```
The `arrange()` and `slice()` approach always selects exactly 10 rows, even if there are tied values, while `top_n()` keeps ties and so may return more than 10 rows.
Ranking functions provide the most control over how tied values are handled:
they can return either exactly the first 10 rows or all rows tied for the 10 largest values of `dep_delay`.
If there are no ties, these approaches are equivalent.
If there are ties, then which is more appropriate depends on the use.
### Exercise 5\.5\.5
What does `1:3 + 1:10` return? Why?
The code given in the question returns the following.
```
1:3 + 1:10
#> Warning in 1:3 + 1:10: longer object length is not a multiple of shorter object
#> length
#> [1] 2 4 6 5 7 9 8 10 12 11
```
This is equivalent to the following.
```
c(1 + 1, 2 + 2, 3 + 3, 1 + 4, 2 + 5, 3 + 6, 1 + 7, 2 + 8, 3 + 9, 1 + 10)
#> [1] 2 4 6 5 7 9 8 10 12 11
```
When adding two vectors, R recycles the shorter vector’s values to create a vector of the same length as the longer vector.
The code also raises a warning that the longer vector’s length is not a multiple of the shorter vector’s length.
This warning is raised because, when that is the case, the recycling is often unintended and may indicate a bug.
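For comparison (an illustrative example I am adding), no warning is raised when the longer vector’s length is an exact multiple of the shorter vector’s length:
```
1:2 + 1:10
#> [1]  2  4  4  6  6  8  8 10 10 12
```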
### Exercise 5\.5\.6
What trigonometric functions does R provide?
All trigonometric functions are described in a single help page, named `Trig`.
You can open the documentation for these functions with `?Trig` or by using `?` with any of the individual functions, for example, `?sin`.
R provides functions for the three primary trigonometric functions: sine (`sin()`), cosine (`cos()`), and tangent (`tan()`).
The input angles to all these functions are in [radians](https://en.wikipedia.org/wiki/Radian).
```
x <- seq(-3, 7, by = 1 / 2)
sin(pi * x)
#> [1] -3.67e-16 -1.00e+00 2.45e-16 1.00e+00 -1.22e-16 -1.00e+00 0.00e+00
#> [8] 1.00e+00 1.22e-16 -1.00e+00 -2.45e-16 1.00e+00 3.67e-16 -1.00e+00
#> [15] -4.90e-16 1.00e+00 6.12e-16 -1.00e+00 -7.35e-16 1.00e+00 8.57e-16
cos(pi * x)
#> [1] -1.00e+00 3.06e-16 1.00e+00 -1.84e-16 -1.00e+00 6.12e-17 1.00e+00
#> [8] 6.12e-17 -1.00e+00 -1.84e-16 1.00e+00 3.06e-16 -1.00e+00 -4.29e-16
#> [15] 1.00e+00 5.51e-16 -1.00e+00 -2.45e-15 1.00e+00 -9.80e-16 -1.00e+00
tan(pi * x)
#> [1] 3.67e-16 -3.27e+15 2.45e-16 -5.44e+15 1.22e-16 -1.63e+16 0.00e+00
#> [8] 1.63e+16 -1.22e-16 5.44e+15 -2.45e-16 3.27e+15 -3.67e-16 2.33e+15
#> [15] -4.90e-16 1.81e+15 -6.12e-16 4.08e+14 -7.35e-16 -1.02e+15 -8.57e-16
```
In the previous code, I used the variable `pi`.
R provides the variable `pi` which is set to the value of the mathematical constant \\(\\pi\\) .[4](#fn4)
```
pi
#> [1] 3.14
```
Although R provides the `pi` variable, there is nothing preventing a user from changing its value.
For example, I could redefine `pi` to [3\.14](https://en.wikipedia.org/wiki/Indiana_Pi_Bill) or
any other value.
```
pi <- 3.14
pi
#> [1] 3.14
pi <- "Apple"
pi
#> [1] "Apple"
```
For that reason, if you are using the builtin `pi` variable in computations and are paranoid, you may want to always reference it as `base::pi`.
```
base::pi
#> [1] 3.14
```
In the previous code block, since the angles were in radians, I wrote them as \\(\\pi\\) times some number.
Since radians are often most naturally written as multiples of \\(\\pi\\), R provides some convenience functions that take the multiplier directly.
The function `sinpi(x)` is equivalent to `sin(pi * x)`.
The functions `cospi()` and `tanpi()` are similarly defined for the cosine and tangent functions, respectively.
```
sinpi(x)
#> [1] 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0
cospi(x)
#> [1] -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1
tanpi(x)
#> Warning in tanpi(x): NaNs produced
#> [1] 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0 NaN 0
#> [20] NaN 0
```
R also provides the inverse trigonometric functions: arc\-cosine (`acos()`), arc\-sine (`asin()`), and arc\-tangent (`atan()`).
```
x <- seq(-1, 1, by = 1 / 4)
acos(x)
#> [1] 3.142 2.419 2.094 1.823 1.571 1.318 1.047 0.723 0.000
asin(x)
#> [1] -1.571 -0.848 -0.524 -0.253 0.000 0.253 0.524 0.848 1.571
atan(x)
#> [1] -0.785 -0.644 -0.464 -0.245 0.000 0.245 0.464 0.644 0.785
```
Finally, R provides the function `atan2()`.
Calling `atan2(y, x)` returns the angle between the x\-axis and the vector from `(0,0)` to `(x, y)`.
```
atan2(c(1, 0, -1, 0), c(0, 1, 0, -1))
#> [1] 1.57 0.00 -1.57 3.14
```
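To see why `atan2()` takes two arguments (a small illustrative example of my own), note that `atan(y / x)` cannot distinguish opposite quadrants, while `atan2(y, x)` can, because it sees the signs of both coordinates:
```
atan(1 / 1)
#> [1] 0.785
atan(-1 / -1)
#> [1] 0.785
atan2(1, 1)
#> [1] 0.785
atan2(-1, -1)
#> [1] -2.36
```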
5\.6 Grouped summaries with `summarise()`
-----------------------------------------
### Exercise 5\.6\.1
Brainstorm at least 5 different ways to assess the typical delay characteristics of a group of flights.
Consider the following scenarios:
* A flight is 15 minutes early 50% of the time, and 15 minutes late 50% of the time.
* A flight is always 10 minutes late.
* A flight is 30 minutes early 50% of the time, and 30 minutes late 50% of the time.
* 99% of the time a flight is on time. 1% of the time it’s 2 hours late.
Which is more important: arrival delay or departure delay?
What this question gets at is a fundamental issue in data analysis: the cost function.
As analysts, we are interested in flight delays because they are costly to passengers.
But it is worth thinking carefully about *how* they are costly and using that information when ranking and measuring these scenarios.
In many scenarios, arrival delay is more important.
In most cases, arriving late is more costly to the passenger since it can disrupt the next stages of their travel, such as connecting flights or scheduled meetings.
If a departure is delayed without affecting the arrival time, the delay does not disrupt any subsequent plans, nor does it affect the total time spent traveling.
This delay could be beneficial, if less time is spent in the cramped confines of the airplane itself, or a negative, if that delayed time is still spent in the cramped confines of the airplane on the runway.
Variation in arrival time is worse than consistency.
If a flight is always 30 minutes late and that delay is known, then it is as if the arrival time is that delayed time.
The traveler could easily plan for this.
But higher variation in flight times makes it harder to plan.
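As a concrete starting point for the brainstorming (a sketch of my own, with illustrative thresholds rather than definitive choices), several summaries of “typical delay” could be computed for any grouping of flights, for example by destination:
```
flights %>%
  filter(!is.na(arr_delay)) %>%
  group_by(dest) %>%
  summarise(
    mean_delay = mean(arr_delay), # average delay
    median_delay = median(arr_delay), # typical (middle) delay
    sd_delay = sd(arr_delay), # variability of delays
    prop_late_15 = mean(arr_delay > 15), # share of flights more than 15 minutes late
    q90_delay = quantile(arr_delay, 0.9) # delay on a bad, but not exceptional, day
  )
```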
### Exercise 5\.6\.2
Come up with another approach that will give you the same output as `not_cancelled %>% count(dest)` and `not_cancelled %>% count(tailnum, wt = distance)` (without using `count()`).
```
not_cancelled <- flights %>%
filter(!is.na(dep_delay), !is.na(arr_delay))
```
The first expression is the following.
```
not_cancelled %>%
count(dest)
#> # A tibble: 104 x 2
#> dest n
#> <chr> <int>
#> 1 ABQ 254
#> 2 ACK 264
#> 3 ALB 418
#> 4 ANC 8
#> 5 ATL 16837
#> 6 AUS 2411
#> # … with 98 more rows
```
The `count()` function counts the number of instances within each group of variables.
Instead of using the `count()` function, we can combine the `group_by()` and `summarise()` verbs.
```
not_cancelled %>%
group_by(dest) %>%
summarise(n = length(dest))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 104 x 2
#> dest n
#> <chr> <int>
#> 1 ABQ 254
#> 2 ACK 264
#> 3 ALB 418
#> 4 ANC 8
#> 5 ATL 16837
#> 6 AUS 2411
#> # … with 98 more rows
```
An alternative method for getting the number of observations in a data frame is the function `n()`.
```
not_cancelled %>%
group_by(dest) %>%
summarise(n = n())
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 104 x 2
#> dest n
#> <chr> <int>
#> 1 ABQ 254
#> 2 ACK 264
#> 3 ALB 418
#> 4 ANC 8
#> 5 ATL 16837
#> 6 AUS 2411
#> # … with 98 more rows
```
Another alternative to `count()` is to use `group_by()` followed by `tally()`.
In fact, `count()` is effectively a short\-cut for `group_by()` followed by `tally()`.
```
not_cancelled %>%
group_by(tailnum) %>%
tally()
#> # A tibble: 4,037 x 2
#> tailnum n
#> <chr> <int>
#> 1 D942DN 4
#> 2 N0EGMQ 352
#> 3 N10156 145
#> 4 N102UW 48
#> 5 N103US 46
#> 6 N104UW 46
#> # … with 4,031 more rows
```
The second expression also uses the `count()` function, but adds a `wt` argument.
```
not_cancelled %>%
count(tailnum, wt = distance)
#> # A tibble: 4,037 x 2
#> tailnum n
#> <chr> <dbl>
#> 1 D942DN 3418
#> 2 N0EGMQ 239143
#> 3 N10156 109664
#> 4 N102UW 25722
#> 5 N103US 24619
#> 6 N104UW 24616
#> # … with 4,031 more rows
```
As before, we can replicate `count()` by combining the `group_by()` and `summarise()` verbs.
But this time instead of using `length()`, we will use `sum()` with the weighting variable.
```
not_cancelled %>%
group_by(tailnum) %>%
summarise(n = sum(distance))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 4,037 x 2
#> tailnum n
#> <chr> <dbl>
#> 1 D942DN 3418
#> 2 N0EGMQ 239143
#> 3 N10156 109664
#> 4 N102UW 25722
#> 5 N103US 24619
#> 6 N104UW 24616
#> # … with 4,031 more rows
```
Like the previous example, we can also use the combination `group_by()` and `tally()`.
Any arguments to `tally()` are summed.
```
not_cancelled %>%
group_by(tailnum) %>%
tally(distance)
#> # A tibble: 4,037 x 2
#> tailnum n
#> <chr> <dbl>
#> 1 D942DN 3418
#> 2 N0EGMQ 239143
#> 3 N10156 109664
#> 4 N102UW 25722
#> 5 N103US 24619
#> 6 N104UW 24616
#> # … with 4,031 more rows
```
### Exercise 5\.6\.3
Our definition of cancelled flights `(is.na(dep_delay) | is.na(arr_delay))` is slightly suboptimal.
Why?
Which is the most important column?
If a flight never departs, then it won’t arrive.
A flight could also depart and not arrive if it crashes, or if it is redirected and lands in an airport other than its intended destination.
So the most important column is `arr_delay`, which indicates the amount of delay in arrival.
```
filter(flights, !is.na(dep_delay), is.na(arr_delay)) %>%
select(dep_time, arr_time, sched_arr_time, dep_delay, arr_delay)
#> # A tibble: 1,175 x 5
#> dep_time arr_time sched_arr_time dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 1525 1934 1805 -5 NA
#> 2 1528 2002 1647 29 NA
#> 3 1740 2158 2020 -5 NA
#> 4 1807 2251 2103 29 NA
#> 5 1939 29 2151 59 NA
#> 6 1952 2358 2207 22 NA
#> # … with 1,169 more rows
```
In this data, both `dep_time` and `arr_time` can be non\-missing while `arr_delay` is missing.
Some further [research](https://hyp.is/TsdRpofJEeqzs6-vUOfVBg/jrnold.github.io/r4ds-exercise-solutions/transform.html) found that these rows correspond to diverted flights.
The [BTS](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236) database that is the source for the `flights` table contains additional information for diverted flights that is not included in the nycflights13 data.
The source contains a column `DivArrDelay` with the description:
> Difference in minutes between scheduled and actual arrival time for a diverted flight reaching scheduled destination.
> The `ArrDelay` column remains `NULL` for all diverted flights.
### Exercise 5\.6\.4
Look at the number of cancelled flights per day.
Is there a pattern?
Is the proportion of cancelled flights related to the average delay?
One pattern in cancelled flights per day is that the number of cancelled flights increases with the total number of flights per day.
The proportion of cancelled flights increases with the average delay of flights.
To answer these questions, I use the definition of cancelled flights from
[Section 5\.6\.3](https://r4ds.had.co.nz/transform.html#counts) of the chapter, noting that
`!(is.na(arr_delay) & is.na(dep_delay))` is equivalent to
`!is.na(arr_delay) | !is.na(dep_delay)` by [De Morgan’s law](https://en.wikipedia.org/wiki/De_Morgan%27s_laws).
The first part of the question asks for any pattern in the number of cancelled flights per day.
I’ll look at the relationship between the number of cancelled flights per day and the total number of flights in a day.
There should be an increasing relationship for two reasons.
First, if all flights are equally likely to be cancelled, then days with more flights should have a higher number of cancellations.
Second, it is likely that days with more flights would have a higher probability of cancellations because congestion itself can cause delays and any delay would affect more flights, and large delays can lead to cancellations.
```
cancelled_per_day <-
flights %>%
mutate(cancelled = (is.na(arr_delay) | is.na(dep_delay))) %>%
group_by(year, month, day) %>%
summarise(
cancelled_num = sum(cancelled),
flights_num = n(),
)
#> `summarise()` regrouping output by 'year', 'month' (override with `.groups` argument)
```
Plotting `flights_num` against `cancelled_num` shows that the number of flights
cancelled increases with the total number of flights.
```
ggplot(cancelled_per_day) +
geom_point(aes(x = flights_num, y = cancelled_num))
```
The second part of the question asks whether there is a relationship between the proportion of flights cancelled and the average departure delay.
I implied this in my answer to the first part of the question, when I noted that increasing delays could result in increased cancellations.
The question does not specify which delay, so I will show the relationship for both.
```
cancelled_and_delays <-
flights %>%
mutate(cancelled = (is.na(arr_delay) | is.na(dep_delay))) %>%
group_by(year, month, day) %>%
summarise(
cancelled_prop = mean(cancelled),
avg_dep_delay = mean(dep_delay, na.rm = TRUE),
avg_arr_delay = mean(arr_delay, na.rm = TRUE)
) %>%
ungroup()
#> `summarise()` regrouping output by 'year', 'month' (override with `.groups` argument)
```
There is a strong increasing relationship between the proportion of cancelled flights and
both the average departure delay and the average arrival delay.
```
ggplot(cancelled_and_delays) +
geom_point(aes(x = avg_dep_delay, y = cancelled_prop))
```
```
ggplot(cancelled_and_delays) +
geom_point(aes(x = avg_arr_delay, y = cancelled_prop))
```
### Exercise 5\.6\.5
Which carrier has the worst delays?
Challenge: can you disentangle the effects of bad airports vs. bad carriers?
Why/why not?
(Hint: think about `flights %>% group_by(carrier, dest) %>% summarise(n())`)
```
flights %>%
group_by(carrier) %>%
summarise(arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
arrange(desc(arr_delay))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 16 x 2
#> carrier arr_delay
#> <chr> <dbl>
#> 1 F9 21.9
#> 2 FL 20.1
#> 3 EV 15.8
#> 4 YV 15.6
#> 5 OO 11.9
#> 6 MQ 10.8
#> # … with 10 more rows
```
What airline corresponds to the `"F9"` carrier code?
```
filter(airlines, carrier == "F9")
#> # A tibble: 1 x 2
#> carrier name
#> <chr> <chr>
#> 1 F9 Frontier Airlines Inc.
```
You can get part of the way to disentangling the effects of airports versus bad carriers by comparing the average delay of each carrier to the average delay of flights within a route (flights from the same origin to the same destination).
Comparing delays between carriers and within each route disentangles the effect of carriers and airports.
A better analysis would compare the average delay of a carrier’s flights to the average delay of *all other* carrier’s flights within a route.
```
flights %>%
filter(!is.na(arr_delay)) %>%
# Total delay by carrier within each origin, dest
group_by(origin, dest, carrier) %>%
summarise(
arr_delay = sum(arr_delay),
flights = n()
) %>%
# Total delay within each origin dest
group_by(origin, dest) %>%
mutate(
arr_delay_total = sum(arr_delay),
flights_total = sum(flights)
) %>%
# average delay of each carrier - average delay of other carriers
ungroup() %>%
mutate(
arr_delay_others = (arr_delay_total - arr_delay) /
(flights_total - flights),
arr_delay_mean = arr_delay / flights,
arr_delay_diff = arr_delay_mean - arr_delay_others
) %>%
# remove NaN values (when there is only one carrier)
filter(is.finite(arr_delay_diff)) %>%
# average over all airports it flies to
group_by(carrier) %>%
summarise(arr_delay_diff = mean(arr_delay_diff)) %>%
arrange(desc(arr_delay_diff))
#> `summarise()` regrouping output by 'origin', 'dest' (override with `.groups` argument)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 15 x 2
#> carrier arr_delay_diff
#> <chr> <dbl>
#> 1 OO 27.3
#> 2 F9 17.3
#> 3 EV 11.0
#> 4 B6 6.41
#> 5 FL 2.57
#> 6 VX -0.202
#> # … with 9 more rows
```
There are more sophisticated ways to do this analysis; however, comparing the delays of flights within each route goes a long way toward disentangling airport and carrier effects.
To see a more complete example of this analysis, see this FiveThirtyEight [piece](https://fivethirtyeight.com/features/the-best-and-worst-airlines-airports-and-flights-summer-2015-update/).
### Exercise 5\.6\.6
What does the sort argument to `count()` do?
When might you use it?
The `sort` argument to `count()` sorts the results in descending order of `n`.
You could use this anytime you would run `count()` followed by `arrange()`.
For example, the following expression counts the number of flights to a destination and sorts the returned data from highest to lowest.
```
flights %>%
count(dest, sort = TRUE)
#> # A tibble: 105 x 2
#> dest n
#> <chr> <int>
#> 1 ORD 17283
#> 2 ATL 17215
#> 3 LAX 16174
#> 4 BOS 15508
#> 5 MCO 14082
#> 6 CLT 14064
#> # … with 99 more rows
```
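The same result can be obtained without `sort` by following `count()` with `arrange()` (a quick illustrative equivalent; output omitted since it matches the table above):
```
flights %>%
  count(dest) %>%
  arrange(desc(n))
```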
5\.7 Grouped mutates (and filters)
----------------------------------
### Exercise 5\.7\.1
Refer back to the lists of useful mutate and filtering functions.
Describe how each operation changes when you combine it with grouping.
Summary functions (`mean()`), offset functions (`lead()`, `lag()`), and ranking functions (`min_rank()`, `row_number()`) operate within each group when used with `group_by()` in
`mutate()` or `filter()`.
Arithmetic operators (`+`, `-`), logical operators (`<`, `==`), modular arithmetic operators (`%%`, `%/%`), and logarithmic functions (`log()`) are not affected by `group_by()`.
Summary functions like `mean()`, `median()`, `sum()`, `sd()`, and others covered
in the section [Useful Summary Functions](https://r4ds.had.co.nz/transform.html#summarise-funs)
calculate their values within each group when used with `mutate()` or `filter()` and `group_by()`.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(x_mean = mean(x)) %>%
group_by(group) %>%
mutate(x_mean_2 = mean(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group x_mean x_mean_2
#> <int> <chr> <dbl> <dbl>
#> 1 1 a 5 2
#> 2 2 a 5 2
#> 3 3 a 5 2
#> 4 4 b 5 5
#> 5 5 b 5 5
#> 6 6 b 5 5
#> # … with 3 more rows
```
Arithmetic operators `+`, `-`, `*`, `/`, `^` are not affected by `group_by()`.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(y = x + 2) %>%
group_by(group) %>%
mutate(z = x + 2)
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group y z
#> <int> <chr> <dbl> <dbl>
#> 1 1 a 3 3
#> 2 2 a 4 4
#> 3 3 a 5 5
#> 4 4 b 6 6
#> 5 5 b 7 7
#> 6 6 b 8 8
#> # … with 3 more rows
```
The modular arithmetic operators `%/%` and `%%` are not affected by `group_by()`.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(y = x %% 2) %>%
group_by(group) %>%
mutate(z = x %% 2)
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group y z
#> <int> <chr> <dbl> <dbl>
#> 1 1 a 1 1
#> 2 2 a 0 0
#> 3 3 a 1 1
#> 4 4 b 0 0
#> 5 5 b 1 1
#> 6 6 b 0 0
#> # … with 3 more rows
```
The logarithmic functions `log()`, `log2()`, and `log10()` are not affected by
`group_by()`.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(y = log(x)) %>%
group_by(group) %>%
mutate(z = log(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group y z
#> <int> <chr> <dbl> <dbl>
#> 1 1 a 0 0
#> 2 2 a 0.693 0.693
#> 3 3 a 1.10 1.10
#> 4 4 b 1.39 1.39
#> 5 5 b 1.61 1.61
#> 6 6 b 1.79 1.79
#> # … with 3 more rows
```
The offset functions `lead()` and `lag()` respect the groupings in `group_by()`.
The functions `lag()` and `lead()` will only return values within each group.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
group_by(group) %>%
mutate(lag_x = lag(x),
lead_x = lead(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group lag_x lead_x
#> <int> <chr> <int> <int>
#> 1 1 a NA 2
#> 2 2 a 1 3
#> 3 3 a 2 NA
#> 4 4 b NA 5
#> 5 5 b 4 6
#> 6 6 b 5 NA
#> # … with 3 more rows
```
The cumulative and rolling aggregate functions `cumsum()`, `cumprod()`, `cummin()`, `cummax()`, and `cummean()` calculate values within each group.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(x_cumsum = cumsum(x)) %>%
group_by(group) %>%
mutate(x_cumsum_2 = cumsum(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group x_cumsum x_cumsum_2
#> <int> <chr> <int> <int>
#> 1 1 a 1 1
#> 2 2 a 3 3
#> 3 3 a 6 6
#> 4 4 b 10 4
#> 5 5 b 15 9
#> 6 6 b 21 15
#> # … with 3 more rows
```
Logical comparisons, `<`, `<=`, `>`, `>=`, `!=`, and `==` are not affected by `group_by()`.
```
tibble(x = 1:9,
y = 9:1,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(x_lte_y = x <= y) %>%
group_by(group) %>%
mutate(x_lte_y_2 = x <= y)
#> # A tibble: 9 x 5
#> # Groups: group [3]
#> x y group x_lte_y x_lte_y_2
#> <int> <int> <chr> <lgl> <lgl>
#> 1 1 9 a TRUE TRUE
#> 2 2 8 a TRUE TRUE
#> 3 3 7 a TRUE TRUE
#> 4 4 6 b TRUE TRUE
#> 5 5 5 b TRUE TRUE
#> 6 6 4 b FALSE FALSE
#> # … with 3 more rows
```
Ranking functions like `min_rank()` work within each group when used with `group_by()`.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(rnk = min_rank(x)) %>%
group_by(group) %>%
mutate(rnk2 = min_rank(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group rnk rnk2
#> <int> <chr> <int> <int>
#> 1 1 a 1 1
#> 2 2 a 2 2
#> 3 3 a 3 3
#> 4 4 b 4 1
#> 5 5 b 5 2
#> 6 6 b 6 3
#> # … with 3 more rows
```
Though not asked in the question, note that `arrange()` ignores groups when sorting values.
```
tibble(x = runif(9),
group = rep(c("a", "b", "c"), each = 3)) %>%
group_by(group) %>%
arrange(x)
#> # A tibble: 9 x 2
#> # Groups: group [3]
#> x group
#> <dbl> <chr>
#> 1 0.00740 b
#> 2 0.0808 a
#> 3 0.157 b
#> 4 0.290 c
#> 5 0.466 b
#> 6 0.498 c
#> # … with 3 more rows
```
However, the order of values from `arrange()` can interact with groups when
used with functions that rely on the ordering of elements, such as `lead()`, `lag()`,
or `cumsum()`.
```
tibble(group = rep(c("a", "b", "c"), each = 3),
x = runif(9)) %>%
group_by(group) %>%
arrange(x) %>%
mutate(lag_x = lag(x))
#> # A tibble: 9 x 3
#> # Groups: group [3]
#> group x lag_x
#> <chr> <dbl> <dbl>
#> 1 b 0.0342 NA
#> 2 c 0.0637 NA
#> 3 a 0.175 NA
#> 4 c 0.196 0.0637
#> 5 b 0.320 0.0342
#> 6 b 0.402 0.320
#> # … with 3 more rows
```
### Exercise 5\.7\.2
Which plane (`tailnum`) has the worst on\-time record?
The question does not define a way to measure on\-time record, so I will consider two metrics:
1. proportion of flights not delayed or cancelled, and
2. mean arrival delay.
The first metric is the proportion of not\-cancelled and on\-time flights.
I use the presence of an arrival time as an indicator that a flight was not cancelled.
However, there are many planes that have never flown an on\-time flight.
Additionally, many of the planes that have the lowest proportion of on\-time flights have only flown a small number of flights.
```
flights %>%
filter(!is.na(tailnum)) %>%
mutate(on_time = !is.na(arr_time) & (arr_delay <= 0)) %>%
group_by(tailnum) %>%
summarise(on_time = mean(on_time), n = n()) %>%
filter(min_rank(on_time) == 1)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 110 x 3
#> tailnum on_time n
#> <chr> <dbl> <int>
#> 1 N121DE 0 2
#> 2 N136DL 0 1
#> 3 N143DA 0 1
#> 4 N17627 0 2
#> 5 N240AT 0 5
#> 6 N26906 0 1
#> # … with 104 more rows
```
So, I will restrict the analysis to planes that flew at least 20 flights.
The threshold of 20 was chosen because it is a round number near the first quartile of the number of flights per plane.[5](#fn5)[6](#fn6)
```
quantile(count(flights, tailnum)$n)
#> 0% 25% 50% 75% 100%
#> 1 23 54 110 2512
```
The plane with the worst on time record that flew at least 20 flights is:
```
flights %>%
filter(!is.na(tailnum), is.na(arr_time) | !is.na(arr_delay)) %>%
mutate(on_time = !is.na(arr_time) & (arr_delay <= 0)) %>%
group_by(tailnum) %>%
summarise(on_time = mean(on_time), n = n()) %>%
filter(n >= 20) %>%
filter(min_rank(on_time) == 1)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 1 x 3
#> tailnum on_time n
#> <chr> <dbl> <int>
#> 1 N988AT 0.189 37
```
There are cases where `arr_delay` is missing but `arr_time` is not missing.
I have not debugged the cause of this bad data, so these rows are dropped for
the purposes of this exercise.
The second metric is the mean minutes delayed.
As with the previous metric, I will only consider planes which flew at least 20 flights.
A different plane has the worst on\-time record when measured as average minutes delayed.
```
flights %>%
filter(!is.na(arr_delay)) %>%
group_by(tailnum) %>%
summarise(arr_delay = mean(arr_delay), n = n()) %>%
filter(n >= 20) %>%
filter(min_rank(desc(arr_delay)) == 1)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 1 x 3
#> tailnum arr_delay n
#> <chr> <dbl> <int>
#> 1 N203FR 59.1 41
```
### Exercise 5\.7\.3
What time of day should you fly if you want to avoid delays as much as possible?
Let’s group by the hour of the flight.
The earlier the flight is scheduled, the lower its expected delay.
This is intuitive, as delays accumulate over the day and propagate to later flights.
Morning flights have fewer (if any) previous flights that can delay them.
```
flights %>%
group_by(hour) %>%
summarise(arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
arrange(arr_delay)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 20 x 2
#> hour arr_delay
#> <dbl> <dbl>
#> 1 7 -5.30
#> 2 5 -4.80
#> 3 6 -3.38
#> 4 9 -1.45
#> 5 8 -1.11
#> 6 10 0.954
#> # … with 14 more rows
```
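A quick plot (my own sketch, not part of the original answer) makes this pattern over the course of the day easier to see:
```
flights %>%
  group_by(hour) %>%
  summarise(arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
  ggplot(aes(x = hour, y = arr_delay)) +
  geom_point() +
  geom_line()
```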
### Exercise 5\.7\.4
For each destination, compute the total minutes of delay.
For each flight, compute the proportion of the total delay for its destination.
The key to answering this question is to only include delayed flights when calculating the total delay and proportion of delay.
```
flights %>%
filter(arr_delay > 0) %>%
group_by(dest) %>%
mutate(
arr_delay_total = sum(arr_delay),
arr_delay_prop = arr_delay / arr_delay_total
) %>%
select(dest, month, day, dep_time, carrier, flight,
arr_delay, arr_delay_prop) %>%
arrange(dest, desc(arr_delay_prop))
#> # A tibble: 133,004 x 8
#> # Groups: dest [103]
#> dest month day dep_time carrier flight arr_delay arr_delay_prop
#> <chr> <int> <int> <int> <chr> <int> <dbl> <dbl>
#> 1 ABQ 7 22 2145 B6 1505 153 0.0341
#> 2 ABQ 12 14 2223 B6 65 149 0.0332
#> 3 ABQ 10 15 2146 B6 65 138 0.0308
#> 4 ABQ 7 23 2206 B6 1505 137 0.0305
#> 5 ABQ 12 17 2220 B6 65 136 0.0303
#> 6 ABQ 7 10 2025 B6 1505 126 0.0281
#> # … with 132,998 more rows
```
There is some ambiguity in the meaning of the term *flights* in the question.
The first example defined a flight as a row in the `flights` table, which is a trip by an aircraft from an airport at a particular date and time.
However, *flight* could also refer to the [flight number](https://en.wikipedia.org/wiki/Flight_number), which is the code a carrier uses for an airline service of a route.
For example, `AA1` is the flight number of the 09:00 American Airlines flight between JFK and LAX.
The flight number is contained in the `flights$flight` column, though what is called a “flight” is a combination of the `flights$carrier` and `flights$flight` columns.
```
flights %>%
filter(arr_delay > 0) %>%
group_by(dest, origin, carrier, flight) %>%
summarise(arr_delay = sum(arr_delay)) %>%
group_by(dest) %>%
mutate(
arr_delay_prop = arr_delay / sum(arr_delay)
) %>%
arrange(dest, desc(arr_delay_prop)) %>%
select(carrier, flight, origin, dest, arr_delay_prop)
#> `summarise()` regrouping output by 'dest', 'origin', 'carrier' (override with `.groups` argument)
#> # A tibble: 8,834 x 5
#> # Groups: dest [103]
#> carrier flight origin dest arr_delay_prop
#> <chr> <int> <chr> <chr> <dbl>
#> 1 B6 1505 JFK ABQ 0.567
#> 2 B6 65 JFK ABQ 0.433
#> 3 B6 1191 JFK ACK 0.475
#> 4 B6 1491 JFK ACK 0.414
#> 5 B6 1291 JFK ACK 0.0898
#> 6 B6 1195 JFK ACK 0.0208
#> # … with 8,828 more rows
```
### Exercise 5\.7\.5
Delays are typically temporally correlated: even once the problem that caused the initial delay has been resolved, later flights are delayed to allow earlier flights to leave. Using `lag()` explore how the delay of a flight is related to the delay of the immediately preceding flight.
This calculates the departure delay of the preceding flight from the same airport.
```
lagged_delays <- flights %>%
arrange(origin, month, day, dep_time) %>%
group_by(origin) %>%
mutate(dep_delay_lag = lag(dep_delay)) %>%
filter(!is.na(dep_delay), !is.na(dep_delay_lag))
```
This plots the mean departure delay of a flight against the delay of the immediately preceding flight.
For delays of less than two hours, the relationship between the delay of the preceding flight and the current flight is nearly linear.
After that, the relationship becomes more variable, as long\-delayed flights are interspersed with flights leaving on time.
After about 8 hours, a delayed flight is likely to be followed by a flight leaving on time.
```
lagged_delays %>%
group_by(dep_delay_lag) %>%
summarise(dep_delay_mean = mean(dep_delay)) %>%
ggplot(aes(y = dep_delay_mean, x = dep_delay_lag)) +
geom_point() +
scale_x_continuous(breaks = seq(0, 1500, by = 120)) +
labs(y = "Departure Delay", x = "Previous Departure Delay")
```
The overall relationship looks similar in all three origin airports.
```
lagged_delays %>%
group_by(origin, dep_delay_lag) %>%
summarise(dep_delay_mean = mean(dep_delay)) %>%
ggplot(aes(y = dep_delay_mean, x = dep_delay_lag)) +
geom_point() +
facet_wrap(~ origin, ncol=1) +
labs(y = "Departure Delay", x = "Previous Departure Delay")
#> `summarise()` regrouping output by 'origin' (override with `.groups` argument)
```
### Exercise 5\.7\.6
Look at each destination. Can you find flights that are suspiciously fast?
(i.e. flights that represent a potential data entry error).
Compute the air time of a flight relative to the shortest flight to that destination.
Which flights were most delayed in the air?
When calculating this answer we should only compare flights within the same (origin, destination) pair.
To find unusual observations, we need to first put them on the same scale.
I will [standardize](https://en.wikipedia.org/wiki/Standard_score)
values by subtracting the mean from each and then dividing each by the standard deviation.
\\\[
\\mathsf{standardized}(x) \= \\frac{x \- \\mathsf{mean}(x)}{\\mathsf{sd}(x)} .
\\]
A standardized variable is often called a \\(z\\)\-score.
The units of the standardized variable are standard deviations from the mean.
This will put the flight times from different routes on the same scale.
The larger the magnitude of the standardized variable for an observation, the more unusual the observation is.
Flights with negative values of the standardized variable are faster than the
mean flight for that route, while those with positive values are slower than
the mean flight for that route.
```
standardized_flights <- flights %>%
filter(!is.na(air_time)) %>%
group_by(dest, origin) %>%
mutate(
air_time_mean = mean(air_time),
air_time_sd = sd(air_time),
n = n()
) %>%
ungroup() %>%
mutate(air_time_standard = (air_time - air_time_mean) / (air_time_sd + 1))
```
I add 1 to the denominator to avoid dividing by zero when a route’s air times have zero standard deviation.
Note that the `ungroup()` here is not strictly necessary for this calculation.
However, I will be using this data frame later, and through experience I have found
that I have fewer bugs when I keep a data frame grouped only for those verbs that need it.
If I left this data frame grouped, later group\-aware verbs could behave in unexpected ways,
so it is better to err on the side of calling `ungroup()` once the grouped work is done.
The distribution of the standardized air times has a long right tail.
```
ggplot(standardized_flights, aes(x = air_time_standard)) +
geom_density()
#> Warning: Removed 4 rows containing non-finite values (stat_density).
```
Unusually fast flights are those flights with the smallest standardized values.
```
standardized_flights %>%
arrange(air_time_standard) %>%
select(
carrier, flight, origin, dest, month, day,
air_time, air_time_mean, air_time_standard
) %>%
head(10) %>%
print(width = Inf)
#> # A tibble: 10 x 9
#> carrier flight origin dest month day air_time air_time_mean
#> <chr> <int> <chr> <chr> <int> <int> <dbl> <dbl>
#> 1 DL 1499 LGA ATL 5 25 65 114.
#> 2 EV 4667 EWR MSP 7 2 93 151.
#> 3 EV 4292 EWR GSP 5 13 55 93.2
#> 4 EV 3805 EWR BNA 3 23 70 115.
#> 5 EV 4687 EWR CVG 9 29 62 96.1
#> 6 B6 2002 JFK BUF 11 10 38 57.1
#> air_time_standard
#> <dbl>
#> 1 -4.56
#> 2 -4.46
#> 3 -4.20
#> 4 -3.73
#> 5 -3.60
#> 6 -3.38
#> # … with 4 more rows
```
I used `width = Inf` to ensure that all columns will be printed.
The fastest flight is DL1499 from LGA to
ATL which departed on
2013\-05\-25 at 17:09\.
It has an air time of 65 minutes, compared to an average
flight time of 114 minutes for its route.
This is 4\.6 standard deviations below
the average flight on its route.
It is important to note that this does not necessarily imply that there was a data entry error.
We should check these flights to see whether there was some reason for the difference.
It may be that we are missing some piece of information that explains these unusual times.
A potential issue with the way that we standardized the flights is that the mean and standard deviation used to calculate are sensitive to outliers and outliers is what we are looking for.
Instead of standardizing variables with the mean and variance, we could use the median
as a measure of central tendency and the interquartile range (IQR) as a measure of spread.
The median and IQR are more [resistant to outliers](https://en.wikipedia.org/wiki/Robust_statistics) than the mean and standard deviation.
The following method uses the median and inter\-quartile range, which are less sensitive to outliers.
```
standardized_flights2 <- flights %>%
filter(!is.na(air_time)) %>%
group_by(dest, origin) %>%
mutate(
air_time_median = median(air_time),
air_time_iqr = IQR(air_time),
n = n(),
air_time_standard = (air_time - air_time_median) / air_time_iqr)
```
The distribution of the standardized air flights using this new definition
also has long right tail of slow flights.
```
ggplot(standardized_flights2, aes(x = air_time_standard)) +
geom_density()
#> Warning: Removed 4 rows containing non-finite values (stat_density).
```
Unusually fast flights are those flights with the smallest standardized values.
```
standardized_flights2 %>%
arrange(air_time_standard) %>%
select(
carrier, flight, origin, dest, month, day, air_time,
air_time_median, air_time_standard
) %>%
head(10) %>%
print(width = Inf)
#> # A tibble: 10 x 9
#> # Groups: dest, origin [10]
#> carrier flight origin dest month day air_time air_time_median
#> <chr> <int> <chr> <chr> <int> <int> <dbl> <dbl>
#> 1 EV 4667 EWR MSP 7 2 93 149
#> 2 DL 1499 LGA ATL 5 25 65 112
#> 3 US 2132 LGA BOS 3 2 21 37
#> 4 B6 30 JFK ROC 3 25 35 51
#> 5 B6 2002 JFK BUF 11 10 38 57
#> 6 EV 4292 EWR GSP 5 13 55 92
#> air_time_standard
#> <dbl>
#> 1 -3.5
#> 2 -3.36
#> 3 -3.2
#> 4 -3.2
#> 5 -3.17
#> 6 -3.08
#> # … with 4 more rows
```
All of these answers have relied only on using a distribution of comparable observations to find unusual observations.
In this case, the comparable observations were flights from the same origin to the same destination.
Apart from our knowledge that flights from the same origin to the same destination should have similar air times, we have not used any other domain\-specific knowledge.
But we know much more about this problem.
The most obvious piece of knowledge we have is that we know that flights cannot travel back in time, so there should never be a flight with a negative airtime.
But we also know that aircraft have maximum speeds.
While different aircraft have different [cruising speeds](https://en.wikipedia.org/wiki/Cruise_(aeronautics)), commercial airliners
typically cruise at air speeds around 547–575 mph.
Calculating the ground speed of aircraft is complicated by the way in which winds, especially the influence of wind, especially jet streams, on the ground\-speed of flights.
A strong tailwind can increase ground\-speed of the aircraft by [200 mph](https://www.wired.com/story/norwegian-air-transatlantic-speed-record/).
Apart from the retired [Concorde](https://en.wikipedia.org/wiki/Concorde).
For example, in 2018, [a transatlantic flight](https://www.wired.com/story/norwegian-air-transatlantic-speed-record/)
traveled at 770 mph due to a strong jet stream tailwind.
This means that any flight traveling at speeds greater than 800 mph is implausible,
and it may be worth checking flights traveling at greater than 600 or 700 mph.
Ground speed could also be used to identify aircraft flying implausibly slow.
Joining flights data with the air craft type in the `planes` table and getting
information about typical or top speeds of those aircraft could provide a more
detailed way to identify implausibly fast or slow flights.
Additional data on high altitude wind speeds at the time of the flight would further help.
Knowing the substance of the data analysis at hand is one of the most important
tools of a data scientist. The tools of statistics are a complement, not a
substitute, for that knowledge.
With that in mind, Let’s plot the distribution of the ground speed of flights.
The modal flight in this data has a ground speed of between 400 and 500 mph.
The distribution of ground speeds has a large left tail of slower flights below
400 mph constituting the majority.
There are very few flights with a ground speed over 500 mph.
```
flights %>%
mutate(mph = distance / (air_time / 60)) %>%
ggplot(aes(x = mph)) +
geom_histogram(binwidth = 10)
#> Warning: Removed 9430 rows containing non-finite values (stat_bin).
```
The fastest flight is the same one identified as the largest outlier earlier.
Its ground speed was 703 mph.
This is fast for a commercial jet, but not impossible.
```
flights %>%
mutate(mph = distance / (air_time / 60)) %>%
arrange(desc(mph)) %>%
select(mph, flight, carrier, flight, month, day, dep_time) %>%
head(5)
#> # A tibble: 5 x 6
#> mph flight carrier month day dep_time
#> <dbl> <int> <chr> <int> <int> <int>
#> 1 703. 1499 DL 5 25 1709
#> 2 650. 4667 EV 7 2 1558
#> 3 648 4292 EV 5 13 2040
#> 4 641. 3805 EV 3 23 1914
#> 5 591. 1902 DL 1 12 1559
```
One explanation for unusually fast flights is that they are “making up time” in the air by flying faster.
Commercial aircraft do not fly at their top speed since the airlines are also concerned about fuel consumption.
But, if a flight is delayed on the ground, it may fly faster than usual in order to avoid a late arrival.
So, I would expect that some of the unusually fast flights were delayed on departure.
```
flights %>%
mutate(mph = distance / (air_time / 60)) %>%
arrange(desc(mph)) %>%
select(
origin, dest, mph, year, month, day, dep_time, flight, carrier,
dep_delay, arr_delay
)
#> # A tibble: 336,776 x 11
#> origin dest mph year month day dep_time flight carrier dep_delay
#> <chr> <chr> <dbl> <int> <int> <int> <int> <int> <chr> <dbl>
#> 1 LGA ATL 703. 2013 5 25 1709 1499 DL 9
#> 2 EWR MSP 650. 2013 7 2 1558 4667 EV 45
#> 3 EWR GSP 648 2013 5 13 2040 4292 EV 15
#> 4 EWR BNA 641. 2013 3 23 1914 3805 EV 4
#> 5 LGA PBI 591. 2013 1 12 1559 1902 DL -1
#> 6 JFK SJU 564 2013 11 17 650 315 DL -5
#> # … with 336,770 more rows, and 1 more variable: arr_delay <dbl>
head(5)
#> [1] 5
```
Five of the top ten flights had departure delays, and three of those were
able to make up that time in the air and arrive ahead of schedule.
Overall, there were a few flights that seemed unusually fast, but they all
fall into the realm of plausibility and likely are not data entry problems.
\[Ed. Please correct me if I am missing something]
The second part of the question asks us to compare flights to the fastest flight
on a route to find the flights most delayed in the air. I will calculate the
amount a flight is delayed in air in two ways.
The first is the absolute delay, defined as the number of minutes longer than the fastest flight on that route,`air_time - min(air_time)`.
The second is the relative delay, which is the percentage increase in air time relative to the time of the fastest flight
along that route, `(air_time - min(air_time)) / min(air_time) * 100`.
```
air_time_delayed <-
flights %>%
group_by(origin, dest) %>%
mutate(
air_time_min = min(air_time, na.rm = TRUE),
air_time_delay = air_time - air_time_min,
air_time_delay_pct = air_time_delay / air_time_min * 100
)
#> Warning in min(air_time, na.rm = TRUE): no non-missing arguments to min;
#> returning Inf
```
The most delayed flight in air in minutes was DL841
from JFK to SFO which departed on
2013\-07\-28 at 17:27\. It took
189 minutes longer than the flight with the shortest
air time on its route.
```
air_time_delayed %>%
arrange(desc(air_time_delay)) %>%
select(
air_time_delay, carrier, flight,
origin, dest, year, month, day, dep_time,
air_time, air_time_min
) %>%
head() %>%
print(width = Inf)
#> # A tibble: 6 x 11
#> # Groups: origin, dest [5]
#> air_time_delay carrier flight origin dest year month day dep_time air_time
#> <dbl> <chr> <int> <chr> <chr> <int> <int> <int> <int> <dbl>
#> 1 189 DL 841 JFK SFO 2013 7 28 1727 490
#> 2 165 DL 426 JFK LAX 2013 11 22 1812 440
#> 3 163 AA 575 JFK EGE 2013 1 28 1806 382
#> 4 147 DL 17 JFK LAX 2013 7 10 1814 422
#> 5 145 UA 745 LGA DEN 2013 9 10 1513 331
#> 6 143 UA 587 EWR LAS 2013 11 22 2142 399
#> air_time_min
#> <dbl>
#> 1 301
#> 2 275
#> 3 219
#> 4 275
#> 5 186
#> 6 256
```
The most delayed flight in air as a percentage of the fastest flight along that
route was US2136
from LGA to BOS departing on 2013\-06\-17 at 16:52\.
It took 410% longer than the
flight with the shortest air time on its route.
```
air_time_delayed %>%
arrange(desc(air_time_delay)) %>%
select(
air_time_delay_pct, carrier, flight,
origin, dest, year, month, day, dep_time,
air_time, air_time_min
) %>%
head() %>%
print(width = Inf)
#> # A tibble: 6 x 11
#> # Groups: origin, dest [5]
#> air_time_delay_pct carrier flight origin dest year month day dep_time
#> <dbl> <chr> <int> <chr> <chr> <int> <int> <int> <int>
#> 1 62.8 DL 841 JFK SFO 2013 7 28 1727
#> 2 60 DL 426 JFK LAX 2013 11 22 1812
#> 3 74.4 AA 575 JFK EGE 2013 1 28 1806
#> 4 53.5 DL 17 JFK LAX 2013 7 10 1814
#> 5 78.0 UA 745 LGA DEN 2013 9 10 1513
#> 6 55.9 UA 587 EWR LAS 2013 11 22 2142
#> air_time air_time_min
#> <dbl> <dbl>
#> 1 490 301
#> 2 440 275
#> 3 382 219
#> 4 422 275
#> 5 331 186
#> 6 399 256
```
### Exercise 5\.7\.7
Find all destinations that are flown by at least two carriers.
Use that information to rank the carriers.
To restate this question, we are asked to rank airlines by the number of destinations that they fly to, considering only those airports that are flown to by two or more airlines.
There are two steps to calculating this ranking.
First, find all airports serviced by two or more carriers.
Then, rank carriers by the number of those destinations that they service.
```
flights %>%
# find all airports with > 1 carrier
group_by(dest) %>%
mutate(n_carriers = n_distinct(carrier)) %>%
filter(n_carriers > 1) %>%
# rank carriers by numer of destinations
group_by(carrier) %>%
summarize(n_dest = n_distinct(dest)) %>%
arrange(desc(n_dest))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 16 x 2
#> carrier n_dest
#> <chr> <int>
#> 1 EV 51
#> 2 9E 48
#> 3 UA 42
#> 4 DL 39
#> 5 B6 35
#> 6 AA 19
#> # … with 10 more rows
```
The carrier `"EV"` flies to the most destinations, considering only airports flown to by two or more carriers. What airline does the `"EV"` carrier code correspond to?
```
filter(airlines, carrier == "EV")
#> # A tibble: 1 x 2
#> carrier name
#> <chr> <chr>
#> 1 EV ExpressJet Airlines Inc.
```
Unless you know the airplane industry, it is likely that you don’t recognize [ExpressJet](https://en.wikipedia.org/wiki/ExpressJet); I certainly didn’t.
It is a regional airline that partners with major airlines to fly from hubs (larger airports) to smaller airports.
This means that many of the shorter flights of major carriers are operated by ExpressJet.
This business model explains why ExpressJet services the most destinations.
Among the airlines that fly to only one destination from New York are Alaska Airlines
and Hawaiian Airlines.
```
filter(airlines, carrier %in% c("AS", "F9", "HA"))
#> # A tibble: 3 x 2
#> carrier name
#> <chr> <chr>
#> 1 AS Alaska Airlines Inc.
#> 2 F9 Frontier Airlines Inc.
#> 3 HA Hawaiian Airlines Inc.
```
### Exercise 5\.7\.8
For each plane, count the number of flights before the first delay of greater than 1 hour.
The question does not specify arrival or departure delay.
I consider `dep_delay` in this answer, though similar code could be used for `arr_delay`.
```
flights %>%
# sort in increasing order
select(tailnum, year, month,day, dep_delay) %>%
filter(!is.na(dep_delay)) %>%
arrange(tailnum, year, month, day) %>%
group_by(tailnum) %>%
# cumulative number of flights delayed over one hour
mutate(cumulative_hr_delays = cumsum(dep_delay > 60)) %>%
# count the number of flights == 0
summarise(total_flights = sum(cumulative_hr_delays < 1)) %>%
arrange(total_flights)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 4,037 x 2
#> tailnum total_flights
#> <chr> <int>
#> 1 D942DN 0
#> 2 N10575 0
#> 3 N11106 0
#> 4 N11109 0
#> 5 N11187 0
#> 6 N11199 0
#> # … with 4,031 more rows
```
### Exercise 5\.7\.1
Refer back to the lists of useful mutate and filtering functions.
Describe how each operation changes when you combine it with grouping.
Summary functions (`mean()`), offset functions (`lead()`, `lag()`), and ranking functions (`min_rank()`, `row_number()`) operate within each group when used with `group_by()` in
`mutate()` or `filter()`.
Arithmetic operators (`+`, `-`), logical operators (`<`, `==`), modular arithmetic operators (`%%`, `%/%`), and logarithmic functions (`log()`) are not affected by `group_by()`.
Summary functions like `mean()`, `median()`, `sum()`, `sd()`, and others covered
in the section [Useful Summary Functions](https://r4ds.had.co.nz/transform.html#summarise-funs)
calculate their values within each group when used with `mutate()` or `filter()` after `group_by()`.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(x_mean = mean(x)) %>%
group_by(group) %>%
mutate(x_mean_2 = mean(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group x_mean x_mean_2
#> <int> <chr> <dbl> <dbl>
#> 1 1 a 5 2
#> 2 2 a 5 2
#> 3 3 a 5 2
#> 4 4 b 5 5
#> 5 5 b 5 5
#> 6 6 b 5 5
#> # … with 3 more rows
```
Arithmetic operators `+`, `-`, `*`, `/`, `^` are not affected by `group_by()`.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(y = x + 2) %>%
group_by(group) %>%
mutate(z = x + 2)
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group y z
#> <int> <chr> <dbl> <dbl>
#> 1 1 a 3 3
#> 2 2 a 4 4
#> 3 3 a 5 5
#> 4 4 b 6 6
#> 5 5 b 7 7
#> 6 6 b 8 8
#> # … with 3 more rows
```
The modular arithmetic operators `%/%` and `%%` are not affected by `group_by()`
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(y = x %% 2) %>%
group_by(group) %>%
mutate(z = x %% 2)
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group y z
#> <int> <chr> <dbl> <dbl>
#> 1 1 a 1 1
#> 2 2 a 0 0
#> 3 3 a 1 1
#> 4 4 b 0 0
#> 5 5 b 1 1
#> 6 6 b 0 0
#> # … with 3 more rows
```
The logarithmic functions `log()`, `log2()`, and `log10()` are not affected by
`group_by()`.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(y = log(x)) %>%
group_by(group) %>%
mutate(z = log(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group y z
#> <int> <chr> <dbl> <dbl>
#> 1 1 a 0 0
#> 2 2 a 0.693 0.693
#> 3 3 a 1.10 1.10
#> 4 4 b 1.39 1.39
#> 5 5 b 1.61 1.61
#> 6 6 b 1.79 1.79
#> # … with 3 more rows
```
The offset functions `lead()` and `lag()` respect the groupings in `group_by()`: they only use values from within each group.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
group_by(group) %>%
mutate(lag_x = lag(x),
lead_x = lead(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group lag_x lead_x
#> <int> <chr> <int> <int>
#> 1 1 a NA 2
#> 2 2 a 1 3
#> 3 3 a 2 NA
#> 4 4 b NA 5
#> 5 5 b 4 6
#> 6 6 b 5 NA
#> # … with 3 more rows
```
The cumulative and rolling aggregate functions `cumsum()`, `cumprod()`, `cummin()`, `cummax()`, and `cummean()` calculate values within each group.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(x_cumsum = cumsum(x)) %>%
group_by(group) %>%
mutate(x_cumsum_2 = cumsum(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group x_cumsum x_cumsum_2
#> <int> <chr> <int> <int>
#> 1 1 a 1 1
#> 2 2 a 3 3
#> 3 3 a 6 6
#> 4 4 b 10 4
#> 5 5 b 15 9
#> 6 6 b 21 15
#> # … with 3 more rows
```
Logical comparisons, `<`, `<=`, `>`, `>=`, `!=`, and `==` are not affected by `group_by()`.
```
tibble(x = 1:9,
y = 9:1,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(x_lte_y = x <= y) %>%
group_by(group) %>%
mutate(x_lte_y_2 = x <= y)
#> # A tibble: 9 x 5
#> # Groups: group [3]
#> x y group x_lte_y x_lte_y_2
#> <int> <int> <chr> <lgl> <lgl>
#> 1 1 9 a TRUE TRUE
#> 2 2 8 a TRUE TRUE
#> 3 3 7 a TRUE TRUE
#> 4 4 6 b TRUE TRUE
#> 5 5 5 b TRUE TRUE
#> 6 6 4 b FALSE FALSE
#> # … with 3 more rows
```
Ranking functions like `min_rank()` work within each group when used with `group_by()`.
```
tibble(x = 1:9,
group = rep(c("a", "b", "c"), each = 3)) %>%
mutate(rnk = min_rank(x)) %>%
group_by(group) %>%
mutate(rnk2 = min_rank(x))
#> # A tibble: 9 x 4
#> # Groups: group [3]
#> x group rnk rnk2
#> <int> <chr> <int> <int>
#> 1 1 a 1 1
#> 2 2 a 2 2
#> 3 3 a 3 3
#> 4 4 b 4 1
#> 5 5 b 5 2
#> 6 6 b 6 3
#> # … with 3 more rows
```
Though not asked in the question, note that `arrange()` ignores groups when sorting values.
```
tibble(x = runif(9),
group = rep(c("a", "b", "c"), each = 3)) %>%
group_by(group) %>%
arrange(x)
#> # A tibble: 9 x 2
#> # Groups: group [3]
#> x group
#> <dbl> <chr>
#> 1 0.00740 b
#> 2 0.0808 a
#> 3 0.157 b
#> 4 0.290 c
#> 5 0.466 b
#> 6 0.498 c
#> # … with 3 more rows
```
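If sorting within groups is actually what you want, `arrange()` accepts a `.by_group` argument. The following is a small illustration of that standard dplyr argument; it is not used elsewhere in these solutions.
```
tibble(x = runif(9),
       group = rep(c("a", "b", "c"), each = 3)) %>%
  group_by(group) %>%
  # sort by group first, then by x within each group
  arrange(x, .by_group = TRUE)
```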
However, the order of values from `arrange()` can interact with groups when
used with functions that rely on the ordering of elements, such as `lead()`, `lag()`,
or `cumsum()`.
```
tibble(group = rep(c("a", "b", "c"), each = 3),
x = runif(9)) %>%
group_by(group) %>%
arrange(x) %>%
mutate(lag_x = lag(x))
#> # A tibble: 9 x 3
#> # Groups: group [3]
#> group x lag_x
#> <chr> <dbl> <dbl>
#> 1 b 0.0342 NA
#> 2 c 0.0637 NA
#> 3 a 0.175 NA
#> 4 c 0.196 0.0637
#> 5 b 0.320 0.0342
#> 6 b 0.402 0.320
#> # … with 3 more rows
```
### Exercise 5\.7\.2
Which plane (`tailnum`) has the worst on\-time record?
The question does not define a way to measure on\-time record, so I will consider two metrics:
1. proportion of flights not delayed or cancelled, and
2. mean arrival delay.
The first metric is the proportion of not\-cancelled and on\-time flights.
I use the presence of an arrival time as an indicator that a flight was not cancelled.
However, there are many planes that have never flown an on\-time flight.
Additionally, many of the planes that have the lowest proportion of on\-time flights have only flown a small number of flights.
```
flights %>%
filter(!is.na(tailnum)) %>%
mutate(on_time = !is.na(arr_time) & (arr_delay <= 0)) %>%
group_by(tailnum) %>%
summarise(on_time = mean(on_time), n = n()) %>%
filter(min_rank(on_time) == 1)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 110 x 3
#> tailnum on_time n
#> <chr> <dbl> <int>
#> 1 N121DE 0 2
#> 2 N136DL 0 1
#> 3 N143DA 0 1
#> 4 N17627 0 2
#> 5 N240AT 0 5
#> 6 N26906 0 1
#> # … with 104 more rows
```
So, I will only consider planes that flew at least 20 flights.
The threshold of 20 was chosen because it is a round number near the first quartile of the number of flights per plane.[5](#fn5)[6](#fn6)
```
quantile(count(flights, tailnum)$n)
#> 0% 25% 50% 75% 100%
#> 1 23 54 110 2512
```
The plane with the worst on time record that flew at least 20 flights is:
```
flights %>%
filter(!is.na(tailnum), is.na(arr_time) | !is.na(arr_delay)) %>%
mutate(on_time = !is.na(arr_time) & (arr_delay <= 0)) %>%
group_by(tailnum) %>%
summarise(on_time = mean(on_time), n = n()) %>%
filter(n >= 20) %>%
filter(min_rank(on_time) == 1)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 1 x 3
#> tailnum on_time n
#> <chr> <dbl> <int>
#> 1 N988AT 0.189 37
```
There are cases where `arr_delay` is missing but `arr_time` is not missing.
I have not debugged the cause of this bad data, so these rows are dropped for
the purposes of this exercise.
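A quick count shows how common these rows are (a small check I am adding here, not part of the original calculation):
```
flights %>%
  # flights that arrived but have no recorded arrival delay
  filter(!is.na(arr_time), is.na(arr_delay)) %>%
  count()
```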
The second metric is the mean minutes delayed.
As with the previous metric, I will only consider planes which flew at least 20 flights.
A different plane has the worst on\-time record when measured as average minutes delayed.
```
flights %>%
filter(!is.na(arr_delay)) %>%
group_by(tailnum) %>%
summarise(arr_delay = mean(arr_delay), n = n()) %>%
filter(n >= 20) %>%
filter(min_rank(desc(arr_delay)) == 1)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 1 x 3
#> tailnum arr_delay n
#> <chr> <dbl> <int>
#> 1 N203FR 59.1 41
```
### Exercise 5\.7\.3
What time of day should you fly if you want to avoid delays as much as possible?
Let’s group by the hour of the flight.
The earlier the flight is scheduled, the lower its expected delay.
This is intuitive as delays will affect later flights.
Morning flights have fewer (if any) previous flights that can delay them.
```
flights %>%
group_by(hour) %>%
summarise(arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
arrange(arr_delay)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 20 x 2
#> hour arr_delay
#> <dbl> <dbl>
#> 1 7 -5.30
#> 2 5 -4.80
#> 3 6 -3.38
#> 4 9 -1.45
#> 5 8 -1.11
#> 6 10 0.954
#> # … with 14 more rows
```
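A quick plot of the same summary makes the pattern easier to see (a sketch using the same columns as the table above):
```
flights %>%
  group_by(hour) %>%
  summarise(arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
  # mean arrival delay by scheduled departure hour
  ggplot(aes(x = hour, y = arr_delay)) +
  geom_point() +
  geom_line() +
  labs(x = "Scheduled departure hour", y = "Mean arrival delay (minutes)")
```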
### Exercise 5\.7\.4
For each destination, compute the total minutes of delay.
For each flight, compute the proportion of the total delay for its destination.
The key to answering this question is to only include delayed flights when calculating the total delay and proportion of delay.
```
flights %>%
filter(arr_delay > 0) %>%
group_by(dest) %>%
mutate(
arr_delay_total = sum(arr_delay),
arr_delay_prop = arr_delay / arr_delay_total
) %>%
select(dest, month, day, dep_time, carrier, flight,
arr_delay, arr_delay_prop) %>%
arrange(dest, desc(arr_delay_prop))
#> # A tibble: 133,004 x 8
#> # Groups: dest [103]
#> dest month day dep_time carrier flight arr_delay arr_delay_prop
#> <chr> <int> <int> <int> <chr> <int> <dbl> <dbl>
#> 1 ABQ 7 22 2145 B6 1505 153 0.0341
#> 2 ABQ 12 14 2223 B6 65 149 0.0332
#> 3 ABQ 10 15 2146 B6 65 138 0.0308
#> 4 ABQ 7 23 2206 B6 1505 137 0.0305
#> 5 ABQ 12 17 2220 B6 65 136 0.0303
#> 6 ABQ 7 10 2025 B6 1505 126 0.0281
#> # … with 132,998 more rows
```
There is some ambiguity in the meaning of the term *flights* in the question.
The first example defined a flight as a row in the `flights` table, which is a trip by an aircraft from an airport at a particular date and time.
However, *flight* could also refer to the [flight number](https://en.wikipedia.org/wiki/Flight_number), which is the code a carrier uses for an airline service of a route.
For example, `AA1` is the flight number of the 09:00 American Airlines flight between JFK and LAX.
The flight number is contained in the `flights$flight` column, though what is called a “flight” is a combination of the `flights$carrier` and `flights$flight` columns.
```
flights %>%
filter(arr_delay > 0) %>%
group_by(dest, origin, carrier, flight) %>%
summarise(arr_delay = sum(arr_delay)) %>%
group_by(dest) %>%
mutate(
arr_delay_prop = arr_delay / sum(arr_delay)
) %>%
arrange(dest, desc(arr_delay_prop)) %>%
select(carrier, flight, origin, dest, arr_delay_prop)
#> `summarise()` regrouping output by 'dest', 'origin', 'carrier' (override with `.groups` argument)
#> # A tibble: 8,834 x 5
#> # Groups: dest [103]
#> carrier flight origin dest arr_delay_prop
#> <chr> <int> <chr> <chr> <dbl>
#> 1 B6 1505 JFK ABQ 0.567
#> 2 B6 65 JFK ABQ 0.433
#> 3 B6 1191 JFK ACK 0.475
#> 4 B6 1491 JFK ACK 0.414
#> 5 B6 1291 JFK ACK 0.0898
#> 6 B6 1195 JFK ACK 0.0208
#> # … with 8,828 more rows
```
### Exercise 5\.7\.5
Delays are typically temporally correlated: even once the problem that caused the initial delay has been resolved, later flights are delayed to allow earlier flights to leave. Using `lag()` explore how the delay of a flight is related to the delay of the immediately preceding flight.
This calculates the departure delay of the preceding flight from the same airport.
```
lagged_delays <- flights %>%
arrange(origin, month, day, dep_time) %>%
group_by(origin) %>%
mutate(dep_delay_lag = lag(dep_delay)) %>%
filter(!is.na(dep_delay), !is.na(dep_delay_lag))
```
This plots the mean departure delay of a flight against the departure delay of the immediately preceding flight.
For delays of less than two hours, the relationship between the delay of the preceding flight and the current flight is nearly linear.
After that, the relationship becomes more variable, as long\-delayed flights are interspersed with flights leaving on time.
After about eight hours, a delayed flight is likely to be followed by a flight leaving on time.
```
lagged_delays %>%
group_by(dep_delay_lag) %>%
summarise(dep_delay_mean = mean(dep_delay)) %>%
ggplot(aes(y = dep_delay_mean, x = dep_delay_lag)) +
geom_point() +
scale_x_continuous(breaks = seq(0, 1500, by = 120)) +
labs(y = "Departure Delay", x = "Previous Departure Delay")
```
The overall relationship looks similar in all three origin airports.
```
lagged_delays %>%
group_by(origin, dep_delay_lag) %>%
summarise(dep_delay_mean = mean(dep_delay)) %>%
ggplot(aes(y = dep_delay_mean, x = dep_delay_lag)) +
geom_point() +
facet_wrap(~ origin, ncol=1) +
labs(y = "Departure Delay", x = "Previous Departure Delay")
#> `summarise()` regrouping output by 'origin' (override with `.groups` argument)
```
### Exercise 5\.7\.6
Look at each destination. Can you find flights that are suspiciously fast?
(i.e. flights that represent a potential data entry error).
Compute the air time of a flight relative to the shortest flight to that destination.
Which flights were most delayed in the air?
When calculating this answer we should only compare flights within the same (origin, destination) pair.
To find unusual observations, we need to first put them on the same scale.
I will [standardize](https://en.wikipedia.org/wiki/Standard_score)
values by subtracting the mean from each and then dividing each by the standard deviation.
\\\[
\\mathsf{standardized}(x) \= \\frac{x \- \\mathsf{mean}(x)}{\\mathsf{sd}(x)} .
\\]
A standardized variable is often called a \\(z\\)\-score.
The units of the standardized variable are standard deviations from the mean.
This will put the flight times from different routes on the same scale.
The larger the magnitude of the standardized variable for an observation, the more unusual the observation is.
Flights with negative values of the standardized variable are faster than the
mean flight for that route, while those with positive values are slower than
the mean flight for that route.
```
standardized_flights <- flights %>%
filter(!is.na(air_time)) %>%
group_by(dest, origin) %>%
mutate(
air_time_mean = mean(air_time),
air_time_sd = sd(air_time),
n = n()
) %>%
ungroup() %>%
mutate(air_time_standard = (air_time - air_time_mean) / (air_time_sd + 1))
```
I add 1 to the denominator to avoid dividing by zero.
Note that the `ungroup()` here is not strictly necessary for this calculation.
However, I will be using this data frame later, and through experience I have found
that I have fewer bugs when I keep a data frame grouped only for the verbs that need it;
a leftover grouping can silently change how later verbs behave.
It is better to err on the side of calling `ungroup()` once a grouping is no longer needed.
The distribution of the standardized air times has a long right tail.
```
ggplot(standardized_flights, aes(x = air_time_standard)) +
geom_density()
#> Warning: Removed 4 rows containing non-finite values (stat_density).
```
Unusually fast flights are those flights with the smallest standardized values.
```
standardized_flights %>%
arrange(air_time_standard) %>%
select(
carrier, flight, origin, dest, month, day,
air_time, air_time_mean, air_time_standard
) %>%
head(10) %>%
print(width = Inf)
#> # A tibble: 10 x 9
#> carrier flight origin dest month day air_time air_time_mean
#> <chr> <int> <chr> <chr> <int> <int> <dbl> <dbl>
#> 1 DL 1499 LGA ATL 5 25 65 114.
#> 2 EV 4667 EWR MSP 7 2 93 151.
#> 3 EV 4292 EWR GSP 5 13 55 93.2
#> 4 EV 3805 EWR BNA 3 23 70 115.
#> 5 EV 4687 EWR CVG 9 29 62 96.1
#> 6 B6 2002 JFK BUF 11 10 38 57.1
#> air_time_standard
#> <dbl>
#> 1 -4.56
#> 2 -4.46
#> 3 -4.20
#> 4 -3.73
#> 5 -3.60
#> 6 -3.38
#> # … with 4 more rows
```
I used `width = Inf` to ensure that all columns will be printed.
The fastest flight is DL1499 from LGA to
ATL which departed on
2013\-05\-25 at 17:09\.
It has an air time of 65 minutes, compared to an average
flight time of 114 minutes for its route.
This is 4\.6 standard deviations below
the average flight on its route.
It is important to note that this does not necessarily imply that there was a data entry error.
We should check these flights to see whether there was some reason for the difference.
It may be that we are missing some piece of information that explains these unusual times.
A potential issue with the way that we standardized the flights is that the mean and standard deviation used to calculate the standardized values are themselves sensitive to outliers, and outliers are exactly what we are looking for.
Instead of standardizing with the mean and standard deviation, we could use the median as a measure of central tendency and the interquartile range (IQR) as a measure of spread.
The median and IQR are more [resistant to outliers](https://en.wikipedia.org/wiki/Robust_statistics) than the mean and standard deviation, so the following code standardizes air times with them instead.
```
standardized_flights2 <- flights %>%
filter(!is.na(air_time)) %>%
group_by(dest, origin) %>%
mutate(
air_time_median = median(air_time),
air_time_iqr = IQR(air_time),
n = n(),
air_time_standard = (air_time - air_time_median) / air_time_iqr)
```
The distribution of the standardized air times using this new definition
also has a long right tail of slow flights.
```
ggplot(standardized_flights2, aes(x = air_time_standard)) +
geom_density()
#> Warning: Removed 4 rows containing non-finite values (stat_density).
```
Unusually fast flights are those flights with the smallest standardized values.
```
standardized_flights2 %>%
arrange(air_time_standard) %>%
select(
carrier, flight, origin, dest, month, day, air_time,
air_time_median, air_time_standard
) %>%
head(10) %>%
print(width = Inf)
#> # A tibble: 10 x 9
#> # Groups: dest, origin [10]
#> carrier flight origin dest month day air_time air_time_median
#> <chr> <int> <chr> <chr> <int> <int> <dbl> <dbl>
#> 1 EV 4667 EWR MSP 7 2 93 149
#> 2 DL 1499 LGA ATL 5 25 65 112
#> 3 US 2132 LGA BOS 3 2 21 37
#> 4 B6 30 JFK ROC 3 25 35 51
#> 5 B6 2002 JFK BUF 11 10 38 57
#> 6 EV 4292 EWR GSP 5 13 55 92
#> air_time_standard
#> <dbl>
#> 1 -3.5
#> 2 -3.36
#> 3 -3.2
#> 4 -3.2
#> 5 -3.17
#> 6 -3.08
#> # … with 4 more rows
```
All of these answers have relied only on using a distribution of comparable observations to find unusual observations.
In this case, the comparable observations were flights from the same origin to the same destination.
Apart from our knowledge that flights from the same origin to the same destination should have similar air times, we have not used any other domain\-specific knowledge.
But we know much more about this problem.
The most obvious piece of knowledge is that flights cannot travel back in time, so there should never be a flight with a negative air time.
But we also know that aircraft have maximum speeds.
While different aircraft have different [cruising speeds](https://en.wikipedia.org/wiki/Cruise_(aeronautics)), commercial airliners
typically cruise at air speeds around 547–575 mph.
Calculating the ground speed of aircraft is complicated by the influence of winds, especially jet streams, on the ground speed of flights.
A strong tailwind can increase the ground speed of an aircraft by [200 mph](https://www.wired.com/story/norwegian-air-transatlantic-speed-record/).
For example, in 2018, [a transatlantic flight](https://www.wired.com/story/norwegian-air-transatlantic-speed-record/)
reached a ground speed of 770 mph due to a strong jet stream tailwind, making it, apart from the retired [Concorde](https://en.wikipedia.org/wiki/Concorde), one of the fastest transatlantic crossings.
This means that any flight traveling at speeds greater than 800 mph is implausible,
and it may be worth checking flights traveling at greater than 600 or 700 mph.
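As a rough check along those lines, we can count how many flights exceed various ground-speed thresholds (a sketch; `mph` is computed from `distance` and `air_time` the same way as in the plots below):
```
flights %>%
  mutate(mph = distance / (air_time / 60)) %>%
  # how many flights exceed each speed threshold
  summarise(
    over_500 = sum(mph > 500, na.rm = TRUE),
    over_600 = sum(mph > 600, na.rm = TRUE),
    over_700 = sum(mph > 700, na.rm = TRUE)
  )
```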
Ground speed could also be used to identify aircraft flying implausibly slow.
Joining the flights data with the aircraft type in the `planes` table and getting
information about typical or top speeds of those aircraft could provide a more
detailed way to identify implausibly fast or slow flights.
Additional data on high altitude wind speeds at the time of the flight would further help.
Knowing the substance of the data analysis at hand is one of the most important
tools of a data scientist. The tools of statistics are a complement, not a
substitute, for that knowledge.
With that in mind, let’s plot the distribution of the ground speed of flights.
The modal flight in this data has a ground speed of between 400 and 500 mph.
The distribution of ground speeds has a large left tail of slower flights below 400 mph, which make up the majority of flights.
There are very few flights with a ground speed over 500 mph.
```
flights %>%
mutate(mph = distance / (air_time / 60)) %>%
ggplot(aes(x = mph)) +
geom_histogram(binwidth = 10)
#> Warning: Removed 9430 rows containing non-finite values (stat_bin).
```
The fastest flight is the same one identified as the largest outlier earlier.
Its ground speed was 703 mph.
This is fast for a commercial jet, but not impossible.
```
flights %>%
mutate(mph = distance / (air_time / 60)) %>%
arrange(desc(mph)) %>%
select(mph, flight, carrier, flight, month, day, dep_time) %>%
head(5)
#> # A tibble: 5 x 6
#> mph flight carrier month day dep_time
#> <dbl> <int> <chr> <int> <int> <int>
#> 1 703. 1499 DL 5 25 1709
#> 2 650. 4667 EV 7 2 1558
#> 3 648 4292 EV 5 13 2040
#> 4 641. 3805 EV 3 23 1914
#> 5 591. 1902 DL 1 12 1559
```
One explanation for unusually fast flights is that they are “making up time” in the air by flying faster.
Commercial aircraft do not fly at their top speed since the airlines are also concerned about fuel consumption.
But, if a flight is delayed on the ground, it may fly faster than usual in order to avoid a late arrival.
So, I would expect that some of the unusually fast flights were delayed on departure.
```
flights %>%
mutate(mph = distance / (air_time / 60)) %>%
arrange(desc(mph)) %>%
select(
origin, dest, mph, year, month, day, dep_time, flight, carrier,
dep_delay, arr_delay
)
#> # A tibble: 336,776 x 11
#> origin dest mph year month day dep_time flight carrier dep_delay
#> <chr> <chr> <dbl> <int> <int> <int> <int> <int> <chr> <dbl>
#> 1 LGA ATL 703. 2013 5 25 1709 1499 DL 9
#> 2 EWR MSP 650. 2013 7 2 1558 4667 EV 45
#> 3 EWR GSP 648 2013 5 13 2040 4292 EV 15
#> 4 EWR BNA 641. 2013 3 23 1914 3805 EV 4
#> 5 LGA PBI 591. 2013 1 12 1559 1902 DL -1
#> 6 JFK SJU 564 2013 11 17 650 315 DL -5
#> # … with 336,770 more rows, and 1 more variable: arr_delay <dbl>
```
Five of the top ten flights had departure delays, and three of those were
able to make up that time in the air and arrive ahead of schedule.
Overall, there were a few flights that seemed unusually fast, but they all
fall into the realm of plausibility and likely are not data entry problems.
\[Ed. Please correct me if I am missing something]
The second part of the question asks us to compare flights to the fastest flight
on a route to find the flights most delayed in the air. I will calculate the
amount a flight is delayed in the air in two ways.
The first is the absolute delay, defined as the number of minutes longer than the fastest flight on that route, `air_time - min(air_time)`.
The second is the relative delay, which is the percentage increase in air time relative to the time of the fastest flight
along that route, `(air_time - min(air_time)) / min(air_time) * 100`.
```
air_time_delayed <-
flights %>%
group_by(origin, dest) %>%
mutate(
air_time_min = min(air_time, na.rm = TRUE),
air_time_delay = air_time - air_time_min,
air_time_delay_pct = air_time_delay / air_time_min * 100
)
#> Warning in min(air_time, na.rm = TRUE): no non-missing arguments to min;
#> returning Inf
```
The most delayed flight in the air, measured in minutes, was DL841
from JFK to SFO which departed on
2013\-07\-28 at 17:27\. It took
189 minutes longer than the flight with the shortest
air time on its route.
```
air_time_delayed %>%
arrange(desc(air_time_delay)) %>%
select(
air_time_delay, carrier, flight,
origin, dest, year, month, day, dep_time,
air_time, air_time_min
) %>%
head() %>%
print(width = Inf)
#> # A tibble: 6 x 11
#> # Groups: origin, dest [5]
#> air_time_delay carrier flight origin dest year month day dep_time air_time
#> <dbl> <chr> <int> <chr> <chr> <int> <int> <int> <int> <dbl>
#> 1 189 DL 841 JFK SFO 2013 7 28 1727 490
#> 2 165 DL 426 JFK LAX 2013 11 22 1812 440
#> 3 163 AA 575 JFK EGE 2013 1 28 1806 382
#> 4 147 DL 17 JFK LAX 2013 7 10 1814 422
#> 5 145 UA 745 LGA DEN 2013 9 10 1513 331
#> 6 143 UA 587 EWR LAS 2013 11 22 2142 399
#> air_time_min
#> <dbl>
#> 1 301
#> 2 275
#> 3 219
#> 4 275
#> 5 186
#> 6 256
```
The most delayed flight in the air, as a percentage of the fastest flight along that
route was US2136
from LGA to BOS departing on 2013\-06\-17 at 16:52\.
It took 410% longer than the
flight with the shortest air time on its route.
```
air_time_delayed %>%
  # sort by the relative (percentage) delay rather than the absolute delay
  arrange(desc(air_time_delay_pct)) %>%
  select(
    air_time_delay_pct, carrier, flight,
    origin, dest, year, month, day, dep_time,
    air_time, air_time_min
  ) %>%
  head() %>%
  print(width = Inf)
```
### Exercise 5\.7\.7
Find all destinations that are flown by at least two carriers.
Use that information to rank the carriers.
To restate this question, we are asked to rank airlines by the number of destinations that they fly to, considering only those airports that are flown to by two or more airlines.
There are two steps to calculating this ranking.
First, find all airports serviced by two or more carriers.
Then, rank carriers by the number of those destinations that they service.
```
flights %>%
# find all airports with > 1 carrier
group_by(dest) %>%
mutate(n_carriers = n_distinct(carrier)) %>%
filter(n_carriers > 1) %>%
  # rank carriers by number of destinations
group_by(carrier) %>%
summarize(n_dest = n_distinct(dest)) %>%
arrange(desc(n_dest))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 16 x 2
#> carrier n_dest
#> <chr> <int>
#> 1 EV 51
#> 2 9E 48
#> 3 UA 42
#> 4 DL 39
#> 5 B6 35
#> 6 AA 19
#> # … with 10 more rows
```
The carrier `"EV"` flies to the most destinations, considering only airports flown to by two or more carriers. What airline does the `"EV"` carrier code correspond to?
```
filter(airlines, carrier == "EV")
#> # A tibble: 1 x 2
#> carrier name
#> <chr> <chr>
#> 1 EV ExpressJet Airlines Inc.
```
Unless you know the airline industry, it is likely that you don’t recognize [ExpressJet](https://en.wikipedia.org/wiki/ExpressJet); I certainly didn’t.
It is a regional airline that partners with major airlines to fly from hubs (larger airports) to smaller airports.
This means that many of the shorter flights of major carriers are operated by ExpressJet.
This business model explains why ExpressJet services the most destinations.
Among the airlines that fly to only one destination from New York are Alaska Airlines
and Hawaiian Airlines.
```
filter(airlines, carrier %in% c("AS", "F9", "HA"))
#> # A tibble: 3 x 2
#> carrier name
#> <chr> <chr>
#> 1 AS Alaska Airlines Inc.
#> 2 F9 Frontier Airlines Inc.
#> 3 HA Hawaiian Airlines Inc.
```
### Exercise 5\.7\.8
For each plane, count the number of flights before the first delay of greater than 1 hour.
The question does not specify arrival or departure delay.
I consider `dep_delay` in this answer, though similar code could be used for `arr_delay`.
```
flights %>%
  select(tailnum, year, month, day, dep_delay) %>%
  filter(!is.na(dep_delay)) %>%
  # sort each plane's flights in date order
  arrange(tailnum, year, month, day) %>%
  group_by(tailnum) %>%
  # cumulative number of flights delayed over one hour
  mutate(cumulative_hr_delays = cumsum(dep_delay > 60)) %>%
  # count flights before the first hour-long delay (cumulative count still zero)
summarise(total_flights = sum(cumulative_hr_delays < 1)) %>%
arrange(total_flights)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 4,037 x 2
#> tailnum total_flights
#> <chr> <int>
#> 1 D942DN 0
#> 2 N10575 0
#> 3 N11106 0
#> 4 N11109 0
#> 5 N11187 0
#> 6 N11199 0
#> # … with 4,031 more rows
```
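For completeness, the analogous calculation for arrival delays mentioned above only swaps the delay column (a sketch):
```
flights %>%
  select(tailnum, year, month, day, arr_delay) %>%
  filter(!is.na(arr_delay)) %>%
  arrange(tailnum, year, month, day) %>%
  group_by(tailnum) %>%
  # cumulative number of flights with arrival delays over one hour
  mutate(cumulative_hr_delays = cumsum(arr_delay > 60)) %>%
  summarise(total_flights = sum(cumulative_hr_delays < 1)) %>%
  arrange(total_flights)
```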
| Data Science |
jrnold.github.io | https://jrnold.github.io/r4ds-exercise-solutions/exploratory-data-analysis.html |
7 Exploratory Data Analysis
===========================
7\.1 Introduction
-----------------
This will also use data from the **nycflights13** package.
The **ggbeeswarm**, **lvplot**, and **ggstance** packages provide some additional functions used in some solutions.
```
library("tidyverse")
library("nycflights13")
library("ggbeeswarm")
library("lvplot")
library("ggstance")
```
7\.2 Questions
--------------
7\.3 Variation
--------------
### Exercise 7\.3\.1
Explore the distribution of each of the `x`, `y`, and `z` variables in `diamonds`. What do you learn?
Think about a diamond and how you might decide which dimension is the length, width, and depth.
First, I’ll calculate summary statistics for these variables and plot their distributions.
```
summary(select(diamonds, x, y, z))
#> x y z
#> Min. : 0.00 Min. : 0.0 Min. : 0.0
#> 1st Qu.: 4.71 1st Qu.: 4.7 1st Qu.: 2.9
#> Median : 5.70 Median : 5.7 Median : 3.5
#> Mean : 5.73 Mean : 5.7 Mean : 3.5
#> 3rd Qu.: 6.54 3rd Qu.: 6.5 3rd Qu.: 4.0
#> Max. :10.74 Max. :58.9 Max. :31.8
```
```
ggplot(diamonds) +
geom_histogram(mapping = aes(x = x), binwidth = 0.01)
```
```
ggplot(diamonds) +
geom_histogram(mapping = aes(x = y), binwidth = 0.01)
```
```
ggplot(diamonds) +
geom_histogram(mapping = aes(x = z), binwidth = 0.01)
```
There are several noticeable features of the distributions:
1. `x` and `y` are larger than `z`,
2. there are outliers,
3. they are all right skewed, and
4. they are multimodal or “spiky”.
The typical values of `x` and `y` are larger than `z`, with `x` and `y` having inter\-quartile
ranges of 4\.7–6\.5, while `z` has an inter\-quartile range of 2\.9–4\.0\.
There are two types of outliers in this data.
Some diamonds have values of zero and some have abnormally large values of `x`, `y`, or `z`.
```
summary(select(diamonds, x, y, z))
#> x y z
#> Min. : 0.00 Min. : 0.0 Min. : 0.0
#> 1st Qu.: 4.71 1st Qu.: 4.7 1st Qu.: 2.9
#> Median : 5.70 Median : 5.7 Median : 3.5
#> Mean : 5.73 Mean : 5.7 Mean : 3.5
#> 3rd Qu.: 6.54 3rd Qu.: 6.5 3rd Qu.: 4.0
#> Max. :10.74 Max. :58.9 Max. :31.8
```
These appear to be either data entry errors, or an undocumented convention in the dataset for indicating missing values. An alternative hypothesis would be that values of zero are the
result of rounding values like `0.002` down, but since there are no diamonds with values of 0\.01, that does not seem to be the case.
```
filter(diamonds, x == 0 | y == 0 | z == 0)
#> # A tibble: 20 x 10
#> carat cut color clarity depth table price x y z
#> <dbl> <ord> <ord> <ord> <dbl> <dbl> <int> <dbl> <dbl> <dbl>
#> 1 1 Premium G SI2 59.1 59 3142 6.55 6.48 0
#> 2 1.01 Premium H I1 58.1 59 3167 6.66 6.6 0
#> 3 1.1 Premium G SI2 63 59 3696 6.5 6.47 0
#> 4 1.01 Premium F SI2 59.2 58 3837 6.5 6.47 0
#> 5 1.5 Good G I1 64 61 4731 7.15 7.04 0
#> 6 1.07 Ideal F SI2 61.6 56 4954 0 6.62 0
#> # … with 14 more rows
```
There are also some diamonds with values of `y` and `z` that are abnormally large.
There are diamonds with `y == 58.9` and `y == 31.8`, and one with `z == 31.8`.
These are probably data errors since the values do not seem in line with the values of
the other variables.
```
diamonds %>%
arrange(desc(y)) %>%
head()
#> # A tibble: 6 x 10
#> carat cut color clarity depth table price x y z
#> <dbl> <ord> <ord> <ord> <dbl> <dbl> <int> <dbl> <dbl> <dbl>
#> 1 2 Premium H SI2 58.9 57 12210 8.09 58.9 8.06
#> 2 0.51 Ideal E VS1 61.8 55 2075 5.15 31.8 5.12
#> 3 5.01 Fair J I1 65.5 59 18018 10.7 10.5 6.98
#> 4 4.5 Fair J I1 65.8 58 18531 10.2 10.2 6.72
#> 5 4.01 Premium I I1 61 61 15223 10.1 10.1 6.17
#> 6 4.01 Premium J I1 62.5 62 15223 10.0 9.94 6.24
```
```
diamonds %>%
arrange(desc(z)) %>%
head()
#> # A tibble: 6 x 10
#> carat cut color clarity depth table price x y z
#> <dbl> <ord> <ord> <ord> <dbl> <dbl> <int> <dbl> <dbl> <dbl>
#> 1 0.51 Very Good E VS1 61.8 54.7 1970 5.12 5.15 31.8
#> 2 2 Premium H SI2 58.9 57 12210 8.09 58.9 8.06
#> 3 5.01 Fair J I1 65.5 59 18018 10.7 10.5 6.98
#> 4 4.5 Fair J I1 65.8 58 18531 10.2 10.2 6.72
#> 5 4.13 Fair H I1 64.8 61 17329 10 9.85 6.43
#> 6 3.65 Fair H I1 67.1 53 11668 9.53 9.48 6.38
```
Initially, I only considered univariate outliers. However, to check the plausibility
of those outliers I would informally consider how consistent their values are with
the values of the other variables. In this case, scatter plots of each combination
of `x`, `y`, and `z` shows these outliers much more clearly.
```
ggplot(diamonds, aes(x = x, y = y)) +
geom_point()
```
```
ggplot(diamonds, aes(x = x, y = z)) +
geom_point()
```
```
ggplot(diamonds, aes(x = y, y = z)) +
geom_point()
```
Removing the outliers from `x`, `y`, and `z` makes the distribution easier to see.
The right skewness of these distributions is unsurprising; there should be more smaller diamonds than larger ones and these values can never be negative.
More interestingly, there are spikes in the distribution at certain values.
These spikes often, but not exclusively, occur near integer values.
Without knowing more about diamond cutting, I can’t say more about what these spikes represent. If you know, add a comment.
I would guess that some diamond sizes are used more often than others, and these spikes correspond to those sizes.
Also, I would guess that a diamond cut and carat value of a diamond imply values of `x`, `y`, and `z`.
Since there are spikes in the distribution of carat sizes, and only a few different cuts, that could result in these spikes.
I’ll leave it to readers to figure out if that’s the case.
```
filter(diamonds, x > 0, x < 10) %>%
ggplot() +
geom_histogram(mapping = aes(x = x), binwidth = 0.01) +
scale_x_continuous(breaks = 1:10)
```
```
filter(diamonds, y > 0, y < 10) %>%
ggplot() +
geom_histogram(mapping = aes(x = y), binwidth = 0.01) +
scale_x_continuous(breaks = 1:10)
```
```
filter(diamonds, z > 0, z < 10) %>%
ggplot() +
geom_histogram(mapping = aes(x = z), binwidth = 0.01) +
scale_x_continuous(breaks = 1:10)
```
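One way to begin checking the earlier guess that the spikes correspond to commonly used diamond sizes is to look at the most common carat values and compare them against the spikes above (a quick sketch, not a full analysis):
```
diamonds %>%
  # most frequent carat values, largest counts first
  count(carat, sort = TRUE) %>%
  head(10)
```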
According to the documentation for `diamonds`, `x` is length, `y` is width, and `z` is depth.
If documentation were unavailable, I would compare the values of the variables to match them to the length, width, and depth.
I would expect length to always be less than width, otherwise the length would be called the width.
I would also search for the definitions of length, width, and depth with respect to diamond cuts.
[Depth](https://en.wikipedia.org/wiki/Diamond_cut) can be expressed as a percentage of the length/width of the diamond, which means it should be less than both the length and the width.
```
summarise(diamonds, mean(x > y), mean(x > z), mean(y > z))
#> # A tibble: 1 x 3
#> `mean(x > y)` `mean(x > z)` `mean(y > z)`
#> <dbl> <dbl> <dbl>
#> 1 0.434 1.00 1.00
```
It appears that depth (`z`) is always smaller than length (`x`) or width (`y`), perhaps because a shallower depth helps when setting diamonds in jewelry and because of how it affects the reflection of light.
Length is more than width in less than half the observations, the opposite of my expectations.
### Exercise 7\.3\.2
Explore the distribution of price. Do you discover anything unusual or surprising? (Hint: Carefully think about the `binwidth` and make sure you try a wide range of values.)
* The price data has many spikes, but I can’t tell what each spike corresponds to. The following plots don’t show much difference in the distributions in the last one or two digits.
* There are no diamonds with a price of $1,500 (between $1,455 and $1,545, inclusive).
* There’s a bulge in the distribution around $750\.
```
ggplot(filter(diamonds, price < 2500), aes(x = price)) +
geom_histogram(binwidth = 10, center = 0)
```
```
ggplot(filter(diamonds), aes(x = price)) +
geom_histogram(binwidth = 100, center = 0)
```
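As a quick check of the gap around $1,500 noted above, we can count the diamonds in that price band directly (a small verification snippet):
```
diamonds %>%
  # diamonds priced between $1,455 and $1,545, inclusive
  filter(between(price, 1455, 1545)) %>%
  count()
```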
The last digits of prices are often not uniformly distributed.
They are often round, ending in 0 or 5 (for one\-half).
Another common pattern is ending in 99, as in $1999\.
If we plot the distribution of the last one and two digits of prices do we observe patterns like that?
```
diamonds %>%
mutate(ending = price %% 10) %>%
ggplot(aes(x = ending)) +
geom_histogram(binwidth = 1, center = 0)
```
```
diamonds %>%
mutate(ending = price %% 100) %>%
ggplot(aes(x = ending)) +
geom_histogram(binwidth = 1)
```
```
diamonds %>%
mutate(ending = price %% 1000) %>%
filter(ending >= 500, ending <= 800) %>%
ggplot(aes(x = ending)) +
geom_histogram(binwidth = 1)
```
### Exercise 7\.3\.3
How many diamonds are 0\.99 carat?
How many are 1 carat?
What do you think is the cause of the difference?
There are more than 70 times as many 1 carat diamonds as 0\.99 carat diamond.
```
diamonds %>%
filter(carat >= 0.99, carat <= 1) %>%
count(carat)
#> # A tibble: 2 x 2
#> carat n
#> <dbl> <int>
#> 1 0.99 23
#> 2 1 1558
```
I don’t know exactly how carats are measured, but one way or another some diamonds’ carat values are being “rounded up”.
Presumably there is a premium for a 1 carat diamond vs. a 0\.99 carat diamond beyond the expected increase in price due to a 0\.01 carat increase.[7](#fn7)
To check this intuition, we would want to look at the number of diamonds in each carat range to see if there is an unusually low number of 0\.99 carat diamonds, and an abnormally large number of 1 carat diamonds.
```
diamonds %>%
filter(carat >= 0.9, carat <= 1.1) %>%
count(carat) %>%
print(n = Inf)
#> # A tibble: 21 x 2
#> carat n
#> <dbl> <int>
#> 1 0.9 1485
#> 2 0.91 570
#> 3 0.92 226
#> 4 0.93 142
#> 5 0.94 59
#> 6 0.95 65
#> 7 0.96 103
#> 8 0.97 59
#> 9 0.98 31
#> 10 0.99 23
#> 11 1 1558
#> 12 1.01 2242
#> 13 1.02 883
#> 14 1.03 523
#> 15 1.04 475
#> 16 1.05 361
#> 17 1.06 373
#> 18 1.07 342
#> 19 1.08 246
#> 20 1.09 287
#> 21 1.1 278
```
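The same counts are easier to scan as a bar chart (a sketch of one way to plot them):
```
diamonds %>%
  filter(carat >= 0.9, carat <= 1.1) %>%
  count(carat) %>%
  # one bar per carat value; the spike at 1 carat stands out
  ggplot(aes(x = carat, y = n)) +
  geom_col()
```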
### Exercise 7\.3\.4
Compare and contrast `coord_cartesian()` vs `xlim()` or `ylim()` when zooming in on a histogram. What happens if you leave `binwidth` unset? What happens if you try and zoom so only half a bar shows?
The `coord_cartesian()` function zooms in on the area specified by the limits,
after having calculated and drawn the geoms.
Since the histogram bins have already been calculated, it is unaffected.
```
ggplot(diamonds) +
geom_histogram(mapping = aes(x = price)) +
coord_cartesian(xlim = c(100, 5000), ylim = c(0, 3000))
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
However, the `xlim()` and `ylim()` functions influence actions before the calculation
of the stats related to the histogram. Thus, any values outside the x\- and y\-limits
are dropped before calculating bin widths and counts. This can influence how
the histogram looks.
```
ggplot(diamonds) +
geom_histogram(mapping = aes(x = price)) +
xlim(100, 5000) +
ylim(0, 3000)
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
#> Warning: Removed 14714 rows containing non-finite values (stat_bin).
#> Warning: Removed 6 rows containing missing values (geom_bar).
```
7\.4 Missing values
-------------------
### Exercise 7\.4\.1
What happens to missing values in a histogram?
What happens to missing values in a bar chart?
Why is there a difference?
Missing values are removed when the number of observations in each bin is calculated. See the warning message: `Removed 9 rows containing non-finite values (stat_bin)`
```
diamonds2 <- diamonds %>%
mutate(y = ifelse(y < 3 | y > 20, NA, y))
ggplot(diamonds2, aes(x = y)) +
geom_histogram()
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
#> Warning: Removed 9 rows containing non-finite values (stat_bin).
```
In the `geom_bar()` function, `NA` is treated as another category. The `x` aesthetic in `geom_bar()` requires a discrete (categorical) variable, and missing values act like another category.
```
diamonds %>%
mutate(cut = if_else(runif(n()) < 0.1, NA_character_, as.character(cut))) %>%
ggplot() +
geom_bar(mapping = aes(x = cut))
```
In a histogram, the `x` aesthetic variable needs to be numeric, and `stat_bin()` groups the observations by ranges into bins.
Since the numeric value of the `NA` observations is unknown, they cannot be placed in a particular bin, and are dropped.
### Exercise 7\.4\.2
What does `na.rm = TRUE` do in `mean()` and `sum()`?
This option removes `NA` values from the vector prior to calculating the mean and sum.
```
mean(c(0, 1, 2, NA), na.rm = TRUE)
#> [1] 1
sum(c(0, 1, 2, NA), na.rm = TRUE)
#> [1] 3
```
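For comparison, without `na.rm = TRUE` these functions propagate the missing value (a quick illustration):
```
# any NA in the input makes the result NA
mean(c(0, 1, 2, NA))
#> [1] NA
sum(c(0, 1, 2, NA))
#> [1] NA
```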
7\.5 Covariation
----------------
### 7\.5\.1 A categorical and continuous variable
#### Exercise 7\.5\.1\.1
Use what you’ve learned to improve the visualization of the departure times of cancelled vs. non\-cancelled flights.
Instead of a frequency polygon (`geom_freqpoly()`), use a box plot.
```
nycflights13::flights %>%
mutate(
cancelled = is.na(dep_time),
sched_hour = sched_dep_time %/% 100,
sched_min = sched_dep_time %% 100,
sched_dep_time = sched_hour + sched_min / 60
) %>%
ggplot() +
geom_boxplot(mapping = aes(y = sched_dep_time, x = cancelled))
```
#### Exercise 7\.5\.1\.2
What variable in the diamonds dataset is most important for predicting the price of a diamond?
How is that variable correlated with cut?
Why does the combination of those two relationships lead to lower quality diamonds being more expensive?
What are the general relationships of each variable with the price of the diamonds?
I will consider the variables: `carat`, `clarity`, `color`, and `cut`.
I ignore the dimensions of the diamond since `carat` measures size, and thus incorporates most of the information contained in these variables.
Since both `price` and `carat` are continuous variables, I use a scatter plot to visualize their relationship.
```
ggplot(diamonds, aes(x = carat, y = price)) +
geom_point()
```
However, since there is a large number of points in the data, I will use a boxplot by binning `carat`, as suggested in the chapter:
```
ggplot(data = diamonds, mapping = aes(x = carat, y = price)) +
geom_boxplot(mapping = aes(group = cut_width(carat, 0.1)), orientation = "x")
```
Note that the choice of the binning width is important, as if it were too large it would obscure any relationship, and if it were too small, the values in the bins could be too variable to reveal underlying trends.
Version 3\.3\.0 of ggplot2 introduced changes to boxplots that may affect the orientation.
> This geom treats each axis differently and, thus, can thus have two orientations.
> Often the orientation is easy to deduce from a combination of the given mappings and the types of positional scales in use.
> Thus, ggplot2 will by default try to guess which orientation the layer should have. Under rare circumstances, the orientation is ambiguous and guessing may fail
If you are getting something different with your code check the version of ggplot2\.
Use `orientation = "x"` (vertical boxplots) or `orientation = "y"` (horizontal boxplots) to explicitly specify how the geom should treat these axes.
The variables `color` and `clarity` are ordered categorical variables.
The chapter suggests visualizing a categorical and continuous variable using frequency polygons or boxplots.
In this case, I will use a box plot since it will better show a relationship between the variables.
There is a weak negative relationship between `color` and `price`.
The scale of diamond color goes from D (best) to J (worst).
Currently, the levels of `diamonds$color` are in the wrong order.
Before plotting, I will reverse the order of the `color` levels so they will be in increasing order of quality on the x\-axis.
The `color` column is an example of a factor variable, which is covered in the
“[Factors](https://r4ds.had.co.nz/factors.html)” chapter of *R4DS*.
```
diamonds %>%
mutate(color = fct_rev(color)) %>%
ggplot(aes(x = color, y = price)) +
geom_boxplot()
```
There is also a weak negative relationship between `clarity` and `price`.
The scale of clarity goes from I1 (worst) to IF (best).
```
ggplot(data = diamonds) +
geom_boxplot(mapping = aes(x = clarity, y = price))
```
For both `clarity` and `color`, there is a much larger amount of variation within each category than between categories.
Carat is clearly the single best predictor of diamond prices.
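A rough way to check this claim (a sketch, not part of the original answer) is to compare the \(R^2\) from single\-predictor linear models of price on each candidate variable; carat should come out far ahead:

```
# R^2 of price regressed on each candidate predictor, one at a time.
# A rough check only; lm() treats clarity, color, and cut as categorical factors.
summary(lm(price ~ carat, data = diamonds))$r.squared
summary(lm(price ~ clarity, data = diamonds))$r.squared
summary(lm(price ~ color, data = diamonds))$r.squared
summary(lm(price ~ cut, data = diamonds))$r.squared
```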
Now that we have established that carat appears to be the best predictor of price, what is the relationship between it and cut?
Since this is an example of a continuous (carat) and categorical (cut) variable, it can be visualized with a box plot.
```
ggplot(diamonds, aes(x = cut, y = carat)) +
geom_boxplot()
```
There is a lot of variability in the distribution of carat sizes within each cut category.
There is a slight negative relationship between carat and cut.
Noticeably, the largest carat diamonds have a cut of “Fair” (the lowest).
This negative relationship may be due to the way in which diamonds are selected for sale.
A larger diamond can be profitably sold with a lower quality cut, while a smaller diamond requires a better cut.
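To see this selection effect in the data (a sketch, not from the original answer), one can compare the median carat and median price within each cut; the lower cut categories should have a larger median carat and, as a result, prices that are not the lowest:

```
# Fair-cut diamonds tend to be larger, which pulls their prices up
# despite the lower cut quality.
diamonds %>%
  group_by(cut) %>%
  summarise(median_carat = median(carat), median_price = median(price))
```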
#### Exercise 7\.5\.1\.3
Install the ggstance package, and create a horizontal box plot.
How does this compare to using `coord_flip()`?
Earlier, we created this horizontal box plot of the distribution of `hwy` by `class`, using `geom_boxplot()` and `coord_flip()`:
```
ggplot(data = mpg) +
geom_boxplot(mapping = aes(x = reorder(class, hwy, FUN = median), y = hwy)) +
coord_flip()
```
In this case the output looks the same, but `x` and `y` aesthetics are flipped.
```
library("ggstance")
ggplot(data = mpg) +
geom_boxploth(mapping = aes(y = reorder(class, hwy, FUN = median), x = hwy))
```
Current versions of ggplot2 (since [version 3\.3\.0](https://ggplot2.tidyverse.org/news/index.html#new-features)) do not require `coord_flip()`.
All geoms can choose the direction.
The direction is inferred from the aesthetic mapping.
In this case, switching `x` and `y` produces a horizontal boxplot.
```
ggplot(data = mpg) +
geom_boxplot(mapping = aes(y = reorder(class, hwy, FUN = median), x = hwy))
```
The `orientation` argument is used to explicitly specify the axis orientation of the plot.
```
ggplot(data = mpg) +
geom_boxplot(mapping = aes(y = reorder(class, hwy, FUN = median), x = hwy), orientation = "y")
```
#### Exercise 7\.5\.1\.4
One problem with box plots is that they were developed in an era of much smaller datasets and tend to display a prohibitively large number of “outlying values”.
One approach to remedy this problem is the letter value plot.
Install the lvplot package, and try using `geom_lv()` to display the distribution of price vs cut.
What do you learn?
How do you interpret the plots?
Like box\-plots, the boxes of the letter\-value plot correspond to quantiles. However, they incorporate
far more quantiles than box\-plots. They are useful for larger datasets because,
1. larger datasets can give precise estimates of quantiles beyond the quartiles, and
2. in expectation, larger datasets should have more outliers (in absolute numbers).
```
ggplot(diamonds, aes(x = cut, y = price)) +
geom_lv()
```
The letter\-value plot is described in Hofmann, Wickham, and Kafadar ([2017](#ref-HofmannWickhamKafadar2017)).
#### Exercise 7\.5\.1\.5
Compare and contrast `geom_violin()` with a faceted `geom_histogram()`, or a colored `geom_freqpoly()`.
What are the pros and cons of each method?
I produce plots for these three methods below. The `geom_freqpoly()` is better
for look\-up: meaning that given a price, it is easy to tell which `cut` has the
highest density. However, the overlapping lines make it difficult to distinguish how the overall distributions relate to each other.
The `geom_violin()` and faceted `geom_histogram()` have similar strengths and
weaknesses.
It is easy to visually distinguish differences in the overall shape of the
distributions (skewness, central values, variance, etc).
However, since we can’t easily compare the vertical values of the distribution,
it is difficult to look up which category has the highest density for a given price.
All of these methods depend on tuning parameters to determine the level of
smoothness of the distribution.
```
ggplot(data = diamonds, mapping = aes(x = price, y = ..density..)) +
geom_freqpoly(mapping = aes(color = cut), binwidth = 500)
```
```
ggplot(data = diamonds, mapping = aes(x = price)) +
geom_histogram() +
facet_wrap(~cut, ncol = 1, scales = "free_y")
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
```
ggplot(data = diamonds, mapping = aes(x = cut, y = price)) +
geom_violin() +
coord_flip()
```
The violin plot was first described in Hintze and Nelson ([1998](#ref-HintzeNelson1998)).
#### Exercise 7\.5\.1\.6
If you have a small dataset, it’s sometimes useful to use `geom_jitter()` to see the relationship between a continuous and categorical variable.
The ggbeeswarm package provides a number of methods similar to `geom_jitter()`.
List them and briefly describe what each one does.
There are two methods:
* `geom_quasirandom()` produces plots that are a mix of jitter and violin plots. There are several different methods that determine exactly how the random location of the points is generated.
* `geom_beeswarm()` produces a plot similar to a violin plot, but by offsetting the points.
I’ll use the `mpg` box plot example. Since these methods display individual points, they are better suited for smaller datasets.
```
ggplot(data = mpg) +
geom_quasirandom(mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
))
```
```
ggplot(data = mpg) +
geom_quasirandom(
mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
),
method = "tukey"
)
```
```
ggplot(data = mpg) +
geom_quasirandom(
mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
),
method = "tukeyDense"
)
```
```
ggplot(data = mpg) +
geom_quasirandom(
mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
),
method = "frowney"
)
```
```
ggplot(data = mpg) +
geom_quasirandom(
mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
),
method = "smiley"
)
```
```
ggplot(data = mpg) +
geom_beeswarm(mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
))
```
### 7\.5\.2 Two categorical variables
#### Exercise 7\.5\.2\.1
How could you rescale the count dataset above to more clearly show the distribution of cut within color, or color within cut?
To clearly show the distribution of `cut` within `color`, calculate a new variable `prop` which is the proportion of each cut within a `color`.
This is done using a grouped mutate.
```
diamonds %>%
count(color, cut) %>%
group_by(color) %>%
mutate(prop = n / sum(n)) %>%
ggplot(mapping = aes(x = color, y = cut)) +
geom_tile(mapping = aes(fill = prop))
```
Similarly, to scale by the distribution of `color` within `cut`,
```
diamonds %>%
count(color, cut) %>%
group_by(cut) %>%
mutate(prop = n / sum(n)) %>%
ggplot(mapping = aes(x = color, y = cut)) +
geom_tile(mapping = aes(fill = prop))
```
Adding `limits = c(0, 1)` to the fill scale puts the color scale between 0 and 1, the logical boundaries of a proportion (see the sketch below).
This makes it possible to compare each cell to its actual value, and it would improve comparisons across multiple plots.
However, it compresses the range of colors used and so makes it harder to compare values within the dataset.
Using the default limits, the minimum and maximum of the data, emphasizes relative differences and makes comparisons within the dataset easier, but comparisons across datasets harder.
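For reference, here is a minimal sketch of adding such a limit to the fill scale; `scale_fill_continuous()` is used for illustration, and the original may have used a different scale function:

```
diamonds %>%
  count(color, cut) %>%
  group_by(color) %>%
  mutate(prop = n / sum(n)) %>%
  ggplot(mapping = aes(x = color, y = cut)) +
  geom_tile(mapping = aes(fill = prop)) +
  # Fix the fill scale to the logical range of a proportion.
  scale_fill_continuous(limits = c(0, 1))
```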
#### Exercise 7\.5\.2\.2
Use `geom_tile()` together with dplyr to explore how average flight delays vary by destination and month of year.
What makes the plot difficult to read?
How could you improve it?
```
flights %>%
group_by(month, dest) %>%
summarise(dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
ggplot(aes(x = factor(month), y = dest, fill = dep_delay)) +
geom_tile() +
labs(x = "Month", y = "Destination", fill = "Departure Delay")
#> `summarise()` regrouping output by 'month' (override with `.groups` argument)
```
There are several things that could be done to improve it,
* sort destinations by a meaningful quantity (distance, number of flights, average delay)
* remove missing values
How to treat missing values is difficult.
In this case, missing values correspond to airports which don’t have regular flights (at least one flight each month) from NYC.
These are likely smaller airports (with higher variance in their average due to fewer observations).
When we group all pairs of (`month`, `dest`) again by dest, we should have a total count of 12 (one for each month) per group (`dest`).
This makes it easy to filter.
```
flights %>%
group_by(month, dest) %>% # This gives us (month, dest) pairs
summarise(dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
group_by(dest) %>% # group all (month, dest) pairs by dest ..
filter(n() == 12) %>% # and only select those that have one entry per month
ungroup() %>%
mutate(dest = reorder(dest, dep_delay)) %>%
ggplot(aes(x = factor(month), y = dest, fill = dep_delay)) +
geom_tile() +
labs(x = "Month", y = "Destination", fill = "Departure Delay")
#> `summarise()` regrouping output by 'month' (override with `.groups` argument)
```
#### Exercise 7\.5\.2\.3
Why is it slightly better to use `aes(x = color, y = cut)` rather than `aes(x = cut, y = color)` in the example above?
It’s usually better to use the categorical variable with a larger number of categories or the longer labels on the y axis.
If at all possible, labels should be horizontal because that is easier to read.
In this case, however, switching the order does not result in overlapping labels, so either orientation is readable.
```
diamonds %>%
count(color, cut) %>%
ggplot(mapping = aes(y = color, x = cut)) +
geom_tile(mapping = aes(fill = n))
```
Another justification for switching the order is that the larger numbers are at the top when `x = color` and `y = cut`, which lowers the cognitive burden of interpreting the plot.
### 7\.5\.3 Two continuous variables
#### Exercise 7\.5\.3\.1
Instead of summarizing the conditional distribution with a box plot, you could use a frequency polygon.
What do you need to consider when using `cut_width()` vs `cut_number()`?
How does that impact a visualization of the 2d distribution of `carat` and `price`?
Both `cut_width()` and `cut_number()` split a variable into groups.
When using `cut_width()`, we need to choose the width, and the number of
bins will be calculated automatically.
When using `cut_number()`, we need to specify the number of bins, and
the widths will be calculated automatically.
In either case, we want to choose the bin widths and number to be large enough
to aggregate observations to remove noise, but not so large as to remove all the signal.
If categorical colors are used, no more than eight colors should be used
in order to keep them distinct. Using `cut_number`, I will split carats into
quantiles (five groups).
```
ggplot(
data = diamonds,
mapping = aes(color = cut_number(carat, 5), x = price)
) +
geom_freqpoly() +
labs(x = "Price", y = "Count", color = "Carat")
```
Alternatively, I could use `cut_width` to specify widths at which to cut.
I will choose 1\-carat widths. Since there are very few diamonds larger than
2\-carats, this is not as informative. However, using a width of 0\.5 carats
creates too many groups, and splitting at non\-whole numbers is unappealing.
```
ggplot(
data = diamonds,
mapping = aes(color = cut_width(carat, 1, boundary = 0), x = price)
) +
geom_freqpoly() +
labs(x = "Price", y = "Count", color = "Carat")
```
#### Exercise 7\.5\.3\.2
Visualize the distribution of `carat`, partitioned by `price`.
Plotted with a box plot with 10 bins, each containing an equal number of observations, so the price range covered by each bin is determined by the data.
```
ggplot(diamonds, aes(x = cut_number(price, 10), y = carat)) +
geom_boxplot() +
coord_flip() +
xlab("Price")
```
Plotted with a box plot with 10 equal\-width bins of $2,000\. The argument `boundary = 0` ensures that the first bin is $0–$2,000\.
```
ggplot(diamonds, aes(x = cut_width(price, 2000, boundary = 0), y = carat)) +
geom_boxplot(varwidth = TRUE) +
coord_flip() +
xlab("Price")
```
#### Exercise 7\.5\.3\.3
How does the price distribution of very large diamonds compare to small diamonds?
Is it as you expect, or does it surprise you?
The distribution of very large diamonds is more variable.
I am not surprised, since I knew little about diamond prices.
After the fact, it does not seem surprising (as many things do).
I would guess that this is due to the way in which diamonds are selected for retail sales.
Suppose that someone selling a diamond only finds it profitable to sell it if some combination of size, cut, clarity, and color is above a certain threshold.
The smallest diamonds are only profitable to sell if they are exceptional in all the other factors (cut, clarity, and color), so the small diamonds sold have similar characteristics.
However, larger diamonds may be profitable regardless of the values of the other factors.
Thus we will observe large diamonds with a wider variety of cut, clarity, and color and thus more variability in prices.
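As a rough numeric check of this (a sketch; the carat cutoffs of 0\.5 and 2\.5 are arbitrary choices, not from the original answer):

```
# Compare the price spread of small versus very large diamonds.
diamonds %>%
  filter(carat < 0.5 | carat > 2.5) %>%
  mutate(size = if_else(carat > 2.5, "very large (> 2.5 carat)", "small (< 0.5 carat)")) %>%
  group_by(size) %>%
  summarise(n = n(), iqr_price = IQR(price), sd_price = sd(price))
```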
#### Exercise 7\.5\.3\.4
Combine two of the techniques you’ve learned to visualize the combined distribution of cut, carat, and price.
There are many options to try, so your solutions may vary from mine.
Here are a few options that I tried.
```
ggplot(diamonds, aes(x = carat, y = price)) +
geom_hex() +
facet_wrap(~cut, ncol = 1)
```
```
ggplot(diamonds, aes(x = cut_number(carat, 5), y = price, colour = cut)) +
geom_boxplot()
```
```
ggplot(diamonds, aes(colour = cut_number(carat, 5), y = price, x = cut)) +
geom_boxplot()
```
#### Exercise 7\.5\.3\.5
Two dimensional plots reveal outliers that are not visible in one dimensional plots.
For example, some points in the plot below have an unusual combination of `x` and `y` values, which makes the points outliers even though their `x` and `y` values appear normal when examined separately.
```
ggplot(data = diamonds) +
geom_point(mapping = aes(x = x, y = y)) +
coord_cartesian(xlim = c(4, 11), ylim = c(4, 11))
```
Why is a scatterplot a better display than a binned plot for this case?
In this case, there is a strong relationship between \\(x\\) and \\(y\\). The outliers in this case are not extreme in either \\(x\\) or \\(y\\).
A binned plot would not reveal these outliers, and may lead us to conclude that the largest value of \\(x\\) was an outlier even though it appears to fit the bivariate pattern well.
The later chapter [Model Basics](model-basics.html#model-basics) discusses fitting models to bivariate data and plotting residuals, which would reveal these outliers.
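For comparison, a binned version of the same plot (a sketch using `geom_bin2d()`) absorbs these isolated points into bins with small counts, making them much harder to spot than in the scatterplot:

```
ggplot(data = diamonds) +
  geom_bin2d(mapping = aes(x = x, y = y)) +
  coord_cartesian(xlim = c(4, 11), ylim = c(4, 11))
```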
7\.6 Patterns and models
------------------------
No exercises
7\.7 ggplot2 calls
------------------
No exercises
7\.8 Learning more
------------------
No exercises
7\.1 Introduction
-----------------
This chapter will also use data from the **nycflights13** package.
The **ggbeeswarm**, **lvplot**, and **ggstance** packages provide some additional functions used in some solutions.
```
library("tidyverse")
library("nycflights13")
library("ggbeeswarm")
library("lvplot")
library("ggstance")
```
7\.2 Questions
--------------
No exercises
7\.3 Variation
--------------
### Exercise 7\.3\.1
Explore the distribution of each of the `x`, `y`, and `z` variables in `diamonds`. What do you learn?
Think about a diamond and how you might decide which dimension is the length, width, and depth.
First, I’ll calculate summary statistics for these variables and plot their distributions.
```
summary(select(diamonds, x, y, z))
#> x y z
#> Min. : 0.00 Min. : 0.0 Min. : 0.0
#> 1st Qu.: 4.71 1st Qu.: 4.7 1st Qu.: 2.9
#> Median : 5.70 Median : 5.7 Median : 3.5
#> Mean : 5.73 Mean : 5.7 Mean : 3.5
#> 3rd Qu.: 6.54 3rd Qu.: 6.5 3rd Qu.: 4.0
#> Max. :10.74 Max. :58.9 Max. :31.8
```
```
ggplot(diamonds) +
geom_histogram(mapping = aes(x = x), binwidth = 0.01)
```
```
ggplot(diamonds) +
geom_histogram(mapping = aes(x = y), binwidth = 0.01)
```
```
ggplot(diamonds) +
geom_histogram(mapping = aes(x = z), binwidth = 0.01)
```
There are several noticeable features of the distributions:
1. `x` and `y` are larger than `z`,
2. there are outliers,
3. they are all right skewed, and
4. they are multimodal or “spiky”.
The typical values of `x` and `y` are larger than `z`, with `x` and `y` having inter\-quartile
ranges of 4\.7–6\.5, while `z` has an inter\-quartile range of 2\.9–4\.0\.
There are two types of outliers in this data.
Some diamonds have values of zero and some have abnormally large values of `x`, `y`, or `z`.
These appear to be either data entry errors, or an undocumented convention in the dataset for indicating missing values. An alternative hypothesis would be that values of zero are the
result of rounding values like `0.002` down, but since there are no diamonds with values of 0\.01, that does not seem to be the case.
```
filter(diamonds, x == 0 | y == 0 | z == 0)
#> # A tibble: 20 x 10
#> carat cut color clarity depth table price x y z
#> <dbl> <ord> <ord> <ord> <dbl> <dbl> <int> <dbl> <dbl> <dbl>
#> 1 1 Premium G SI2 59.1 59 3142 6.55 6.48 0
#> 2 1.01 Premium H I1 58.1 59 3167 6.66 6.6 0
#> 3 1.1 Premium G SI2 63 59 3696 6.5 6.47 0
#> 4 1.01 Premium F SI2 59.2 58 3837 6.5 6.47 0
#> 5 1.5 Good G I1 64 61 4731 7.15 7.04 0
#> 6 1.07 Ideal F SI2 61.6 56 4954 0 6.62 0
#> # … with 14 more rows
```
There are also some diamonds with values of `y` and `z` that are abnormally large.
There are diamonds with `y == 58.9` and `y == 31.8`, and one with `z == 31.8`.
These are probably data errors since the values do not seem in line with the values of
the other variables.
```
diamonds %>%
arrange(desc(y)) %>%
head()
#> # A tibble: 6 x 10
#> carat cut color clarity depth table price x y z
#> <dbl> <ord> <ord> <ord> <dbl> <dbl> <int> <dbl> <dbl> <dbl>
#> 1 2 Premium H SI2 58.9 57 12210 8.09 58.9 8.06
#> 2 0.51 Ideal E VS1 61.8 55 2075 5.15 31.8 5.12
#> 3 5.01 Fair J I1 65.5 59 18018 10.7 10.5 6.98
#> 4 4.5 Fair J I1 65.8 58 18531 10.2 10.2 6.72
#> 5 4.01 Premium I I1 61 61 15223 10.1 10.1 6.17
#> 6 4.01 Premium J I1 62.5 62 15223 10.0 9.94 6.24
```
```
diamonds %>%
arrange(desc(z)) %>%
head()
#> # A tibble: 6 x 10
#> carat cut color clarity depth table price x y z
#> <dbl> <ord> <ord> <ord> <dbl> <dbl> <int> <dbl> <dbl> <dbl>
#> 1 0.51 Very Good E VS1 61.8 54.7 1970 5.12 5.15 31.8
#> 2 2 Premium H SI2 58.9 57 12210 8.09 58.9 8.06
#> 3 5.01 Fair J I1 65.5 59 18018 10.7 10.5 6.98
#> 4 4.5 Fair J I1 65.8 58 18531 10.2 10.2 6.72
#> 5 4.13 Fair H I1 64.8 61 17329 10 9.85 6.43
#> 6 3.65 Fair H I1 67.1 53 11668 9.53 9.48 6.38
```
Initially, I only considered univariate outliers. However, to check the plausibility
of those outliers I would informally consider how consistent their values are with
the values of the other variables. In this case, scatter plots of each combination
of `x`, `y`, and `z` show these outliers much more clearly.
```
ggplot(diamonds, aes(x = x, y = y)) +
geom_point()
```
```
ggplot(diamonds, aes(x = x, y = z)) +
geom_point()
```
```
ggplot(diamonds, aes(x = y, y = z)) +
geom_point()
```
Removing the outliers from `x`, `y`, and `z` makes the distribution easier to see.
The right skewness of these distributions is unsurprising; there should be more smaller diamonds than larger ones and these values can never be negative.
More interestingly, there are spikes in the distribution at certain values.
These spikes often, but not exclusively, occur near integer values.
Without knowing more about diamond cutting, I can’t say more about what these spikes represent. If you know, add a comment.
I would guess that some diamond sizes are used more often than others, and these spikes correspond to those sizes.
Also, I would guess that the cut and carat of a diamond largely determine the values of `x`, `y`, and `z`.
Since there are spikes in the distribution of carat sizes, and only a few different cuts, that could result in these spikes.
I’ll leave it to readers to figure out if that’s the case.
```
filter(diamonds, x > 0, x < 10) %>%
ggplot() +
geom_histogram(mapping = aes(x = x), binwidth = 0.01) +
scale_x_continuous(breaks = 1:10)
```
```
filter(diamonds, y > 0, y < 10) %>%
ggplot() +
geom_histogram(mapping = aes(x = y), binwidth = 0.01) +
scale_x_continuous(breaks = 1:10)
```
```
filter(diamonds, z > 0, z < 10) %>%
ggplot() +
geom_histogram(mapping = aes(x = z), binwidth = 0.01) +
scale_x_continuous(breaks = 1:10)
```
According to the documentation for `diamonds`, `x` is length, `y` is width, and `z` is depth.
If documentation were unavailable, I would compare the values of the variables to match them to the length, width, and depth.
I would expect length to always be greater than or equal to width; otherwise, that dimension would be called the width.
I would also search for the definitions of length, width, and depth with respect to diamond cuts.
[Depth](https://en.wikipedia.org/wiki/Diamond_cut) can be expressed as a percentage of the length/width of the diamond, which means it should be less than both the length and the width.
```
summarise(diamonds, mean(x > y), mean(x > z), mean(y > z))
#> # A tibble: 1 x 3
#> `mean(x > y)` `mean(x > z)` `mean(y > z)`
#> <dbl> <dbl> <dbl>
#> 1 0.434 1.00 1.00
```
It appears that depth (`z`) is always smaller than length (`x`) or width (`y`), perhaps because a shallower depth helps when setting diamonds in jewelry and due to how it affects the reflection of light.
Length is more than width in less than half the observations, the opposite of my expectations.
### Exercise 7\.3\.2
Explore the distribution of price. Do you discover anything unusual or surprising? (Hint: Carefully think about the `binwidth` and make sure you try a wide range of values.)
* The price data has many spikes, but I can’t tell what each spike corresponds to. The following plots don’t show much difference in the distributions in the last one or two digits.
* There are no diamonds with a price of $1,500 (between $1,455 and $1,545, inclusive); a quick check of this gap follows the histograms below.
* There’s a bulge in the distribution around $750\.
```
ggplot(filter(diamonds, price < 2500), aes(x = price)) +
geom_histogram(binwidth = 10, center = 0)
```
```
ggplot(filter(diamonds), aes(x = price)) +
geom_histogram(binwidth = 100, center = 0)
```
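A quick count checks the gap around $1,500 noted above (a sketch):

```
# Should report n = 0 if the gap described above holds.
diamonds %>%
  filter(price >= 1455, price <= 1545) %>%
  count()
```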
The last digits of prices are often not uniformly distributed.
They are often round, ending in 0 or 5 (for one\-half).
Another common pattern is ending in 99, as in $1999\.
If we plot the distribution of the last one and two digits of prices do we observe patterns like that?
```
diamonds %>%
mutate(ending = price %% 10) %>%
ggplot(aes(x = ending)) +
geom_histogram(binwidth = 1, center = 0)
```
```
diamonds %>%
mutate(ending = price %% 100) %>%
ggplot(aes(x = ending)) +
geom_histogram(binwidth = 1)
```
```
diamonds %>%
mutate(ending = price %% 1000) %>%
filter(ending >= 500, ending <= 800) %>%
ggplot(aes(x = ending)) +
geom_histogram(binwidth = 1)
```
### Exercise 7\.3\.3
How many diamonds are 0\.99 carat?
How many are 1 carat?
What do you think is the cause of the difference?
There are about 70 times as many 1 carat diamonds as 0\.99 carat diamonds.
```
diamonds %>%
filter(carat >= 0.99, carat <= 1) %>%
count(carat)
#> # A tibble: 2 x 2
#> carat n
#> <dbl> <int>
#> 1 0.99 23
#> 2 1 1558
```
I don’t know exactly how carats are measured, but somehow some diamonds’ carat values are being “rounded up”.
Presumably there is a premium for a 1 carat diamond vs. a 0\.99 carat diamond beyond the expected increase in price due to a 0\.01 carat increase.[7](#fn7)
To check this intuition, we would want to look at the number of diamonds in each carat range to see if there is an unusually low number of 0\.99 carat diamonds, and an abnormally large number of 1 carat diamonds.
```
diamonds %>%
filter(carat >= 0.9, carat <= 1.1) %>%
count(carat) %>%
print(n = Inf)
#> # A tibble: 21 x 2
#> carat n
#> <dbl> <int>
#> 1 0.9 1485
#> 2 0.91 570
#> 3 0.92 226
#> 4 0.93 142
#> 5 0.94 59
#> 6 0.95 65
#> 7 0.96 103
#> 8 0.97 59
#> 9 0.98 31
#> 10 0.99 23
#> 11 1 1558
#> 12 1.01 2242
#> 13 1.02 883
#> 14 1.03 523
#> 15 1.04 475
#> 16 1.05 361
#> 17 1.06 373
#> 18 1.07 342
#> 19 1.08 246
#> 20 1.09 287
#> 21 1.1 278
```
### Exercise 7\.3\.4
Compare and contrast `coord_cartesian()` vs `xlim()` or `ylim()` when zooming in on a histogram. What happens if you leave `binwidth` unset? What happens if you try and zoom so only half a bar shows?
The `coord_cartesian()` function zooms in on the area specified by the limits,
after having calculated and drawn the geoms.
Since the histogram bins have already been calculated, it is unaffected.
```
ggplot(diamonds) +
geom_histogram(mapping = aes(x = price)) +
coord_cartesian(xlim = c(100, 5000), ylim = c(0, 3000))
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
However, the `xlim()` and `ylim()` functions influence actions before the calculation
of the stats related to the histogram. Thus, any values outside the x\- and y\-limits
are dropped before calculating bin widths and counts. This can influence how
the histogram looks. If `binwidth` is left unset, `stat_bin()` falls back to its default of 30 bins and prints a message suggesting a better value, as in the examples below. If you zoom in so that only part of a bar would show, `coord_cartesian()` simply clips the drawn bar, while `xlim()` and `ylim()` drop the underlying observations and recompute the bins (a short sketch at the end of this answer illustrates the difference).
```
ggplot(diamonds) +
geom_histogram(mapping = aes(x = price)) +
xlim(100, 5000) +
ylim(0, 3000)
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
#> Warning: Removed 14714 rows containing non-finite values (stat_bin).
#> Warning: Removed 6 rows containing missing values (geom_bar).
```
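To see the “half a bar” behavior directly, one can set an explicit binwidth and zoom into the middle of a bin (a sketch; the binwidth and limits are arbitrary choices):

```
# With coord_cartesian() the first $0-$1,000 bar is simply clipped at price = 500;
# replacing it with xlim(500, 6000) would instead drop those observations and re-bin.
ggplot(diamonds) +
  geom_histogram(mapping = aes(x = price), binwidth = 1000, boundary = 0) +
  coord_cartesian(xlim = c(500, 6000))
```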
7\.5 Covariation
----------------
### 7\.5\.1 A categorical and continuous variable
#### Exercise 7\.5\.1\.1
Use what you’ve learned to improve the visualization of the departure times of cancelled vs. non\-cancelled flights.
Instead of a `freqplot` use a box\-plot
```
nycflights13::flights %>%
mutate(
cancelled = is.na(dep_time),
sched_hour = sched_dep_time %/% 100,
sched_min = sched_dep_time %% 100,
sched_dep_time = sched_hour + sched_min / 60
) %>%
ggplot() +
geom_boxplot(mapping = aes(y = sched_dep_time, x = cancelled))
```
#### Exercise 7\.5\.1\.2
What variable in the diamonds dataset is most important for predicting the price of a diamond?
How is that variable correlated with cut?
Why does the combination of those two relationships lead to lower quality diamonds being more expensive?
What are the general relationships of each variable with the price of the diamonds?
I will consider the variables: `carat`, `clarity`, `color`, and `cut`.
I ignore the dimensions of the diamond since `carat` measures size, and thus incorporates most of the information contained in these variables.
Since both `price` and `carat` are continuous variables, I use a scatter plot to visualize their relationship.
```
ggplot(diamonds, aes(x = carat, y = price)) +
geom_point()
```
However, since there is a large number of points in the data, I will use a boxplot by binning `carat`, as suggested in the chapter:
```
ggplot(data = diamonds, mapping = aes(x = carat, y = price)) +
geom_boxplot(mapping = aes(group = cut_width(carat, 0.1)), orientation = "x")
```
Note that the choice of the binning width is important, as if it were too large it would obscure any relationship, and if it were too small, the values in the bins could be too variable to reveal underlying trends.
Version 3\.3\.0 of ggplot2 introduced changes to boxplots that may affect the orientation.
> This geom treats each axis differently and, thus, can thus have two orientations.
> Often the orientation is easy to deduce from a combination of the given mappings and the types of positional scales in use.
> Thus, ggplot2 will by default try to guess which orientation the layer should have. Under rare circumstances, the orientation is ambiguous and guessing may fail
If you are getting something different with your code check the version of ggplot2\.
Use `orientation = "x"` (vertical boxplots) or `orientation = "y"` (horizontal boxplots) to explicitly specify how the geom should treat these axes.
The variables `color` and `clarity` are ordered categorical variables.
The chapter suggests visualizing a categorical and continuous variable using frequency polygons or boxplots.
In this case, I will use a box plot since it will better show a relationship between the variables.
There is a weak negative relationship between `color` and `price`.
The scale of diamond color goes from D (best) to J (worst).
Currently, the levels of `diamonds$color` are in the wrong order.
Before plotting, I will reverse the order of the `color` levels so they will be in increasing order of quality on the x\-axis.
The `color` column is an example of a factor variable, which is covered in the
“[Factors](https://r4ds.had.co.nz/factors.html)” chapter of *R4DS*.
```
diamonds %>%
mutate(color = fct_rev(color)) %>%
ggplot(aes(x = color, y = price)) +
geom_boxplot()
```
There is also weak negative relationship between `clarity` and `price`.
The scale of clarity goes from I1 (worst) to IF (best).
```
ggplot(data = diamonds) +
geom_boxplot(mapping = aes(x = clarity, y = price))
```
For both `clarity` and `color`, there is a much larger amount of variation within each category than between categories.
Carat is clearly the single best predictor of diamond prices.
Now that we have established that carat appears to be the best predictor of price, what is the relationship between it and cut?
Since this is an example of a continuous (carat) and categorical (cut) variable, it can be visualized with a box plot.
```
ggplot(diamonds, aes(x = cut, y = carat)) +
geom_boxplot()
```
There is a lot of variability in the distribution of carat sizes within each cut category.
There is a slight negative relationship between carat and cut.
Noticeably, the largest carat diamonds have a cut of “Fair” (the lowest).
This negative relationship can be due to the way in which diamonds are selected for sale.
A larger diamond can be profitably sold with a lower quality cut, while a smaller diamond requires a better cut.
#### Exercise 7\.5\.1\.3
Install the ggstance package, and create a horizontal box plot.
How does this compare to using `coord_flip()`?
Earlier, we created this horizontal box plot of the distribution `hwy` by `class`, using `geom_boxplot()` and `coord_flip()`:
```
ggplot(data = mpg) +
geom_boxplot(mapping = aes(x = reorder(class, hwy, FUN = median), y = hwy)) +
coord_flip()
```
In this case the output looks the same, but `x` and `y` aesthetics are flipped.
```
library("ggstance")
ggplot(data = mpg) +
geom_boxploth(mapping = aes(y = reorder(class, hwy, FUN = median), x = hwy))
```
Current versions of ggplot2 (since [version 3\.3\.0](https://ggplot2.tidyverse.org/news/index.html#new-features)) do not require `coord_flip()`.
All geoms can choose the direction.
The direction is be inferred from the aesthetic mapping.
In this case, switching `x` and `y` produces a horizontal boxplot.
```
ggplot(data = mpg) +
geom_boxplot(mapping = aes(y = reorder(class, hwy, FUN = median), x = hwy))
```
The `orientation` argument is used to explicitly specify the axis orientation of the plot.
```
ggplot(data = mpg) +
geom_boxplot(mapping = aes(y = reorder(class, hwy, FUN = median), x = hwy), orientation = "y")
```
#### Exercise 7\.5\.1\.4
One problem with box plots is that they were developed in an era of much smaller datasets and tend to display a prohibitively large number of “outlying values”.
One approach to remedy this problem is the letter value plot.
Install the lvplot package, and try using `geom_lv()` to display the distribution of price vs cut.
What do you learn?
How do you interpret the plots?
Like box\-plots, the boxes of the letter\-value plot correspond to quantiles. However, they incorporate
far more quantiles than box\-plots. They are useful for larger datasets because,
1. larger datasets can give precise estimates of quantiles beyond the quartiles, and
2. in expectation, larger datasets should have more outliers (in absolute numbers).
```
ggplot(diamonds, aes(x = cut, y = price)) +
geom_lv()
```
The letter\-value plot is described in Hofmann, Wickham, and Kafadar ([2017](#ref-HofmannWickhamKafadar2017)).
#### Exercise 7\.5\.1\.5
Compare and contrast `geom_violin()` with a faceted `geom_histogram()`, or a colored `geom_freqpoly()`.
What are the pros and cons of each method?
I produce plots for these three methods below. The `geom_freqpoly()` is better
for look\-up: meaning that given a price, it is easy to tell which `cut` has the
highest density. However, the overlapping lines makes it difficult to distinguish how the overall distributions relate to each other.
The `geom_violin()` and faceted `geom_histogram()` have similar strengths and
weaknesses.
It is easy to visually distinguish differences in the overall shape of the
distributions (skewness, central values, variance, etc).
However, since we can’t easily compare the vertical values of the distribution,
it is difficult to look up which category has the highest density for a given price.
All of these methods depend on tuning parameters to determine the level of
smoothness of the distribution.
```
ggplot(data = diamonds, mapping = aes(x = price, y = ..density..)) +
geom_freqpoly(mapping = aes(color = cut), binwidth = 500)
```
```
ggplot(data = diamonds, mapping = aes(x = price)) +
geom_histogram() +
facet_wrap(~cut, ncol = 1, scales = "free_y")
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
```
ggplot(data = diamonds, mapping = aes(x = cut, y = price)) +
geom_violin() +
coord_flip()
```
The violin plot was first described in Hintze and Nelson ([1998](#ref-HintzeNelson1998)).
#### Exercise 7\.5\.1\.6
If you have a small dataset, it’s sometimes useful to use `geom_jitter()` to see the relationship between a continuous and categorical variable.
The ggbeeswarm package provides a number of methods similar to `geom_jitter()`.
List them and briefly describe what each one does.
There are two methods:
* `geom_quasirandom()` produces plots that are a mix of jitter and violin plots. There are several different methods that determine exactly how the random location of the points is generated.
* `geom_beeswarm()` produces a plot similar to a violin plot, but by offsetting the points.
I’ll use the `mpg` box plot example since these methods display individual points, they are better suited for smaller datasets.
```
ggplot(data = mpg) +
geom_quasirandom(mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
))
```
```
ggplot(data = mpg) +
geom_quasirandom(
mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
),
method = "tukey"
)
```
```
ggplot(data = mpg) +
geom_quasirandom(
mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
),
method = "tukeyDense"
)
```
```
ggplot(data = mpg) +
geom_quasirandom(
mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
),
method = "frowney"
)
```
```
ggplot(data = mpg) +
geom_quasirandom(
mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
),
method = "smiley"
)
```
```
ggplot(data = mpg) +
geom_beeswarm(mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
))
```
### 7\.5\.2 Two categorical variables
#### Exercise 7\.5\.2\.1
How could you rescale the count dataset above to more clearly show the distribution of cut within color, or color within cut?
To clearly show the distribution of `cut` within `color`, calculate a new variable `prop` which is the proportion of each cut within a `color`.
This is done using a grouped mutate.
```
diamonds %>%
count(color, cut) %>%
group_by(color) %>%
mutate(prop = n / sum(n)) %>%
ggplot(mapping = aes(x = color, y = cut)) +
geom_tile(mapping = aes(fill = prop))
```
Similarly, to scale by the distribution of `color` within `cut`,
```
diamonds %>%
count(color, cut) %>%
group_by(cut) %>%
mutate(prop = n / sum(n)) %>%
ggplot(mapping = aes(x = color, y = cut)) +
geom_tile(mapping = aes(fill = prop))
```
I add `limit = c(0, 1)` to put the color scale between (0, 1\).
These are the logical boundaries of proportions.
This makes it possible to compare each cell to its actual value, and would improve comparisons across multiple plots.
However, it ends up limiting the colors and makes it harder to compare within the dataset.
However, using the default limits of the minimum and maximum values makes it easier to compare within the dataset the emphasizing relative differences, but harder to compare across datasets.
#### Exercise 7\.5\.2\.2
Use `geom_tile()` together with dplyr to explore how average flight delays vary by destination and month of year.
What makes the plot difficult to read?
How could you improve it?
```
flights %>%
group_by(month, dest) %>%
summarise(dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
ggplot(aes(x = factor(month), y = dest, fill = dep_delay)) +
geom_tile() +
labs(x = "Month", y = "Destination", fill = "Departure Delay")
#> `summarise()` regrouping output by 'month' (override with `.groups` argument)
```
There are several things that could be done to improve it,
* sort destinations by a meaningful quantity (distance, number of flights, average delay)
* remove missing values
How to treat missing values is difficult.
In this case, missing values correspond to airports which don’t have regular flights (at least one flight each month) from NYC.
These are likely smaller airports (with higher variance in their average due to fewer observations).
When we group all pairs of (`month`, `dest`) again by dest, we should have a total count of 12 (one for each month) per group (`dest`).
This makes it easy to filter.
```
flights %>%
group_by(month, dest) %>% # This gives us (month, dest) pairs
summarise(dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
group_by(dest) %>% # group all (month, dest) pairs by dest ..
filter(n() == 12) %>% # and only select those that have one entry per month
ungroup() %>%
mutate(dest = reorder(dest, dep_delay)) %>%
ggplot(aes(x = factor(month), y = dest, fill = dep_delay)) +
geom_tile() +
labs(x = "Month", y = "Destination", fill = "Departure Delay")
#> `summarise()` regrouping output by 'month' (override with `.groups` argument)
```
#### Exercise 7\.5\.2\.3
Why is it slightly better to use `aes(x = color, y = cut)` rather than `aes(x = cut, y = color)` in the example above?
It’s usually better to use the categorical variable with a larger number of categories or the longer labels on the y axis.
If at all possible, labels should be horizontal because that is easier to read.
However, switching the order doesn’t result in overlapping labels.
```
diamonds %>%
count(color, cut) %>%
ggplot(mapping = aes(y = color, x = cut)) +
geom_tile(mapping = aes(fill = n))
```
Another justification, for switching the order is that the larger numbers are at the top when `x = color` and `y = cut`, and that lowers the cognitive burden of interpreting the plot.
### 7\.5\.3 Two continuous variables
#### Exercise 7\.5\.3\.1
Instead of summarizing the conditional distribution with a box plot, you could use a frequency polygon.
What do you need to consider when using `cut_width()` vs `cut_number()`?
How does that impact a visualization of the 2d distribution of `carat` and `price`?
Both `cut_width()` and `cut_number()` split a variable into groups.
When using `cut_width()`, we need to choose the width, and the number of
bins will be calculated automatically.
When using `cut_number()`, we need to specify the number of bins, and
the widths will be calculated automatically.
In either case, we want to choose the bin widths and number to be large enough
to aggregate observations to remove noise, but not so large as to remove all the signal.
If categorical colors are used, no more than eight colors should be used
in order to keep them distinct. Using `cut_number`, I will split carats into
quantiles (five groups).
```
ggplot(
data = diamonds,
mapping = aes(color = cut_number(carat, 5), x = price)
) +
geom_freqpoly() +
labs(x = "Price", y = "Count", color = "Carat")
```
Alternatively, I could use `cut_width` to specify widths at which to cut.
I will choose 1\-carat widths. Since there are very few diamonds larger than
2\-carats, this is not as informative. However, using a width of 0\.5 carats
creates too many groups, and splitting at non\-whole numbers is unappealing.
```
ggplot(
data = diamonds,
mapping = aes(color = cut_width(carat, 1, boundary = 0), x = price)
) +
geom_freqpoly() +
labs(x = "Price", y = "Count", color = "Carat")
```
#### Exercise 7\.5\.3\.2
Visualize the distribution of `carat`, partitioned by `price`.
Plotted with a box plot with 10 bins with an equal number of observations, and the width determined by the number of observations.
```
ggplot(diamonds, aes(x = cut_number(price, 10), y = carat)) +
geom_boxplot() +
coord_flip() +
xlab("Price")
```
Plotted with a box plot with 10 equal\-width bins of $2,000\. The argument `boundary = 0` ensures that first bin is $0–$2,000\.
```
ggplot(diamonds, aes(x = cut_width(price, 2000, boundary = 0), y = carat)) +
geom_boxplot(varwidth = TRUE) +
coord_flip() +
xlab("Price")
```
#### Exercise 7\.5\.3\.3
How does the price distribution of very large diamonds compare to small diamonds.
Is it as you expect, or does it surprise you?
The distribution of very large diamonds is more variable.
I am not surprised, since I knew little about diamond prices.
After the fact, it does not seem surprising (as many thing do).
I would guess that this is due to the way in which diamonds are selected for retail sales.
Suppose that someone selling a diamond only finds it profitable to sell it if some combination size, cut, clarity, and color are above a certain threshold.
The smallest diamonds are only profitable to sell if they are exceptional in all the other factors (cut, clarity, and color), so the small diamonds sold have similar characteristics.
However, larger diamonds may be profitable regardless of the values of the other factors.
Thus we will observe large diamonds with a wider variety of cut, clarity, and color and thus more variability in prices.
#### Exercise 7\.5\.3\.4
Combine two of the techniques you’ve learned to visualize the combined distribution of cut, carat, and price.
There are many options to try, so your solutions may vary from mine.
Here are a few options that I tried.
```
ggplot(diamonds, aes(x = carat, y = price)) +
geom_hex() +
facet_wrap(~cut, ncol = 1)
```
```
ggplot(diamonds, aes(x = cut_number(carat, 5), y = price, colour = cut)) +
geom_boxplot()
```
```
ggplot(diamonds, aes(colour = cut_number(carat, 5), y = price, x = cut)) +
geom_boxplot()
```
#### Exercise 7\.5\.3\.5
Two dimensional plots reveal outliers that are not visible in one dimensional plots.
For example, some points in the plot below have an unusual combination of `x` and `y` values, which makes the points outliers even though their `x` and `y` values appear normal when examined separately.
```
ggplot(data = diamonds) +
geom_point(mapping = aes(x = x, y = y)) +
coord_cartesian(xlim = c(4, 11), ylim = c(4, 11))
```
Why is a scatterplot a better display than a binned plot for this case?
In this case, there is a strong relationship between \\(x\\) and \\(y\\). The outliers in this case are not extreme in either \\(x\\) or \\(y\\).
A binned plot would not reveal these outliers, and may lead us to conclude that the largest value of \\(x\\) was an outlier even though it appears to fit the bivariate pattern well.
The later chapter [Model Basics](model-basics.html#model-basics) discusses fitting models to bivariate data and plotting residuals, which would reveal these outliers.
### 7\.5\.1 A categorical and continuous variable
#### Exercise 7\.5\.1\.1
Use what you’ve learned to improve the visualization of the departure times of cancelled vs. non\-cancelled flights.
Instead of a `geom_freqpoly()`, use a box plot:
```
nycflights13::flights %>%
mutate(
cancelled = is.na(dep_time),
sched_hour = sched_dep_time %/% 100,
sched_min = sched_dep_time %% 100,
sched_dep_time = sched_hour + sched_min / 60
) %>%
ggplot() +
geom_boxplot(mapping = aes(y = sched_dep_time, x = cancelled))
```
#### Exercise 7\.5\.1\.2
What variable in the diamonds dataset is most important for predicting the price of a diamond?
How is that variable correlated with cut?
Why does the combination of those two relationships lead to lower quality diamonds being more expensive?
What are the general relationships of each variable with the price of the diamonds?
I will consider the variables: `carat`, `clarity`, `color`, and `cut`.
I ignore the dimensions of the diamond since `carat` measures size, and thus incorporates most of the information contained in these variables.
Since both `price` and `carat` are continuous variables, I use a scatter plot to visualize their relationship.
```
ggplot(diamonds, aes(x = carat, y = price)) +
geom_point()
```
However, since there is a large number of points in the data, I will use a boxplot by binning `carat`, as suggested in the chapter:
```
ggplot(data = diamonds, mapping = aes(x = carat, y = price)) +
geom_boxplot(mapping = aes(group = cut_width(carat, 0.1)), orientation = "x")
```
Note that the choice of the binning width is important, as if it were too large it would obscure any relationship, and if it were too small, the values in the bins could be too variable to reveal underlying trends.
Version 3\.3\.0 of ggplot2 introduced changes to boxplots that may affect the orientation.
> This geom treats each axis differently and, thus, can thus have two orientations.
> Often the orientation is easy to deduce from a combination of the given mappings and the types of positional scales in use.
> Thus, ggplot2 will by default try to guess which orientation the layer should have. Under rare circumstances, the orientation is ambiguous and guessing may fail
If you are getting something different with your code, check the version of ggplot2\.
Use `orientation = "x"` (vertical boxplots) or `orientation = "y"` (horizontal boxplots) to explicitly specify how the geom should treat these axes.
The variables `color` and `clarity` are ordered categorical variables.
The chapter suggests visualizing a categorical and continuous variable using frequency polygons or boxplots.
In this case, I will use a box plot since it will better show a relationship between the variables.
There is a weak negative relationship between `color` and `price`.
The scale of diamond color goes from D (best) to J (worst).
Currently, the levels of `diamonds$color` are in the wrong order.
Before plotting, I will reverse the order of the `color` levels so they will be in increasing order of quality on the x\-axis.
The `color` column is an example of a factor variable, which is covered in the
“[Factors](https://r4ds.had.co.nz/factors.html)” chapter of *R4DS*.
```
diamonds %>%
mutate(color = fct_rev(color)) %>%
ggplot(aes(x = color, y = price)) +
geom_boxplot()
```
There is also weak negative relationship between `clarity` and `price`.
The scale of clarity goes from I1 (worst) to IF (best).
```
ggplot(data = diamonds) +
geom_boxplot(mapping = aes(x = clarity, y = price))
```
For both `clarity` and `color`, there is a much larger amount of variation within each category than between categories.
Carat is clearly the single best predictor of diamond prices.
Now that we have established that carat appears to be the best predictor of price, what is the relationship between it and cut?
Since this is an example of a continuous (carat) and categorical (cut) variable, it can be visualized with a box plot.
```
ggplot(diamonds, aes(x = cut, y = carat)) +
geom_boxplot()
```
There is a lot of variability in the distribution of carat sizes within each cut category.
There is a slight negative relationship between carat and cut.
Noticeably, the largest carat diamonds have a cut of “Fair” (the lowest).
This negative relationship can be due to the way in which diamonds are selected for sale.
A larger diamond can be profitably sold with a lower quality cut, while a smaller diamond requires a better cut.
#### Exercise 7\.5\.1\.3
Install the ggstance package, and create a horizontal box plot.
How does this compare to using `coord_flip()`?
Earlier, we created this horizontal box plot of the distribution of `hwy` by `class`, using `geom_boxplot()` and `coord_flip()`:
```
ggplot(data = mpg) +
geom_boxplot(mapping = aes(x = reorder(class, hwy, FUN = median), y = hwy)) +
coord_flip()
```
In this case the output looks the same, but `x` and `y` aesthetics are flipped.
```
library("ggstance")
ggplot(data = mpg) +
geom_boxploth(mapping = aes(y = reorder(class, hwy, FUN = median), x = hwy))
```
Current versions of ggplot2 (since [version 3\.3\.0](https://ggplot2.tidyverse.org/news/index.html#new-features)) do not require `coord_flip()`.
All geoms can choose the direction.
The direction is inferred from the aesthetic mapping.
In this case, switching `x` and `y` produces a horizontal boxplot.
```
ggplot(data = mpg) +
geom_boxplot(mapping = aes(y = reorder(class, hwy, FUN = median), x = hwy))
```
The `orientation` argument is used to explicitly specify the axis orientation of the plot.
```
ggplot(data = mpg) +
geom_boxplot(mapping = aes(y = reorder(class, hwy, FUN = median), x = hwy), orientation = "y")
```
#### Exercise 7\.5\.1\.4
One problem with box plots is that they were developed in an era of much smaller datasets and tend to display a prohibitively large number of “outlying values”.
One approach to remedy this problem is the letter value plot.
Install the lvplot package, and try using `geom_lv()` to display the distribution of price vs cut.
What do you learn?
How do you interpret the plots?
Like box\-plots, the boxes of the letter\-value plot correspond to quantiles. However, they incorporate
far more quantiles than box\-plots. They are useful for larger datasets because,
1. larger datasets can give precise estimates of quantiles beyond the quartiles, and
2. in expectation, larger datasets should have more outliers (in absolute numbers).
```
ggplot(diamonds, aes(x = cut, y = price)) +
geom_lv()
```
The letter\-value plot is described in Hofmann, Wickham, and Kafadar ([2017](#ref-HofmannWickhamKafadar2017)).
#### Exercise 7\.5\.1\.5
Compare and contrast `geom_violin()` with a faceted `geom_histogram()`, or a colored `geom_freqpoly()`.
What are the pros and cons of each method?
I produce plots for these three methods below. The `geom_freqpoly()` is better
for look\-up: meaning that given a price, it is easy to tell which `cut` has the
highest density. However, the overlapping lines make it difficult to distinguish how the overall distributions relate to each other.
The `geom_violin()` and faceted `geom_histogram()` have similar strengths and
weaknesses.
It is easy to visually distinguish differences in the overall shape of the
distributions (skewness, central values, variance, etc).
However, since we can’t easily compare the vertical values of the distribution,
it is difficult to look up which category has the highest density for a given price.
All of these methods depend on tuning parameters to determine the level of
smoothness of the distribution.
```
ggplot(data = diamonds, mapping = aes(x = price, y = ..density..)) +
geom_freqpoly(mapping = aes(color = cut), binwidth = 500)
```
```
ggplot(data = diamonds, mapping = aes(x = price)) +
geom_histogram() +
facet_wrap(~cut, ncol = 1, scales = "free_y")
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
```
ggplot(data = diamonds, mapping = aes(x = cut, y = price)) +
geom_violin() +
coord_flip()
```
The violin plot was first described in Hintze and Nelson ([1998](#ref-HintzeNelson1998)).
#### Exercise 7\.5\.1\.6
If you have a small dataset, it’s sometimes useful to use `geom_jitter()` to see the relationship between a continuous and categorical variable.
The ggbeeswarm package provides a number of methods similar to `geom_jitter()`.
List them and briefly describe what each one does.
There are two methods:
* `geom_quasirandom()` produces plots that are a mix of jitter and violin plots. There are several different methods that determine exactly how the random location of the points is generated.
* `geom_beeswarm()` produces a plot similar to a violin plot, but by offsetting the points.
I’ll use the `mpg` box plot example. Since these methods display individual points, they are better suited for smaller datasets.
```
ggplot(data = mpg) +
geom_quasirandom(mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
))
```
```
ggplot(data = mpg) +
geom_quasirandom(
mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
),
method = "tukey"
)
```
```
ggplot(data = mpg) +
geom_quasirandom(
mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
),
method = "tukeyDense"
)
```
```
ggplot(data = mpg) +
geom_quasirandom(
mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
),
method = "frowney"
)
```
```
ggplot(data = mpg) +
geom_quasirandom(
mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
),
method = "smiley"
)
```
```
ggplot(data = mpg) +
geom_beeswarm(mapping = aes(
x = reorder(class, hwy, FUN = median),
y = hwy
))
```
### 7\.5\.2 Two categorical variables
#### Exercise 7\.5\.2\.1
How could you rescale the count dataset above to more clearly show the distribution of cut within color, or color within cut?
To clearly show the distribution of `cut` within `color`, calculate a new variable `prop` which is the proportion of each cut within a `color`.
This is done using a grouped mutate.
```
diamonds %>%
count(color, cut) %>%
group_by(color) %>%
mutate(prop = n / sum(n)) %>%
ggplot(mapping = aes(x = color, y = cut)) +
geom_tile(mapping = aes(fill = prop))
```
Similarly, to scale by the distribution of `color` within `cut`,
```
diamonds %>%
count(color, cut) %>%
group_by(cut) %>%
mutate(prop = n / sum(n)) %>%
ggplot(mapping = aes(x = color, y = cut)) +
geom_tile(mapping = aes(fill = prop))
```
I could also add `limits = c(0, 1)` to the fill scale to put the color scale between (0, 1), as in the sketch below.
These are the logical boundaries of proportions.
This makes it possible to compare each cell to its actual value, and would improve comparisons across multiple plots.
However, it ends up limiting the range of colors used and makes it harder to compare values within the dataset.
Using the default limits of the minimum and maximum values instead makes it easier to compare within the dataset by emphasizing relative differences, but harder to compare across datasets.
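For instance, a minimal sketch of adding those limits to the fill scale (in ggplot2 the argument is `limits`):
```
diamonds %>%
  count(color, cut) %>%
  group_by(color) %>%
  mutate(prop = n / sum(n)) %>%
  ggplot(mapping = aes(x = color, y = cut)) +
  geom_tile(mapping = aes(fill = prop)) +
  # fix the fill scale to the logical range of a proportion
  scale_fill_continuous(limits = c(0, 1))
```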
#### Exercise 7\.5\.2\.2
Use `geom_tile()` together with dplyr to explore how average flight delays vary by destination and month of year.
What makes the plot difficult to read?
How could you improve it?
```
flights %>%
group_by(month, dest) %>%
summarise(dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
ggplot(aes(x = factor(month), y = dest, fill = dep_delay)) +
geom_tile() +
labs(x = "Month", y = "Destination", fill = "Departure Delay")
#> `summarise()` regrouping output by 'month' (override with `.groups` argument)
```
There are several things that could be done to improve it,
* sort destinations by a meaningful quantity (distance, number of flights, average delay)
* remove missing values
How to treat missing values is difficult.
In this case, missing values correspond to airports which don’t have regular flights (at least one flight each month) from NYC.
These are likely smaller airports (with higher variance in their average due to fewer observations).
When we group all pairs of (`month`, `dest`) again by dest, we should have a total count of 12 (one for each month) per group (`dest`).
This makes it easy to filter.
```
flights %>%
group_by(month, dest) %>% # This gives us (month, dest) pairs
summarise(dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
group_by(dest) %>% # group all (month, dest) pairs by dest ..
filter(n() == 12) %>% # and only select those that have one entry per month
ungroup() %>%
mutate(dest = reorder(dest, dep_delay)) %>%
ggplot(aes(x = factor(month), y = dest, fill = dep_delay)) +
geom_tile() +
labs(x = "Month", y = "Destination", fill = "Departure Delay")
#> `summarise()` regrouping output by 'month' (override with `.groups` argument)
```
#### Exercise 7\.5\.2\.3
Why is it slightly better to use `aes(x = color, y = cut)` rather than `aes(x = cut, y = color)` in the example above?
It’s usually better to use the categorical variable with a larger number of categories or the longer labels on the y axis.
If at all possible, labels should be horizontal because that is easier to read.
However, switching the order doesn’t result in overlapping labels.
```
diamonds %>%
count(color, cut) %>%
ggplot(mapping = aes(y = color, x = cut)) +
geom_tile(mapping = aes(fill = n))
```
Another justification for switching the order is that the larger numbers are at the top when `x = color` and `y = cut`, and that lowers the cognitive burden of interpreting the plot.
7\.6 Patterns and models
------------------------
No exercises
7\.7 ggplot2 calls
------------------
No exercises
7\.8 Learning more
------------------
No exercises
| Data Science |
jrnold.github.io | https://jrnold.github.io/r4ds-exercise-solutions/tibbles.html |
10 Tibbles
==========
```
library("tidyverse")
```
Exercise 10\.1
--------------
How can you tell if an object is a tibble? (Hint: try printing `mtcars`, which is a regular data frame).
When we print `mtcars`, it prints all the columns.
```
mtcars
#> mpg cyl disp hp drat wt qsec vs am gear carb
#> Mazda RX4 21.0 6 160.0 110 3.90 2.62 16.5 0 1 4 4
#> Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.88 17.0 0 1 4 4
#> Datsun 710 22.8 4 108.0 93 3.85 2.32 18.6 1 1 4 1
#> Hornet 4 Drive 21.4 6 258.0 110 3.08 3.21 19.4 1 0 3 1
#> Hornet Sportabout 18.7 8 360.0 175 3.15 3.44 17.0 0 0 3 2
#> Valiant 18.1 6 225.0 105 2.76 3.46 20.2 1 0 3 1
#> Duster 360 14.3 8 360.0 245 3.21 3.57 15.8 0 0 3 4
#> Merc 240D 24.4 4 146.7 62 3.69 3.19 20.0 1 0 4 2
#> Merc 230 22.8 4 140.8 95 3.92 3.15 22.9 1 0 4 2
#> Merc 280 19.2 6 167.6 123 3.92 3.44 18.3 1 0 4 4
#> Merc 280C 17.8 6 167.6 123 3.92 3.44 18.9 1 0 4 4
#> Merc 450SE 16.4 8 275.8 180 3.07 4.07 17.4 0 0 3 3
#> Merc 450SL 17.3 8 275.8 180 3.07 3.73 17.6 0 0 3 3
#> Merc 450SLC 15.2 8 275.8 180 3.07 3.78 18.0 0 0 3 3
#> Cadillac Fleetwood 10.4 8 472.0 205 2.93 5.25 18.0 0 0 3 4
#> Lincoln Continental 10.4 8 460.0 215 3.00 5.42 17.8 0 0 3 4
#> Chrysler Imperial 14.7 8 440.0 230 3.23 5.34 17.4 0 0 3 4
#> Fiat 128 32.4 4 78.7 66 4.08 2.20 19.5 1 1 4 1
#> Honda Civic 30.4 4 75.7 52 4.93 1.61 18.5 1 1 4 2
#> Toyota Corolla 33.9 4 71.1 65 4.22 1.83 19.9 1 1 4 1
#> Toyota Corona 21.5 4 120.1 97 3.70 2.46 20.0 1 0 3 1
#> Dodge Challenger 15.5 8 318.0 150 2.76 3.52 16.9 0 0 3 2
#> AMC Javelin 15.2 8 304.0 150 3.15 3.44 17.3 0 0 3 2
#> Camaro Z28 13.3 8 350.0 245 3.73 3.84 15.4 0 0 3 4
#> Pontiac Firebird 19.2 8 400.0 175 3.08 3.85 17.1 0 0 3 2
#> Fiat X1-9 27.3 4 79.0 66 4.08 1.94 18.9 1 1 4 1
#> Porsche 914-2 26.0 4 120.3 91 4.43 2.14 16.7 0 1 5 2
#> Lotus Europa 30.4 4 95.1 113 3.77 1.51 16.9 1 1 5 2
#> Ford Pantera L 15.8 8 351.0 264 4.22 3.17 14.5 0 1 5 4
#> Ferrari Dino 19.7 6 145.0 175 3.62 2.77 15.5 0 1 5 6
#> Maserati Bora 15.0 8 301.0 335 3.54 3.57 14.6 0 1 5 8
#> Volvo 142E 21.4 4 121.0 109 4.11 2.78 18.6 1 1 4 2
```
But when we first convert `mtcars` to a tibble using `as_tibble()`, it prints only the first ten observations.
There are also some other differences in formatting of the printed data frame.
It prints the number of rows and columns and the data type of each column.
```
as_tibble(mtcars)
#> # A tibble: 32 x 11
#> mpg cyl disp hp drat wt qsec vs am gear carb
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 21 6 160 110 3.9 2.62 16.5 0 1 4 4
#> 2 21 6 160 110 3.9 2.88 17.0 0 1 4 4
#> 3 22.8 4 108 93 3.85 2.32 18.6 1 1 4 1
#> 4 21.4 6 258 110 3.08 3.22 19.4 1 0 3 1
#> 5 18.7 8 360 175 3.15 3.44 17.0 0 0 3 2
#> 6 18.1 6 225 105 2.76 3.46 20.2 1 0 3 1
#> # … with 26 more rows
```
You can use the function `is_tibble()` to check whether a data frame is a tibble or not.
The `mtcars` data frame is not a tibble.
```
is_tibble(mtcars)
#> [1] FALSE
```
But the `diamonds` and `flights` data are tibbles.
```
is_tibble(ggplot2::diamonds)
#> [1] TRUE
is_tibble(nycflights13::flights)
#> [1] TRUE
is_tibble(as_tibble(mtcars))
#> [1] TRUE
```
More generally, you can use the `class()` function to find out the class of an
object. Tibbles have the classes `c("tbl_df", "tbl", "data.frame")`, while old
data frames will only have the class `"data.frame"`.
```
class(mtcars)
#> [1] "data.frame"
class(ggplot2::diamonds)
#> [1] "tbl_df" "tbl" "data.frame"
class(nycflights13::flights)
#> [1] "tbl_df" "tbl" "data.frame"
```
If you are interested in reading more on R’s classes, read the chapters on
object oriented programming in [Advanced R](http://adv-r.had.co.nz/S3.html).
Exercise 10\.2
--------------
Compare and contrast the following operations on a `data.frame` and equivalent tibble. What is different? Why might the default data frame behaviors cause you frustration?
```
df <- data.frame(abc = 1, xyz = "a")
df$x
#> [1] "a"
df[, "xyz"]
#> [1] "a"
df[, c("abc", "xyz")]
#> abc xyz
#> 1 1 a
```
```
tbl <- as_tibble(df)
tbl$x
#> Warning: Unknown or uninitialised column: `x`.
#> NULL
tbl[, "xyz"]
#> # A tibble: 1 x 1
#> xyz
#> <chr>
#> 1 a
tbl[, c("abc", "xyz")]
#> # A tibble: 1 x 2
#> abc xyz
#> <dbl> <chr>
#> 1 1 a
```
The `$` operator will match any column name that starts with the name following it.
Since there is a column named `xyz`, the expression `df$x` will be expanded to `df$xyz`.
This behavior of the `$` operator saves a few keystrokes, but it can result in accidentally using a different column than you thought you were using.
With data.frames, the type of object that `[` returns depends on the number of
columns selected. If one column is selected, it won’t return a data.frame, but
instead will return a vector. With more than one column, it will return a
data.frame. This is fine if you know what you are passing in, but suppose you
did `df[, vars]` where `vars` was a variable. Then what that code does
depends on `length(vars)`, and you’d have to write code to account for those
situations or risk bugs, as the sketch below illustrates.
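To make the `length(vars)` issue concrete, here is a small sketch reusing the `df` and `tbl` objects defined above:
```
vars <- c("abc", "xyz")
# With a data frame, the class of the result depends on how many columns are selected
class(df[, vars])
#> [1] "data.frame"
class(df[, vars[1]])
#> [1] "numeric"
# A tibble always returns a tibble, regardless of the number of columns
class(tbl[, vars[1]])
#> [1] "tbl_df"     "tbl"        "data.frame"
```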
Exercise 10\.3
--------------
If you have the name of a variable stored in an object, e.g. `var <- "mpg"`, how can you extract the reference variable from a tibble?
You can use the double bracket, like `df[[var]]`. You cannot use the dollar sign, because `df$var` would look for a column named `var`.
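For instance, a minimal sketch using `mtcars` converted to a tibble (`tb` is just an illustrative name):
```
var <- "mpg"
tb <- as_tibble(mtcars)
# `[[` looks up the column named by the value stored in `var`
head(tb[[var]])
#> [1] 21.0 21.0 22.8 21.4 18.7 18.1
# `$` instead looks for a column literally called "var"
tb$var
#> Warning: Unknown or uninitialised column: `var`.
#> NULL
```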
Exercise 10\.4
--------------
Practice referring to non\-syntactic names in the following data frame by:
1. Extracting the variable called 1\.
2. Plotting a scatterplot of 1 vs 2\.
3. Creating a new column called 3 which is 2 divided by 1\.
4. Renaming the columns to one, two and three.
For this example, I’ll create a dataset called annoying with
columns named `1` and `2`.
```
annoying <- tibble(
`1` = 1:10,
`2` = `1` * 2 + rnorm(length(`1`))
)
```
1. To extract the variable named `1`:
```
annoying[["1"]]
#> [1] 1 2 3 4 5 6 7 8 9 10
```
or
```
annoying$`1`
#> [1] 1 2 3 4 5 6 7 8 9 10
```
2. To create a scatter plot of `1` vs. `2`:
```
ggplot(annoying, aes(x = `1`, y = `2`)) +
geom_point()
```
3. To add a new column `3` which is `2` divided by `1`:
```
mutate(annoying, `3` = `2` / `1`)
#> # A tibble: 10 x 3
#> `1` `2` `3`
#> <int> <dbl> <dbl>
#> 1 1 0.600 0.600
#> 2 2 4.26 2.13
#> 3 3 3.56 1.19
#> 4 4 7.99 2.00
#> 5 5 10.6 2.12
#> 6 6 13.1 2.19
#> # … with 4 more rows
```
or
```
annoying[["3"]] <- annoying$`2` / annoying$`1`
```
or
```
annoying[["3"]] <- annoying[["2"]] / annoying[["1"]]
```
4. To rename the columns to `one`, `two`, and `three`, run:
```
annoying <- rename(annoying, one = `1`, two = `2`, three = `3`)
glimpse(annoying)
#> Rows: 10
#> Columns: 3
#> $ one <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
#> $ two <dbl> 0.60, 4.26, 3.56, 7.99, 10.62, 13.15, 12.18, 15.75, 17.76, 19.72
#> $ three <dbl> 0.60, 2.13, 1.19, 2.00, 2.12, 2.19, 1.74, 1.97, 1.97, 1.97
```
Exercise 10\.5
--------------
What does `tibble::enframe()` do? When might you use it?
The function `tibble::enframe()` converts a named vector into a data frame with two columns, `name` and `value`.
```
enframe(c(a = 1, b = 2, c = 3))
#> # A tibble: 3 x 2
#> name value
#> <chr> <dbl>
#> 1 a 1
#> 2 b 2
#> 3 c 3
```
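It is useful whenever a function returns a named vector that you want to use with dplyr or ggplot2 functions. A small sketch:
```
# Convert a named vector of column means into a two-column tibble
mtcars %>%
  colMeans() %>%
  enframe(name = "variable", value = "mean")
```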
Exercise 10\.6
--------------
What option controls how many additional column names are printed at the footer of a tibble?
The `print()` method for tibble objects is documented in `?print.tbl`.
The `n_extra` argument determines the number of extra columns to print information for.
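For example, a sketch that limits the footer to two extra columns (in newer versions of tibble this argument may instead be called `max_extra_cols`):
```
# Print 2 rows of flights at a narrow width so that several columns do not fit;
# list at most 2 of them by name in the footer
print(nycflights13::flights, n = 2, width = 50, n_extra = 2)
```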
| Data Science |
jrnold.github.io | https://jrnold.github.io/r4ds-exercise-solutions/data-import.html |
11 Data import
==============
11\.1 Introduction
------------------
```
library("tidyverse")
```
11\.2 Getting started
---------------------
### Exercise 11\.2\.1
What function would you use to read a file where fields were separated with “\|”?
Use the `read_delim()` function with the argument `delim="|"`.
```
read_delim(file, delim = "|")
```
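For example, with a small piece of inline pipe\-delimited data:
```
read_delim("a|b\n1|2", delim = "|")
```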
### Exercise 11\.2\.2
Apart from `file`, `skip`, and `comment`, what other arguments do `read_csv()` and `read_tsv()` have in common?
They have the following arguments in common:
```
intersect(names(formals(read_csv)), names(formals(read_tsv)))
#> [1] "file" "col_names" "col_types" "locale"
#> [5] "na" "quoted_na" "quote" "comment"
#> [9] "trim_ws" "skip" "n_max" "guess_max"
#> [13] "progress" "skip_empty_rows"
```
* `col_names` and `col_types` are used to specify the column names and how to parse the columns
* `locale` is important for determining things like the encoding and whether “.” or “,” is used as a decimal mark.
* `na` and `quoted_na` control which strings are treated as missing values when parsing vectors
* `trim_ws` trims whitespace before and after cells before parsing
* `n_max` sets how many rows to read
* `guess_max` sets how many rows to use when guessing the column type
* `progress` determines whether a progress bar is shown.
In fact, the two functions have the exact same arguments:
```
identical(names(formals(read_csv)), names(formals(read_tsv)))
#> [1] TRUE
```
### Exercise 11\.2\.3
What are the most important arguments to `read_fwf()`?
The most important argument to `read_fwf()`, which reads “fixed\-width formats”, is `col_positions`, which tells the function where data columns begin and end.
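A minimal sketch, assuming the sample fixed\-width file that ships with readr and the column widths used in its documentation:
```
# fwf_widths() builds the col_positions specification from column widths and names
fwf_sample <- readr_example("fwf-sample.txt")
read_fwf(fwf_sample, col_positions = fwf_widths(c(20, 10, 12), c("name", "state", "ssn")))
```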
### Exercise 11\.2\.4
Sometimes strings in a CSV file contain commas.
To prevent them from causing problems they need to be surrounded by a quoting character, like `"` or `'`.
By convention, `read_csv()` assumes that the quoting character will be `"`, and if you want to change it you’ll need to use `read_delim()` instead.
What arguments do you need to specify to read the following text into a data frame?
```
"x,y\n1,'a,b'"
```
For `read_delim()`, we will need to specify a delimiter, in this case `","`, and a quote argument.
```
x <- "x,y\n1,'a,b'"
read_delim(x, ",", quote = "'")
#> # A tibble: 1 x 2
#> x y
#> <dbl> <chr>
#> 1 1 a,b
```
However, this question is out of date. `read_csv()` now supports a quote argument, so the following code works.
```
read_csv(x, quote = "'")
#> # A tibble: 1 x 2
#> x y
#> <dbl> <chr>
#> 1 1 a,b
```
### Exercise 11\.2\.5
Identify what is wrong with each of the following inline CSV files.
What happens when you run the code?
```
read_csv("a,b\n1,2,3\n4,5,6")
#> Warning: 2 parsing failures.
#> row col expected actual file
#> 1 -- 2 columns 3 columns literal data
#> 2 -- 2 columns 3 columns literal data
#> # A tibble: 2 x 2
#> a b
#> <dbl> <dbl>
#> 1 1 2
#> 2 4 5
```
Only two columns are specified in the header “a” and “b”, but the rows have three columns, so the last column is dropped.
```
read_csv("a,b,c\n1,2\n1,2,3,4")
#> Warning: 2 parsing failures.
#> row col expected actual file
#> 1 -- 3 columns 2 columns literal data
#> 2 -- 3 columns 4 columns literal data
#> # A tibble: 2 x 3
#> a b c
#> <dbl> <dbl> <dbl>
#> 1 1 2 NA
#> 2 1 2 3
```
The numbers of columns in the data do not match the number of columns in the header (three).
In row one, there are only two values, so column `c` is set to missing.
In row two, there is an extra value, and that value is dropped.
```
read_csv("a,b\n\"1")
#> Warning: 2 parsing failures.
#> row col expected actual file
#> 1 a closing quote at end of file literal data
#> 1 -- 2 columns 1 columns literal data
#> # A tibble: 1 x 2
#> a b
#> <dbl> <chr>
#> 1 1 <NA>
```
It’s not clear what the intent was here.
The opening quote `"1` is dropped because it is not closed, and `a` is treated as an integer.
```
read_csv("a,b\n1,2\na,b")
#> # A tibble: 2 x 2
#> a b
#> <chr> <chr>
#> 1 1 2
#> 2 a b
```
Both “a” and “b” are treated as character vectors since they contain non\-numeric strings.
This may have been intentional, or the author may have intended the values of the columns to be “1,2” and “a,b”.
```
read_csv("a;b\n1;3")
#> # A tibble: 1 x 1
#> `a;b`
#> <chr>
#> 1 1;3
```
The values are separated by “;” rather than “,”. Use `read_csv2()` instead:
```
read_csv2("a;b\n1;3")
#> Using ',' as decimal and '.' as grouping mark. Use read_delim() for more control.
#> # A tibble: 1 x 2
#> a b
#> <dbl> <dbl>
#> 1 1 3
```
11\.3 Parsing a vector
----------------------
### Exercise 11\.3\.1
What are the most important arguments to `locale()`?
The locale object has arguments to set the following (a combined example follows this list):
* date and time formats: `date_names`, `date_format`, and `time_format`
* time zone: `tz`
* numbers: `decimal_mark`, `grouping_mark`
* encoding: `encoding`
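For instance, a sketch of a locale suited to German\-style files (the specific values here are illustrative):
```
# Comma as the decimal mark, period as the grouping mark,
# German day/month names, and a central European time zone
locale(
  date_names = "de",
  decimal_mark = ",",
  grouping_mark = ".",
  tz = "Europe/Berlin",
  encoding = "UTF-8"
)
```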
### Exercise 11\.3\.2
What happens if you try and set `decimal_mark` and `grouping_mark` to the same character?
What happens to the default value of `grouping_mark` when you set `decimal_mark` to `","`?
What happens to the default value of `decimal_mark` when you set the `grouping_mark` to `"."`?
If the decimal and grouping marks are set to the same character, `locale` throws an error:
```
locale(decimal_mark = ".", grouping_mark = ".")
#> Error: `decimal_mark` and `grouping_mark` must be different
```
If the `decimal_mark` is set to the comma `","`, then the grouping mark defaults to the period `"."`:
```
locale(decimal_mark = ",")
#> <locale>
#> Numbers: 123.456,78
#> Formats: %AD / %AT
#> Timezone: UTC
#> Encoding: UTF-8
#> <date_names>
#> Days: Sunday (Sun), Monday (Mon), Tuesday (Tue), Wednesday (Wed), Thursday
#> (Thu), Friday (Fri), Saturday (Sat)
#> Months: January (Jan), February (Feb), March (Mar), April (Apr), May (May),
#> June (Jun), July (Jul), August (Aug), September (Sep), October
#> (Oct), November (Nov), December (Dec)
#> AM/PM: AM/PM
```
If the grouping mark is set to a period, then the decimal mark defaults to a comma:
```
locale(grouping_mark = ".")
#> <locale>
#> Numbers: 123.456,78
#> Formats: %AD / %AT
#> Timezone: UTC
#> Encoding: UTF-8
#> <date_names>
#> Days: Sunday (Sun), Monday (Mon), Tuesday (Tue), Wednesday (Wed), Thursday
#> (Thu), Friday (Fri), Saturday (Sat)
#> Months: January (Jan), February (Feb), March (Mar), April (Apr), May (May),
#> June (Jun), July (Jul), August (Aug), September (Sep), October
#> (Oct), November (Nov), December (Dec)
#> AM/PM: AM/PM
```
### Exercise 11\.3\.3
I didn’t discuss the `date_format` and `time_format` options to `locale()`.
What do they do?
Construct an example that shows when they might be useful.
They provide the default date and time formats used when parsing dates and times.
The [readr vignette](https://cran.r-project.org/web/packages/readr/vignettes/locales.html) discusses using these to parse dates, since dates can include language\-specific weekday and month names and different conventions for specifying AM/PM.
```
locale()
#> <locale>
#> Numbers: 123,456.78
#> Formats: %AD / %AT
#> Timezone: UTC
#> Encoding: UTF-8
#> <date_names>
#> Days: Sunday (Sun), Monday (Mon), Tuesday (Tue), Wednesday (Wed), Thursday
#> (Thu), Friday (Fri), Saturday (Sat)
#> Months: January (Jan), February (Feb), March (Mar), April (Apr), May (May),
#> June (Jun), July (Jul), August (Aug), September (Sep), October
#> (Oct), November (Nov), December (Dec)
#> AM/PM: AM/PM
```
Here are examples from the readr vignette of parsing French dates:
```
parse_date("1 janvier 2015", "%d %B %Y", locale = locale("fr"))
#> [1] "2015-01-01"
parse_date("14 oct. 1979", "%d %b %Y", locale = locale("fr"))
#> [1] "1979-10-14"
```
Both the date format and time format are used for guessing column types.
Thus if you were often parsing data that had non\-standard formats for the date and time, you could specify custom values for `date_format` and `time_format`.
```
locale_custom <- locale(date_format = "Day %d Mon %M Year %y",
time_format = "Sec %S Min %M Hour %H")
date_custom <- c("Day 01 Mon 02 Year 03", "Day 03 Mon 01 Year 01")
parse_date(date_custom)
#> Warning: 2 parsing failures.
#> row col expected actual
#> 1 -- date like Day 01 Mon 02 Year 03
#> 2 -- date like Day 03 Mon 01 Year 01
#> [1] NA NA
parse_date(date_custom, locale = locale_custom)
#> [1] "2003-01-01" "2001-01-03"
time_custom <- c("Sec 01 Min 02 Hour 03", "Sec 03 Min 02 Hour 01")
parse_time(time_custom)
#> Warning: 2 parsing failures.
#> row col expected actual
#> 1 -- time like Sec 01 Min 02 Hour 03
#> 2 -- time like Sec 03 Min 02 Hour 01
#> NA
#> NA
parse_time(time_custom, locale = locale_custom)
#> 03:02:01
#> 01:02:03
```
### Exercise 11\.3\.4
If you live outside the US, create a new locale object that encapsulates the settings for the types of file you read most commonly.
Read the help page for `locale()` using `?locale` to learn about the different variables that can be set.
As an example, consider Australia.
Most of the default values are valid, except that the date format is “(d)d/mm/yyyy”, meaning that January 2, 2006 is written as `02/01/2006`.
However, the default locale does not expect this day\-first format: as the output below shows, parsing fails, and a US\-style month\-first reading would interpret it as February 1, 2006\.
```
parse_date("02/01/2006")
#> Warning: 1 parsing failure.
#> row col expected actual
#> 1 -- date like 02/01/2006
#> [1] NA
```
To correctly parse Australian dates, define a new `locale` object.
```
au_locale <- locale(date_format = "%d/%m/%Y")
```
Using `parse_date()` with the `au_locale` as its locale will correctly parse our example date.
```
parse_date("02/01/2006", locale = au_locale)
#> [1] "2006-01-02"
```
### Exercise 11\.3\.5
What’s the difference between `read_csv()` and `read_csv2()`?
The delimiter. The function `read_csv()` uses a comma, while `read_csv2()` uses a semi\-colon (`;`). Using a semi\-colon is useful when commas are used as the decimal point (as in Europe).
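As a minimal illustration (the inline string is a made\-up example), `read_csv2()` both splits fields on semicolons and reads decimal commas:
```
read_csv2("x;y\n1,5;2,0")
# x is parsed as the number 1.5 and y as 2, since the comma is the decimal mark
```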
### Exercise 11\.3\.6
What are the most common encodings used in Europe?
What are the most common encodings used in Asia?
Do some googling to find out.
UTF\-8 is standard now, and ASCII has been around forever.
For the European languages, there are separate legacy encodings for Western European and Eastern European languages written in Latin script, as well as for Cyrillic, Greek, Hebrew, and Turkish, usually with both ISO and Windows encoding standards.
There is also Mac OS Roman.
Arabic and Vietnamese also have ISO and Windows standards, while the other major Asian scripts have encodings of their own:
* Japanese: JIS X 0208, Shift JIS, ISO\-2022\-JP
* Chinese: GB 2312, GBK, GB 18030
* Korean: KS X 1001, EUC\-KR, ISO\-2022\-KR
The documentation for `stringi::stri_enc_detect()` provides a good list, since it covers the most common encodings.
* Western European Latin script languages: ISO\-8859\-1, Windows\-1252 (also called code page CP\-1252)
* Eastern European Latin script languages: ISO\-8859\-2, Windows\-1250 (CP\-1250)
* Greek: ISO\-8859\-7
* Turkish: ISO\-8859\-9, Windows\-1254
* Hebrew: ISO\-8859\-8, IBM424, Windows\-1255
* Russian: Windows\-1251
* Japanese: Shift JIS, ISO\-2022\-JP, EUC\-JP
* Korean: ISO\-2022\-KR, EUC\-KR
* Chinese: GB18030, ISO\-2022\-CN (Simplified), Big5 (Traditional)
* Arabic: ISO\-8859\-6, IBM420, Windows\-1256
For more information on character encodings see the following sources.
* The Wikipedia page [Character encoding](https://en.wikipedia.org/wiki/Character_encoding), has a good list of encodings.
* Unicode [CLDR](http://cldr.unicode.org/) project
* [What is the most common encoding of each language](https://stackoverflow.com/questions/8509339/what-is-the-most-common-encoding-of-each-language) (Stack Overflow)
* “What Every Programmer Absolutely, Positively Needs To Know About Encodings And Character Sets To Work With Text”, <http://kunststube.net/encoding/>.
Programs that identify the encoding of text include:
* `readr::guess_encoding()`
* `stringi::stri_enc_detect()`
* [iconv](https://en.wikipedia.org/wiki/Iconv)
* [chardet](https://github.com/chardet/chardet) (Python)
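For instance, `readr::guess_encoding()` can be pointed at raw bytes or a file path; here is a small sketch, with a string constructed purely for illustration:
```
x <- iconv("El Niño was particularly bad this year", to = "latin1")
guess_encoding(charToRaw(x))
# returns a tibble of candidate encodings with a confidence for each
```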
### Exercise 11\.3\.7
Generate the correct format string to parse each of the following dates and times:
```
d1 <- "January 1, 2010"
d2 <- "2015-Mar-07"
d3 <- "06-Jun-2017"
d4 <- c("August 19 (2015)", "July 1 (2015)")
d5 <- "12/30/14" # Dec 30, 2014
t1 <- "1705"
t2 <- "11:15:10.12 PM"
```
The correct formats are:
```
parse_date(d1, "%B %d, %Y")
#> [1] "2010-01-01"
parse_date(d2, "%Y-%b-%d")
#> [1] "2015-03-07"
parse_date(d3, "%d-%b-%Y")
#> [1] "2017-06-06"
parse_date(d4, "%B %d (%Y)")
#> [1] "2015-08-19" "2015-07-01"
parse_date(d5, "%m/%d/%y")
#> [1] "2014-12-30"
parse_time(t1, "%H%M")
#> 17:05:00
```
The time `t2` has fractional seconds, so it needs the `%OS` (real seconds) format rather than `%S`:
```
parse_time(t2, "%H:%M:%OS %p")
#> 23:15:10.12
```
11\.4 Parsing a file
--------------------
No exercises
11\.5 Writing to a file
-----------------------
No exercises
11\.6 Other types of data
-------------------------
No exercises
| Data Science |
jrnold.github.io | https://jrnold.github.io/r4ds-exercise-solutions/tidy-data.html |
12 Tidy data
============
12\.1 Introduction
------------------
```
library("tidyverse")
```
12\.2 Tidy data
---------------
### Exercise 12\.2\.1
Using prose, describe how the variables and observations are organized in each of the sample tables.
In table `table1`, each row represents a (country, year) combination.
The columns `cases` and `population` contain the values for those variables.
```
table1
#> # A tibble: 6 x 4
#> country year cases population
#> <chr> <int> <int> <int>
#> 1 Afghanistan 1999 745 19987071
#> 2 Afghanistan 2000 2666 20595360
#> 3 Brazil 1999 37737 172006362
#> 4 Brazil 2000 80488 174504898
#> 5 China 1999 212258 1272915272
#> 6 China 2000 213766 1280428583
```
In `table2`, each row represents a (country, year, variable) combination.
The column `count` contains the values of variables `cases` and `population` in separate rows.
```
table2
#> # A tibble: 12 x 4
#> country year type count
#> <chr> <int> <chr> <int>
#> 1 Afghanistan 1999 cases 745
#> 2 Afghanistan 1999 population 19987071
#> 3 Afghanistan 2000 cases 2666
#> 4 Afghanistan 2000 population 20595360
#> 5 Brazil 1999 cases 37737
#> 6 Brazil 1999 population 172006362
#> # … with 6 more rows
```
In `table3`, each row represents a (country, year) combination.
The column `rate` provides the values of both `cases` and `population` in a string formatted like `cases / population`.
```
table3
#> # A tibble: 6 x 3
#> country year rate
#> * <chr> <int> <chr>
#> 1 Afghanistan 1999 745/19987071
#> 2 Afghanistan 2000 2666/20595360
#> 3 Brazil 1999 37737/172006362
#> 4 Brazil 2000 80488/174504898
#> 5 China 1999 212258/1272915272
#> 6 China 2000 213766/1280428583
```
Table 4 is split into two tables, one table for each variable.
The table `table4a` contains the values of cases and `table4b` contains the values of population.
Within each table, each row represents a country, each column represents a year, and the cells are the value of the table’s variable for that country and year.
```
table4a
#> # A tibble: 3 x 3
#> country `1999` `2000`
#> * <chr> <int> <int>
#> 1 Afghanistan 745 2666
#> 2 Brazil 37737 80488
#> 3 China 212258 213766
```
```
table4b
#> # A tibble: 3 x 3
#> country `1999` `2000`
#> * <chr> <int> <int>
#> 1 Afghanistan 19987071 20595360
#> 2 Brazil 172006362 174504898
#> 3 China 1272915272 1280428583
```
### Exercise 12\.2\.2
Compute the `rate` for `table2`, and `table4a` \+ `table4b`.
You will need to perform four operations:
1. Extract the number of TB cases per country per year.
2. Extract the matching population per country per year.
3. Divide cases by population, and multiply by 10000\.
4. Store back in the appropriate place.
Which representation is easiest to work with?
Which is hardest?
Why?
To calculate cases per person, we need to divide cases by population for each country and year.
This is easiest if the cases and population variables are two columns in a data frame in which rows represent (country, year) combinations.
Table 2: First, create separate tables for cases and population and ensure that they are sorted in the same order.
```
t2_cases <- filter(table2, type == "cases") %>%
rename(cases = count) %>%
arrange(country, year)
t2_population <- filter(table2, type == "population") %>%
rename(population = count) %>%
arrange(country, year)
```
Then create a new data frame with the population and cases columns, and calculate the cases per capita in a new column.
```
t2_cases_per_cap <- tibble(
year = t2_cases$year,
country = t2_cases$country,
cases = t2_cases$cases,
population = t2_population$population
) %>%
mutate(cases_per_cap = (cases / population) * 10000) %>%
select(country, year, cases_per_cap)
```
To store this new variable in the appropriate location, we will add new rows to `table2`.
```
t2_cases_per_cap <- t2_cases_per_cap %>%
mutate(type = "cases_per_cap") %>%
rename(count = cases_per_cap)
```
```
bind_rows(table2, t2_cases_per_cap) %>%
arrange(country, year, type, count)
#> # A tibble: 18 x 4
#> country year type count
#> <chr> <int> <chr> <dbl>
#> 1 Afghanistan 1999 cases 745
#> 2 Afghanistan 1999 cases_per_cap 0.373
#> 3 Afghanistan 1999 population 19987071
#> 4 Afghanistan 2000 cases 2666
#> 5 Afghanistan 2000 cases_per_cap 1.29
#> 6 Afghanistan 2000 population 20595360
#> # … with 12 more rows
```
Note that after adding the `cases_per_cap` rows, the type of `count` is coerced to `numeric` (double) because `cases_per_cap` is not an integer.
For `table4a` and `table4b`, create a new table for cases per capita, which we’ll name `table4c`, with country rows and year columns.
```
table4c <-
tibble(
country = table4a$country,
`1999` = table4a[["1999"]] / table4b[["1999"]] * 10000,
`2000` = table4a[["2000"]] / table4b[["2000"]] * 10000
)
table4c
#> # A tibble: 3 x 3
#> country `1999` `2000`
#> <chr> <dbl> <dbl>
#> 1 Afghanistan 0.373 1.29
#> 2 Brazil 2.19 4.61
#> 3 China 1.67 1.67
```
Neither representation is particularly easy to work with.
Because `table2` has separate rows for cases and population, we first had to build a table with cases and population as columns before we could calculate cases per capita.
`table4a` and `table4b` split the cases and population variables into different tables, which made the division itself easy, but we had to repeat the calculation for each year column.
The ideal format of a data frame to answer this question is one with columns `country`, `year`, `cases`, and `population`.
Then the problem could be answered with a single `mutate()` call, as sketched below.
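For comparison, `table1` (the built\-in tidyr example table) already has this tidy layout, so a minimal sketch of the one\-step calculation is:
```
table1 %>%
  mutate(cases_per_cap = (cases / population) * 10000)
```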
### Exercise 12\.2\.3
Recreate the plot showing change in cases over time using `table2` instead of `table1`.
What do you need to do first?
Before creating the plot showing the change in cases over time, we need to filter `table2` to only include rows representing cases of TB.
```
table2 %>%
filter(type == "cases") %>%
ggplot(aes(year, count)) +
geom_line(aes(group = country), colour = "grey50") +
geom_point(aes(colour = country)) +
scale_x_continuous(breaks = unique(table2$year)) +
ylab("cases")
```
12\.3 Pivoting
--------------
This code is reproduced from the chapter because it is needed by the exercises.
```
tidy4a <- table4a %>%
pivot_longer(c(`1999`, `2000`), names_to = "year", values_to = "cases")
tidy4b <- table4b %>%
pivot_longer(c(`1999`, `2000`), names_to = "year", values_to = "population")
```
### Exercise 12\.3\.1
Why are `pivot_longer()` and `pivot_wider()` not perfectly symmetrical?
Carefully consider the following example:
```
stocks <- tibble(
year = c(2015, 2015, 2016, 2016),
half = c( 1, 2, 1, 2),
return = c(1.88, 0.59, 0.92, 0.17)
)
stocks %>%
pivot_wider(names_from = year, values_from = return) %>%
pivot_longer(`2015`:`2016`, names_to = "year", values_to = "return")
#> # A tibble: 4 x 3
#> half year return
#> <dbl> <chr> <dbl>
#> 1 1 2015 1.88
#> 2 1 2016 0.92
#> 3 2 2015 0.59
#> 4 2 2016 0.17
```
(Hint: look at the variable types and think about column names.)
`pivot_longer()` has a `names_ptype` argument, e.g. `names_ptype = list(year = double())`. What does it do?
The functions `pivot_longer()` and `pivot_wider()` are not perfectly symmetrical because column type information is lost when a data frame is converted from wide to long.
The function `pivot_longer()` stacks multiple columns which may have had multiple data types into a single column with a single data type.
This transformation throws away the individual data types of the original columns.
The function `pivot_wider()` creates column names from the values in a column.
These column names will always be treated as `character` values by `pivot_longer()`, so if the original variable used to create the column names did not have a `character` data type, then the round\-trip will not reproduce the same dataset.
In the provided example, columns have the following data types:
```
glimpse(stocks)
#> Rows: 4
#> Columns: 3
#> $ year <dbl> 2015, 2015, 2016, 2016
#> $ half <dbl> 1, 2, 1, 2
#> $ return <dbl> 1.88, 0.59, 0.92, 0.17
```
The `pivot_wider()` expression pivots the table to create a data frame with years as column names, and the values in `return` as the column values.
```
stocks %>%
pivot_wider(names_from = year, values_from = return)
#> # A tibble: 2 x 3
#> half `2015` `2016`
#> <dbl> <dbl> <dbl>
#> 1 1 1.88 0.92
#> 2 2 0.59 0.17
```
The `pivot_longer()` expression unpivots the table, returning it to a tidy data frame with columns for `half`, `year`, and `return`.
```
stocks %>%
pivot_wider(names_from = year, values_from = return)%>%
pivot_longer(`2015`:`2016`, names_to = "year", values_to = "return")
#> # A tibble: 4 x 3
#> half year return
#> <dbl> <chr> <dbl>
#> 1 1 2015 1.88
#> 2 1 2016 0.92
#> 3 2 2015 0.59
#> 4 2 2016 0.17
```
There is one difference: in the new data frame, `year` has a data type of `character` rather than `numeric`.
The `names_to` column created from column names by `pivot_longer()` is character by default, which is a safe assumption, since column names are always character values.
The original data type of the column that `pivot_wider()` used to create the column names was not stored, so `pivot_longer()` has no way of knowing that the column names in this case should be numeric values.
In the current version of tidyr, the `names_ptype` argument does not convert the `year` column to a numeric vector; instead, it raises an error.
```
stocks %>%
pivot_wider(names_from = year, values_from = return)%>%
pivot_longer(`2015`:`2016`, names_to = "year", values_to = "return",
names_ptype = list(year = double()))
#> Error: Can't convert <character> to <double>.
```
Instead, use the `names_transform` argument to `pivot_longer()`, which provides a function to coerce the column to a different data type.
```
stocks %>%
pivot_wider(names_from = year, values_from = return)%>%
pivot_longer(`2015`:`2016`, names_to = "year", values_to = "return",
names_transform = list(year = as.numeric))
#> # A tibble: 4 x 3
#> half year return
#> <dbl> <dbl> <dbl>
#> 1 1 2015 1.88
#> 2 1 2016 0.92
#> 3 2 2015 0.59
#> 4 2 2016 0.17
```
### Exercise 12\.3\.2
Why does this code fail?
```
table4a %>%
pivot_longer(c(1999, 2000), names_to = "year", values_to = "cases")
#> Error: Can't subset columns that don't exist.
#> ✖ Locations 1999 and 2000 don't exist.
#> ℹ There are only 3 columns.
```
The code fails because the column names `1999` and `2000` are non\-syntactic variable names.
When selecting variables from a data frame, tidyverse functions will interpret numbers, like `1999` and `2000`, as column numbers.
In this case, `pivot_longer()` tries to select the 1999th and 2000th column of the data frame.
To select the columns `1999` and `2000`, the names must be surrounded by backticks or provided as strings.
```
table4a %>%
pivot_longer(c(`1999`, `2000`), names_to = "year", values_to = "cases")
#> # A tibble: 6 x 3
#> country year cases
#> <chr> <chr> <int>
#> 1 Afghanistan 1999 745
#> 2 Afghanistan 2000 2666
#> 3 Brazil 1999 37737
#> 4 Brazil 2000 80488
#> 5 China 1999 212258
#> 6 China 2000 213766
```
```
table4a %>%
pivot_longer(c("1999", "2000"), names_to = "year", values_to = "cases")
#> # A tibble: 6 x 3
#> country year cases
#> <chr> <chr> <int>
#> 1 Afghanistan 1999 745
#> 2 Afghanistan 2000 2666
#> 3 Brazil 1999 37737
#> 4 Brazil 2000 80488
#> 5 China 1999 212258
#> 6 China 2000 213766
```
### Exercise 12\.3\.3
What would happen if you widen this table?
Why?
How could you add a new column to uniquely identify each value?
```
people <- tribble(
~name, ~key, ~value,
#-----------------|--------|------
"Phillip Woods", "age", 45,
"Phillip Woods", "height", 186,
"Phillip Woods", "age", 50,
"Jessica Cordero", "age", 37,
"Jessica Cordero", "height", 156
)
glimpse(people)
#> Rows: 5
#> Columns: 3
#> $ name <chr> "Phillip Woods", "Phillip Woods", "Phillip Woods", "Jessica Cor…
#> $ key <chr> "age", "height", "age", "age", "height"
#> $ value <dbl> 45, 186, 50, 37, 156
```
Widening this data frame using `pivot_wider()` produces columns that are lists of numeric vectors because the `name` and `key` columns do not uniquely identify rows.
In particular, there are two rows with values for the age of “Phillip Woods”.
```
pivot_wider(people, names_from="name", values_from = "value")
#> Warning: Values are not uniquely identified; output will contain list-cols.
#> * Use `values_fn = list` to suppress this warning.
#> * Use `values_fn = length` to identify where the duplicates arise
#> * Use `values_fn = {summary_fun}` to summarise duplicates
#> # A tibble: 2 x 3
#> key `Phillip Woods` `Jessica Cordero`
#> <chr> <list> <list>
#> 1 age <dbl [2]> <dbl [1]>
#> 2 height <dbl [1]> <dbl [1]>
```
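As the warning suggests, a `values_fn` can be supplied to summarise the duplicated values instead of producing list\-columns; the sketch below keeps the maximum value, purely for illustration:
```
pivot_wider(people,
  names_from = "name", values_from = "value",
  values_fn = max
)
# Phillip Woods' two recorded ages (45 and 50) collapse to 50
```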
Alternatively, we could solve the problem by adding a column with a distinct observation number for each combination of name and key.
```
people2 <- people %>%
group_by(name, key) %>%
mutate(obs = row_number())
people2
#> # A tibble: 5 x 4
#> # Groups: name, key [4]
#> name key value obs
#> <chr> <chr> <dbl> <int>
#> 1 Phillip Woods age 45 1
#> 2 Phillip Woods height 186 1
#> 3 Phillip Woods age 50 2
#> 4 Jessica Cordero age 37 1
#> 5 Jessica Cordero height 156 1
```
We can make `people2` wider because the combination of `name` and `obs` will uniquely identify the rows in the wide data frame.
```
pivot_wider(people2, names_from="name", values_from = "value")
#> # A tibble: 3 x 4
#> # Groups: key [2]
#> key obs `Phillip Woods` `Jessica Cordero`
#> <chr> <int> <dbl> <dbl>
#> 1 age 1 45 37
#> 2 height 1 186 156
#> 3 age 2 50 NA
```
Another way to solve this problem is by keeping only distinct rows of the name and key values, and dropping duplicate rows.
```
people %>%
distinct(name, key, .keep_all = TRUE) %>%
pivot_wider(names_from="name", values_from = "value")
#> # A tibble: 2 x 3
#> key `Phillip Woods` `Jessica Cordero`
#> <chr> <dbl> <dbl>
#> 1 age 45 37
#> 2 height 186 156
```
However, before doing this, make sure you understand why the data contains duplicates.
The duplicate values may not be just a nuisance; they may indicate deeper problems with the data.
### Exercise 12\.3\.4
Tidy the simple tibble below. Do you need to make it wider or longer?
What are the variables?
```
preg <- tribble(
~pregnant, ~male, ~female,
"yes", NA, 10,
"no", 20, 12
)
```
To tidy the `preg` table use `pivot_longer()` to create a long table.
The variables in this data are:
* `sex` (“female”, “male”)
* `pregnant` (“yes”, “no”)
* `count`, which is a non\-negative integer representing the number of observations.
The observations in this data are unique combinations of sex and pregnancy status.
```
preg_tidy <- preg %>%
pivot_longer(c(male, female), names_to = "sex", values_to = "count")
preg_tidy
#> # A tibble: 4 x 3
#> pregnant sex count
#> <chr> <chr> <dbl>
#> 1 yes male NA
#> 2 yes female 10
#> 3 no male 20
#> 4 no female 12
```
Remove the (male, pregnant) row with a missing value to simplify the tidied data frame.
```
preg_tidy2 <- preg %>%
pivot_longer(c(male, female), names_to = "sex", values_to = "count", values_drop_na = TRUE)
preg_tidy2
#> # A tibble: 3 x 3
#> pregnant sex count
#> <chr> <chr> <dbl>
#> 1 yes female 10
#> 2 no male 20
#> 3 no female 12
```
This is an example of turning an explicit missing value into an implicit missing value, which is discussed in the upcoming [Missing Values](https://r4ds.had.co.nz/tidy-data.html#missing-values-3) section.
The missing (male, pregnant) row represents an implicit missing value because the value of `count` can be inferred from its absence.
In the tidy data, we can represent rows with missing values of `count` either explicitly with an `NA` (as in `preg_tidy`) or implicitly by the absence of a row (as in `preg_tidy2`).
But in the wide data, the missing values can only be represented explicitly.
Though we have already done enough to make the data tidy, there are some other transformations that can clean the data further.
If a variable takes two values, like `pregnant` and `sex`, it is often preferable to store them as logical vectors.
```
preg_tidy3 <- preg_tidy2 %>%
mutate(
female = sex == "female",
pregnant = pregnant == "yes"
) %>%
select(female, pregnant, count)
preg_tidy3
#> # A tibble: 3 x 3
#> female pregnant count
#> <lgl> <lgl> <dbl>
#> 1 TRUE TRUE 10
#> 2 FALSE FALSE 20
#> 3 TRUE FALSE 12
```
In the previous data frame, I named the logical variable representing the sex `female`, not `sex`.
This makes the meaning of the variable self\-documenting.
If the variable were named `sex` with values `TRUE` and `FALSE`, without reading the documentation, we wouldn’t know whether `TRUE` means male or female.
Apart from some minor memory savings, representing these variables as logical vectors results in clearer and more concise code.
Compare the `filter()` calls to select non\-pregnant females from `preg_tidy2` and `preg_tidy`.
```
filter(preg_tidy2, sex == "female", pregnant == "no")
#> # A tibble: 1 x 3
#> pregnant sex count
#> <chr> <chr> <dbl>
#> 1 no female 12
filter(preg_tidy3, female, !pregnant)
#> # A tibble: 1 x 3
#> female pregnant count
#> <lgl> <lgl> <dbl>
#> 1 TRUE FALSE 12
```
12\.4 Separating and uniting
----------------------------
### Exercise 12\.4\.1
What do the extra and fill arguments do in `separate()`?
Experiment with the various options for the following two toy datasets.
```
tibble(x = c("a,b,c", "d,e,f,g", "h,i,j")) %>%
separate(x, c("one", "two", "three"))
#> Warning: Expected 3 pieces. Additional pieces discarded in 1 rows [2].
#> # A tibble: 3 x 3
#> one two three
#> <chr> <chr> <chr>
#> 1 a b c
#> 2 d e f
#> 3 h i j
tibble(x = c("a,b,c", "d,e", "f,g,i")) %>%
separate(x, c("one", "two", "three"))
#> Warning: Expected 3 pieces. Missing pieces filled with `NA` in 1 rows [2].
#> # A tibble: 3 x 3
#> one two three
#> <chr> <chr> <chr>
#> 1 a b c
#> 2 d e <NA>
#> 3 f g i
```
The `extra` argument tells `separate()` what to do if there are too many pieces, and the `fill` argument tells it what to do if there aren’t enough.
By default, `separate()` drops extra values with a warning.
```
tibble(x = c("a,b,c", "d,e,f,g", "h,i,j")) %>%
separate(x, c("one", "two", "three"))
#> Warning: Expected 3 pieces. Additional pieces discarded in 1 rows [2].
#> # A tibble: 3 x 3
#> one two three
#> <chr> <chr> <chr>
#> 1 a b c
#> 2 d e f
#> 3 h i j
```
Adding the argument, `extra = "drop"`, produces the same result as above but without the warning.
```
tibble(x = c("a,b,c", "d,e,f,g", "h,i,j")) %>%
separate(x, c("one", "two", "three"), extra = "drop")
#> # A tibble: 3 x 3
#> one two three
#> <chr> <chr> <chr>
#> 1 a b c
#> 2 d e f
#> 3 h i j
```
If `extra = "merge"` is set, then the extra values are not split, so `"f,g"` appears in column three.
```
tibble(x = c("a,b,c", "d,e,f,g", "h,i,j")) %>%
separate(x, c("one", "two", "three"), extra = "merge")
#> # A tibble: 3 x 3
#> one two three
#> <chr> <chr> <chr>
#> 1 a b c
#> 2 d e f,g
#> 3 h i j
```
In this example, one of the values, `"d,e"`, has too few elements.
The default behavior of `fill` is analogous to that of `extra`: the missing columns are filled with `NA`, but a warning is emitted.
In this example, the 2nd row of column `three` is `NA`.
```
tibble(x = c("a,b,c", "d,e", "f,g,i")) %>%
separate(x, c("one", "two", "three"))
#> Warning: Expected 3 pieces. Missing pieces filled with `NA` in 1 rows [2].
#> # A tibble: 3 x 3
#> one two three
#> <chr> <chr> <chr>
#> 1 a b c
#> 2 d e <NA>
#> 3 f g i
```
One alternative is `fill = "right"`, which fills with missing values from the right, but without a warning:
```
tibble(x = c("a,b,c", "d,e", "f,g,i")) %>%
separate(x, c("one", "two", "three"), fill = "right")
#> # A tibble: 3 x 3
#> one two three
#> <chr> <chr> <chr>
#> 1 a b c
#> 2 d e <NA>
#> 3 f g i
```
The option `fill = "left"` also fills with missing values without emitting a warning, but this time from the left side.
Now, the 2nd row of column `one` will be missing, and the other values in that row are shifted right.
```
tibble(x = c("a,b,c", "d,e", "f,g,i")) %>%
separate(x, c("one", "two", "three"), fill = "left")
#> # A tibble: 3 x 3
#> one two three
#> <chr> <chr> <chr>
#> 1 a b c
#> 2 <NA> d e
#> 3 f g i
```
### Exercise 12\.4\.2
Both `unite()` and `separate()` have a `remove` argument.
What does it do?
Why would you set it to `FALSE`?
The `remove` argument drops the input columns from the result data frame. You would set it to `FALSE` if you want to create a new variable but keep the old ones, as in the sketch below.
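For example (using a toy tibble constructed here for illustration), `remove = FALSE` keeps `century` and `year` alongside the new united column:
```
tibble(century = c("19", "20"), year = c("99", "00")) %>%
  unite(date, century, year, sep = "", remove = FALSE)
# the united column date ("1999", "2000") is returned together with century and year
```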
### Exercise 12\.4\.3
Compare and contrast `separate()` and `extract()`. Why are there three variations of separation (by position, by separator, and with groups), but only one unite?
The function `separate()` splits a column into multiple columns either by a separator, if the `sep` argument is a character vector, or by character positions, if `sep` is numeric.
```
# example with separators
tibble(x = c("X_1", "X_2", "AA_1", "AA_2")) %>%
separate(x, c("variable", "into"), sep = "_")
#> # A tibble: 4 x 2
#> variable into
#> <chr> <chr>
#> 1 X 1
#> 2 X 2
#> 3 AA 1
#> 4 AA 2
# example with position
tibble(x = c("X1", "X2", "Y1", "Y2")) %>%
separate(x, c("variable", "into"), sep = c(1))
#> # A tibble: 4 x 2
#> variable into
#> <chr> <chr>
#> 1 X 1
#> 2 X 2
#> 3 Y 1
#> 4 Y 2
```
The function `extract()` uses a regular expression to specify groups in a character vector and splits that single character vector into multiple columns.
This is more flexible than `separate()` because it does not require a common separator or specific column positions.
```
# example with separators
tibble(x = c("X_1", "X_2", "AA_1", "AA_2")) %>%
extract(x, c("variable", "id"), regex = "([A-Z])_([0-9])")
#> # A tibble: 4 x 2
#> variable id
#> <chr> <chr>
#> 1 X 1
#> 2 X 2
#> 3 A 1
#> 4 A 2
# example with position
tibble(x = c("X1", "X2", "Y1", "Y2")) %>%
extract(x, c("variable", "id"), regex = "([A-Z])([0-9])")
#> # A tibble: 4 x 2
#> variable id
#> <chr> <chr>
#> 1 X 1
#> 2 X 2
#> 3 Y 1
#> 4 Y 2
# example that separate could not parse
tibble(x = c("X1", "X20", "AA11", "AA2")) %>%
extract(x, c("variable", "id"), regex = "([A-Z]+)([0-9]+)")
#> # A tibble: 4 x 2
#> variable id
#> <chr> <chr>
#> 1 X 1
#> 2 X 20
#> 3 AA 11
#> 4 AA 2
```
Both `separate()` and `extract()` convert a single column to many columns.
However, `unite()` converts many columns to one, with a choice of a separator to include between column values.
```
tibble(variable = c("X", "X", "Y", "Y"), id = c(1, 2, 1, 2)) %>%
unite(x, variable, id, sep = "_")
#> # A tibble: 4 x 1
#> x
#> <chr>
#> 1 X_1
#> 2 X_2
#> 3 Y_1
#> 4 Y_2
```
In other words, with `extract()` and `separate()` only one column can be chosen, but there are many choices of how to split that single column into multiple columns.
With `unite()`, there are many choices as to which columns to include, but only one choice as to how to combine their contents into a single column.
12\.5 Missing values
--------------------
### Exercise 12\.5\.1
Compare and contrast the `fill` arguments to `pivot_wider()` and `complete()`.
The `values_fill` argument of `pivot_wider()` and the `fill` argument of `complete()` both set values to use in place of `NA`.
Both arguments accept named lists that supply a fill value for each column; `values_fill` in `pivot_wider()` additionally accepts a single value applied to every column.
One difference is visible in the examples below: `values_fill` only fills the implicit missing values created by widening, so the explicit `NA` for 2015 quarter 4 is kept, while `complete()` by default replaces both the implicit and the explicit missing values.
For example, this fills the missing values created when widening the data frame with `0` using `values_fill`:
```
stocks <- tibble(
year = c(2015, 2015, 2015, 2015, 2016, 2016, 2016),
qtr = c( 1, 2, 3, 4, 2, 3, 4),
return = c(1.88, 0.59, 0.35, NA, 0.92, 0.17, 2.66)
)
stocks %>%
pivot_wider(names_from = year, values_from = return,
values_fill = 0)
#> # A tibble: 4 x 3
#> qtr `2015` `2016`
#> <dbl> <dbl> <dbl>
#> 1 1 1.88 0
#> 2 2 0.59 0.92
#> 3 3 0.35 0.17
#> 4 4 NA 2.66
```
In contrast, `complete()` with `fill = list(return = 0)` fills every missing `return`, including both the explicit `NA` for 2015 Q4 and the implicit missing 2016 Q1 row:
```
stocks %>%
complete(year, qtr, fill=list(return=0))
#> # A tibble: 8 x 3
#> year qtr return
#> <dbl> <dbl> <dbl>
#> 1 2015 1 1.88
#> 2 2015 2 0.59
#> 3 2015 3 0.35
#> 4 2015 4 0
#> 5 2016 1 0
#> 6 2016 2 0.92
#> # … with 2 more rows
```
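As a small illustration of the named\-list form mentioned above, a sketch reusing the `stocks` tibble defined in the block above (the list name refers to the `values_from` column, `return`):
```
# values_fill can be a named list keyed by the values_from column, which is
# useful when several value columns are pivoted at once; only the cells created
# for missing (qtr, year) combinations are filled with 0.
library(tidyverse)
stocks %>%
  pivot_wider(names_from = year, values_from = return,
              values_fill = list(return = 0))
```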
### Exercise 12\.5\.2
What does the `direction` argument to `fill()` do?
With `fill()`, the `.direction` argument determines whether `NA` values are replaced by the previous non\-missing value (`"down"`, the default) or the next non\-missing value (`"up"`).
Recent versions of tidyr also accept `"downup"` and `"updown"`, which fill in one direction first and then fill any remaining leading or trailing `NA`s in the other direction.
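A minimal sketch of the difference, using a made\-up `sales` tibble (the data are invented purely for illustration):
```
library(tidyverse)
# Two interior values are missing.
sales <- tibble(quarter = 1:4, revenue = c(10, NA, NA, 20))
fill(sales, revenue, .direction = "down")  # the NAs become 10 (carried forward)
fill(sales, revenue, .direction = "up")    # the NAs become 20 (carried backward)
```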
12\.6 Case Study
----------------
This code is repeated from the chapter because it is needed by the exercises.
```
who1 <- who %>%
pivot_longer(
cols = new_sp_m014:newrel_f65,
names_to = "key",
values_to = "cases",
values_drop_na = TRUE
)
who1
#> # A tibble: 76,046 x 6
#> country iso2 iso3 year key cases
#> <chr> <chr> <chr> <int> <chr> <int>
#> 1 Afghanistan AF AFG 1997 new_sp_m014 0
#> 2 Afghanistan AF AFG 1997 new_sp_m1524 10
#> 3 Afghanistan AF AFG 1997 new_sp_m2534 6
#> 4 Afghanistan AF AFG 1997 new_sp_m3544 3
#> 5 Afghanistan AF AFG 1997 new_sp_m4554 5
#> 6 Afghanistan AF AFG 1997 new_sp_m5564 2
#> # … with 76,040 more rows
```
```
who2 <- who1 %>%
mutate(names_from = stringr::str_replace(key, "newrel", "new_rel"))
who2
#> # A tibble: 76,046 x 7
#> country iso2 iso3 year key cases names_from
#> <chr> <chr> <chr> <int> <chr> <int> <chr>
#> 1 Afghanistan AF AFG 1997 new_sp_m014 0 new_sp_m014
#> 2 Afghanistan AF AFG 1997 new_sp_m1524 10 new_sp_m1524
#> 3 Afghanistan AF AFG 1997 new_sp_m2534 6 new_sp_m2534
#> 4 Afghanistan AF AFG 1997 new_sp_m3544 3 new_sp_m3544
#> 5 Afghanistan AF AFG 1997 new_sp_m4554 5 new_sp_m4554
#> 6 Afghanistan AF AFG 1997 new_sp_m5564 2 new_sp_m5564
#> # … with 76,040 more rows
```
```
who3 <- who2 %>%
separate(key, c("new", "type", "sexage"), sep = "_")
#> Warning: Expected 3 pieces. Missing pieces filled with `NA` in 2580 rows [243,
#> 244, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 903,
#> 904, 905, 906, ...].
who3
#> # A tibble: 76,046 x 9
#> country iso2 iso3 year new type sexage cases names_from
#> <chr> <chr> <chr> <int> <chr> <chr> <chr> <int> <chr>
#> 1 Afghanistan AF AFG 1997 new sp m014 0 new_sp_m014
#> 2 Afghanistan AF AFG 1997 new sp m1524 10 new_sp_m1524
#> 3 Afghanistan AF AFG 1997 new sp m2534 6 new_sp_m2534
#> 4 Afghanistan AF AFG 1997 new sp m3544 3 new_sp_m3544
#> 5 Afghanistan AF AFG 1997 new sp m4554 5 new_sp_m4554
#> 6 Afghanistan AF AFG 1997 new sp m5564 2 new_sp_m5564
#> # … with 76,040 more rows
```
```
who3 %>%
count(new)
#> # A tibble: 2 x 2
#> new n
#> <chr> <int>
#> 1 new 73466
#> 2 newrel 2580
```
```
who4 <- who3 %>%
select(-new, -iso2, -iso3)
```
```
who5 <- who4 %>%
separate(sexage, c("sex", "age"), sep = 1)
who5
#> # A tibble: 76,046 x 7
#> country year type sex age cases names_from
#> <chr> <int> <chr> <chr> <chr> <int> <chr>
#> 1 Afghanistan 1997 sp m 014 0 new_sp_m014
#> 2 Afghanistan 1997 sp m 1524 10 new_sp_m1524
#> 3 Afghanistan 1997 sp m 2534 6 new_sp_m2534
#> 4 Afghanistan 1997 sp m 3544 3 new_sp_m3544
#> 5 Afghanistan 1997 sp m 4554 5 new_sp_m4554
#> 6 Afghanistan 1997 sp m 5564 2 new_sp_m5564
#> # … with 76,040 more rows
```
### Exercise 12\.6\.1
In this case study, I set `na.rm = TRUE` just to make it easier to check that we had the correct values.
Is this reasonable?
Think about how missing values are represented in this dataset.
Are there implicit missing values?
What’s the difference between an `NA` and zero?
The reasonableness of using `na.rm = TRUE` depends on how missing values are represented in this dataset.
The main concern is whether a missing value means that there were no cases of TB or whether it means that the WHO does not have data on the number of TB cases.
Here are some things we should look for to help distinguish between these cases.
* If there are no 0 values in the data, then missing values may be used to indicate no cases.
* If there are both explicit and implicit missing values, then it suggests that missing values
are being used differently. In that case, it is likely that explicit missing values would
mean no cases, and implicit missing values would mean no data on the number of cases.
First, I’ll check for the presence of zeros in the data.
```
who1 %>%
filter(cases == 0) %>%
nrow()
#> [1] 11080
```
There are zeros in the data, so it appears that cases of zero TB are explicitly indicated, and the value of `NA` is used to indicate missing data.
Second, I should check whether all values for a (country, year) are missing or whether it is possible for only some columns to be missing.
```
pivot_longer(who, c(new_sp_m014:newrel_f65), names_to = "key", values_to = "cases") %>%
group_by(country, year) %>%
mutate(prop_missing = sum(is.na(cases)) / n()) %>%
filter(prop_missing > 0, prop_missing < 1)
#> # A tibble: 195,104 x 7
#> # Groups: country, year [3,484]
#> country iso2 iso3 year key cases prop_missing
#> <chr> <chr> <chr> <int> <chr> <int> <dbl>
#> 1 Afghanistan AF AFG 1997 new_sp_m014 0 0.75
#> 2 Afghanistan AF AFG 1997 new_sp_m1524 10 0.75
#> 3 Afghanistan AF AFG 1997 new_sp_m2534 6 0.75
#> 4 Afghanistan AF AFG 1997 new_sp_m3544 3 0.75
#> 5 Afghanistan AF AFG 1997 new_sp_m4554 5 0.75
#> 6 Afghanistan AF AFG 1997 new_sp_m5564 2 0.75
#> # … with 195,098 more rows
```
From the results above, it looks like it is possible for a (country, year) row to contain some, but not all, missing values in its columns.
Finally, I will check for implicit missing values.
Implicit missing values are (`year`, `country`) combinations that do not appear in the data.
```
nrow(who)
#> [1] 7240
who %>%
complete(country, year) %>%
nrow()
#> [1] 7446
```
Since the number of complete cases of (`country`, `year`) is greater than the number of rows in `who`, there are some implicit missing values.
But that doesn’t tell us what those implicit missing values are.
To do this, I will use the `anti_join()` function introduced in the later [Relational Data](https://r4ds.had.co.nz/relational-data.html#filtering-joins) chapter.
```
anti_join(complete(who, country, year), who, by = c("country", "year")) %>%
select(country, year) %>%
group_by(country) %>%
# so I can make better sense of the years
summarise(min_year = min(year), max_year = max(year))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 9 x 3
#> country min_year max_year
#> <chr> <int> <int>
#> 1 Bonaire, Saint Eustatius and Saba 1980 2009
#> 2 Curacao 1980 2009
#> 3 Montenegro 1980 2004
#> 4 Netherlands Antilles 2010 2013
#> 5 Serbia 1980 2004
#> 6 Serbia & Montenegro 2005 2013
#> # … with 3 more rows
```
All of these refer to (`country`, `year`) combinations for years in which the country did not exist, either before it was created or after it was dissolved.
For example, Timor\-Leste achieved independence in 2002, so earlier years do not appear in the data, while the Netherlands Antilles was dissolved in 2010, so the years from 2010 onward are the ones missing for it.
To summarize:
* `0` is used to represent no cases of TB.
* Explicit missing values (`NA`s) are used to represent missing data for (`country`, `year`) combinations in which the country existed in that year.
* Implicit missing values are used to represent missing data because a country did not exist in that year.
### Exercise 12\.6\.2
What happens if you neglect the `mutate()` step?
(`mutate(key = str_replace(key, "newrel", "new_rel"))`)
The `separate()` function emits a warning that some values had too few pieces (“Expected 3 pieces. Missing pieces filled with `NA` …”).
If we check the rows for keys beginning with `"newrel"`, we see that `sexage` is missing and the sex\-age code (for example, `m014`) has been placed in the `type` column.
```
who3a <- who1 %>%
separate(key, c("new", "type", "sexage"), sep = "_")
#> Warning: Expected 3 pieces. Missing pieces filled with `NA` in 2580 rows [243,
#> 244, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 903,
#> 904, 905, 906, ...].
filter(who3a, new == "newrel") %>% head()
#> # A tibble: 6 x 8
#> country iso2 iso3 year new type sexage cases
#> <chr> <chr> <chr> <int> <chr> <chr> <chr> <int>
#> 1 Afghanistan AF AFG 2013 newrel m014 <NA> 1705
#> 2 Afghanistan AF AFG 2013 newrel f014 <NA> 1749
#> 3 Albania AL ALB 2013 newrel m014 <NA> 14
#> 4 Albania AL ALB 2013 newrel m1524 <NA> 60
#> 5 Albania AL ALB 2013 newrel m2534 <NA> 61
#> 6 Albania AL ALB 2013 newrel m3544 <NA> 32
```
### Exercise 12\.6\.3
I claimed that `iso2` and `iso3` were redundant with country.
Confirm this claim.
If `iso2` and `iso3` are redundant with `country`, then, within each country,
there should only be one distinct combination of `iso2` and `iso3` values, which is the case.
```
select(who3, country, iso2, iso3) %>%
distinct() %>%
group_by(country) %>%
filter(n() > 1)
#> # A tibble: 0 x 3
#> # Groups: country [0]
#> # … with 3 variables: country <chr>, iso2 <chr>, iso3 <chr>
```
This makes sense, since `iso2` and `iso3` contain the 2\- and 3\-letter codes for each country.
The `iso2` variable contains each country’s [ISO 3166\-1 alpha\-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) abbreviation, and the `iso3` variable contains each country’s [ISO 3166\-1 alpha\-3](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3) abbreviation.
You may recognize the ISO 3166\-1 alpha\-2 abbreviations, since they are almost identical to internet [country\-code top level domains](https://en.wikipedia.org/wiki/Country_code_top-level_domain), such as `.uk` (United Kingdom), `.ly` (Libya), `.tv` (Tuvalu), and `.io` (British Indian Ocean Territory).
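An equivalent way to confirm the redundancy, sketched with `count()` on the raw `who` data: if each country maps to exactly one (`iso2`, `iso3`) pair, no country should have more than one distinct combination.
```
# Count distinct (iso2, iso3) pairs per country; an empty result means the
# codes are fully determined by `country`.
library(tidyverse)
who %>%
  distinct(country, iso2, iso3) %>%
  count(country) %>%
  filter(n > 1)
```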
### Exercise 12\.6\.4
For each country, year, and sex compute the total number of cases of TB.
Make an informative visualization of the data.
```
who5 %>%
group_by(country, year, sex) %>%
filter(year > 1995) %>%
summarise(cases = sum(cases)) %>%
unite(country_sex, country, sex, remove = FALSE) %>%
ggplot(aes(x = year, y = cases, group = country_sex, colour = sex)) +
geom_line()
#> `summarise()` regrouping output by 'country', 'year' (override with `.groups` argument)
```
A small multiples plot faceting by country is difficult to read given the number of countries.
Another option, once the overall plot above has provided context, is to focus on the countries with the largest changes or the largest absolute numbers of cases, as sketched below.
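As a concrete sketch of that second option, reusing `who5` from above (the cutoff of six countries and the year filter are arbitrary choices, and `slice_max()` assumes dplyr 1.0 or later):
```
library(tidyverse)
# Keep the six countries with the most total cases, then facet by country.
top_countries <- who5 %>%
  group_by(country) %>%
  summarise(cases = sum(cases)) %>%
  slice_max(cases, n = 6) %>%
  pull(country)
who5 %>%
  filter(country %in% top_countries, year > 1995) %>%
  group_by(country, year, sex) %>%
  summarise(cases = sum(cases), .groups = "drop") %>%
  ggplot(aes(year, cases, colour = sex)) +
  geom_line() +
  facet_wrap(~country, scales = "free_y")
```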
12\.7 Non\-tidy data
--------------------
No exercises
\[ex\-12\.2\.2]: It would be better to join these tables using the methods covered in the [Relational Data](https://r4ds.had.co.nz/relational-data.html) chapter.
We could use `inner_join(t2_cases, t2_population, by = c("country", "year"))`.
\[non\-syntactic]: See the [Creating Tibbles](https://r4ds.had.co.nz/tibbles.html#tibbles) section.
12\.5 Missing values
--------------------
### Exercise 12\.5\.1
Compare and contrast the `fill` arguments to `pivot_wider()` and `complete()`.
The `values_fill` argument in `pivot_wider()` and the `fill` argument to `complete()` both set vales to replace `NA`.
Both arguments accept named lists to set values for each column.
Additionally, the `values_fill` argument of `pivot_wider()` accepts a single value.
In `complete()`, the fill argument also sets a value to replace `NA`s but it is named list, allowing for different values for different variables.
Also, both cases replace both implicit and explicit missing values.
For example, this will fill in the missing values of the long data frame with `0` `complete()`:
```
stocks <- tibble(
year = c(2015, 2015, 2015, 2015, 2016, 2016, 2016),
qtr = c( 1, 2, 3, 4, 2, 3, 4),
return = c(1.88, 0.59, 0.35, NA, 0.92, 0.17, 2.66)
)
stocks %>%
pivot_wider(names_from = year, values_from = return,
values_fill = 0)
#> # A tibble: 4 x 3
#> qtr `2015` `2016`
#> <dbl> <dbl> <dbl>
#> 1 1 1.88 0
#> 2 2 0.59 0.92
#> 3 3 0.35 0.17
#> 4 4 NA 2.66
```
```
stocks <- tibble(
year = c(2015, 2015, 2015, 2015, 2016, 2016, 2016),
qtr = c( 1, 2, 3, 4, 2, 3, 4),
return = c(1.88, 0.59, 0.35, NA, 0.92, 0.17, 2.66)
)
stocks %>%
pivot_wider(names_from = year, values_from = return,
values_fill = 0)
#> # A tibble: 4 x 3
#> qtr `2015` `2016`
#> <dbl> <dbl> <dbl>
#> 1 1 1.88 0
#> 2 2 0.59 0.92
#> 3 3 0.35 0.17
#> 4 4 NA 2.66
```
For example, this will fill in the missing values of the long data frame with `0` `complete()`:
```
stocks %>%
complete(year, qtr, fill=list(return=0))
#> # A tibble: 8 x 3
#> year qtr return
#> <dbl> <dbl> <dbl>
#> 1 2015 1 1.88
#> 2 2015 2 0.59
#> 3 2015 3 0.35
#> 4 2015 4 0
#> 5 2016 1 0
#> 6 2016 2 0.92
#> # … with 2 more rows
```
### Exercise 12\.5\.2
What does the `direction` argument to `fill()` do?
With `fill()`, the `.direction` argument determines whether `NA` values should be replaced by the previous non\-missing value (`"down"`) or the next non\-missing value (`"up"`).
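A minimal sketch (assuming the tidyverse is loaded; the small `treatment` tibble is invented for illustration) shows the two directions:
```
# a toy data frame with explicit missing values in `person`
treatment <- tibble(
  person = c("Derrick", NA, NA, "Katherine", NA),
  response = c(7, 10, 9, 4, 8)
)
# carry the previous non-missing value downward (the default)
treatment %>%
  fill(person, .direction = "down")
# carry the next non-missing value upward
treatment %>%
  fill(person, .direction = "up")
```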
12\.6 Case Study
----------------
This code is repeated from the chapter because it is needed by the exercises.
```
who1 <- who %>%
pivot_longer(
cols = new_sp_m014:newrel_f65,
names_to = "key",
values_to = "cases",
values_drop_na = TRUE
)
who1
#> # A tibble: 76,046 x 6
#> country iso2 iso3 year key cases
#> <chr> <chr> <chr> <int> <chr> <int>
#> 1 Afghanistan AF AFG 1997 new_sp_m014 0
#> 2 Afghanistan AF AFG 1997 new_sp_m1524 10
#> 3 Afghanistan AF AFG 1997 new_sp_m2534 6
#> 4 Afghanistan AF AFG 1997 new_sp_m3544 3
#> 5 Afghanistan AF AFG 1997 new_sp_m4554 5
#> 6 Afghanistan AF AFG 1997 new_sp_m5564 2
#> # … with 76,040 more rows
```
```
who2 <- who1 %>%
mutate(names_from = stringr::str_replace(key, "newrel", "new_rel"))
who2
#> # A tibble: 76,046 x 7
#> country iso2 iso3 year key cases names_from
#> <chr> <chr> <chr> <int> <chr> <int> <chr>
#> 1 Afghanistan AF AFG 1997 new_sp_m014 0 new_sp_m014
#> 2 Afghanistan AF AFG 1997 new_sp_m1524 10 new_sp_m1524
#> 3 Afghanistan AF AFG 1997 new_sp_m2534 6 new_sp_m2534
#> 4 Afghanistan AF AFG 1997 new_sp_m3544 3 new_sp_m3544
#> 5 Afghanistan AF AFG 1997 new_sp_m4554 5 new_sp_m4554
#> 6 Afghanistan AF AFG 1997 new_sp_m5564 2 new_sp_m5564
#> # … with 76,040 more rows
```
```
who3 <- who2 %>%
separate(key, c("new", "type", "sexage"), sep = "_")
#> Warning: Expected 3 pieces. Missing pieces filled with `NA` in 2580 rows [243,
#> 244, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 903,
#> 904, 905, 906, ...].
who3
#> # A tibble: 76,046 x 9
#> country iso2 iso3 year new type sexage cases names_from
#> <chr> <chr> <chr> <int> <chr> <chr> <chr> <int> <chr>
#> 1 Afghanistan AF AFG 1997 new sp m014 0 new_sp_m014
#> 2 Afghanistan AF AFG 1997 new sp m1524 10 new_sp_m1524
#> 3 Afghanistan AF AFG 1997 new sp m2534 6 new_sp_m2534
#> 4 Afghanistan AF AFG 1997 new sp m3544 3 new_sp_m3544
#> 5 Afghanistan AF AFG 1997 new sp m4554 5 new_sp_m4554
#> 6 Afghanistan AF AFG 1997 new sp m5564 2 new_sp_m5564
#> # … with 76,040 more rows
```
```
who3 %>%
count(new)
#> # A tibble: 2 x 2
#> new n
#> <chr> <int>
#> 1 new 73466
#> 2 newrel 2580
```
```
who4 <- who3 %>%
select(-new, -iso2, -iso3)
```
```
who5 <- who4 %>%
separate(sexage, c("sex", "age"), sep = 1)
who5
#> # A tibble: 76,046 x 7
#> country year type sex age cases names_from
#> <chr> <int> <chr> <chr> <chr> <int> <chr>
#> 1 Afghanistan 1997 sp m 014 0 new_sp_m014
#> 2 Afghanistan 1997 sp m 1524 10 new_sp_m1524
#> 3 Afghanistan 1997 sp m 2534 6 new_sp_m2534
#> 4 Afghanistan 1997 sp m 3544 3 new_sp_m3544
#> 5 Afghanistan 1997 sp m 4554 5 new_sp_m4554
#> 6 Afghanistan 1997 sp m 5564 2 new_sp_m5564
#> # … with 76,040 more rows
```
### Exercise 12\.6\.1
In this case study, I set `na.rm = TRUE` just to make it easier to check that we had the correct values.
Is this reasonable?
Think about how missing values are represented in this dataset.
Are there implicit missing values?
What’s the difference between an `NA` and zero?
The reasonableness of using `na.rm = TRUE` depends on how missing values are represented in this dataset.
The main concern is whether a missing value means that there were no cases of TB or whether it means that the WHO does not have data on the number of TB cases.
Here are some things we should look for to help distinguish between these cases.
* If there are no 0 values in the data, then missing values may be used to indicate no cases.
* If there are both explicit and implicit missing values, then it suggests that missing values
are being used differently. In that case, it is likely that explicit missing values would
mean no cases, and implicit missing values would mean no data on the number of cases.
First, I’ll check for the presence of zeros in the data.
```
who1 %>%
filter(cases == 0) %>%
nrow()
#> [1] 11080
```
There are zeros in the data, so it appears that cases of zero TB are explicitly indicated, and the value of `NA` is used to indicate missing data.
Second, I should check whether all values for a (country, year) are missing or whether it is possible for only some columns to be missing.
```
pivot_longer(who, c(new_sp_m014:newrel_f65), names_to = "key", values_to = "cases") %>%
group_by(country, year) %>%
mutate(prop_missing = sum(is.na(cases)) / n()) %>%
filter(prop_missing > 0, prop_missing < 1)
#> # A tibble: 195,104 x 7
#> # Groups: country, year [3,484]
#> country iso2 iso3 year key cases prop_missing
#> <chr> <chr> <chr> <int> <chr> <int> <dbl>
#> 1 Afghanistan AF AFG 1997 new_sp_m014 0 0.75
#> 2 Afghanistan AF AFG 1997 new_sp_m1524 10 0.75
#> 3 Afghanistan AF AFG 1997 new_sp_m2534 6 0.75
#> 4 Afghanistan AF AFG 1997 new_sp_m3544 3 0.75
#> 5 Afghanistan AF AFG 1997 new_sp_m4554 5 0.75
#> 6 Afghanistan AF AFG 1997 new_sp_m5564 2 0.75
#> # … with 195,098 more rows
```
From the results above, it looks like it is possible for a (country, year) row to contain some, but not all, missing values in its columns.
Finally, I will check for implicit missing values.
Implicit missing values are (`year`, `country`) combinations that do not appear in the data.
```
nrow(who)
#> [1] 7240
who %>%
complete(country, year) %>%
nrow()
#> [1] 7446
```
Since the number of complete cases of (`country`, `year`) is greater than the number of rows in `who`, there are some implicit values.
But that doesn’t tell us what those implicit missing values are.
To do this, I will use the `anti_join()` function introduced in the later [Relational Data](https://r4ds.had.co.nz/relational-data.html#filtering-joins) chapter.
```
anti_join(complete(who, country, year), who, by = c("country", "year")) %>%
select(country, year) %>%
group_by(country) %>%
# so I can make better sense of the years
summarise(min_year = min(year), max_year = max(year))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 9 x 3
#> country min_year max_year
#> <chr> <int> <int>
#> 1 Bonaire, Saint Eustatius and Saba 1980 2009
#> 2 Curacao 1980 2009
#> 3 Montenegro 1980 2004
#> 4 Netherlands Antilles 2010 2013
#> 5 Serbia 1980 2004
#> 6 Serbia & Montenegro 2005 2013
#> # … with 3 more rows
```
All of these refer to (`country`, `year`) combinations for years prior to the existence of the country.
For example, Timor\-Leste achieved independence in 2002, so years prior to that are not included in the data.
To summarize:
* `0` is used to represent no cases of TB.
* Explicit missing values (`NA`s) are used to represent missing data for (`country`, `year`) combinations in which the country existed in that year.
* Implicit missing values are used to represent missing data because a country did not exist in that year.
### Exercise 12\.6\.2
What happens if you neglect the `mutate()` step?
(`mutate(key = str_replace(key, "newrel", "new_rel"))`)
The `separate()` function emits the warning “Expected 3 pieces. Missing pieces filled with `NA`” for the affected rows.
If we check the rows for keys beginning with `"newrel_"`, we see that `sexage` is missing,
and `type = m014`.
```
who3a <- who1 %>%
separate(key, c("new", "type", "sexage"), sep = "_")
#> Warning: Expected 3 pieces. Missing pieces filled with `NA` in 2580 rows [243,
#> 244, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 903,
#> 904, 905, 906, ...].
filter(who3a, new == "newrel") %>% head()
#> # A tibble: 6 x 8
#> country iso2 iso3 year new type sexage cases
#> <chr> <chr> <chr> <int> <chr> <chr> <chr> <int>
#> 1 Afghanistan AF AFG 2013 newrel m014 <NA> 1705
#> 2 Afghanistan AF AFG 2013 newrel f014 <NA> 1749
#> 3 Albania AL ALB 2013 newrel m014 <NA> 14
#> 4 Albania AL ALB 2013 newrel m1524 <NA> 60
#> 5 Albania AL ALB 2013 newrel m2534 <NA> 61
#> 6 Albania AL ALB 2013 newrel m3544 <NA> 32
```
### Exercise 12\.6\.3
I claimed that `iso2` and `iso3` were redundant with country.
Confirm this claim.
If `iso2` and `iso3` are redundant with `country`, then, within each country,
there should only be one distinct combination of `iso2` and `iso3` values, which is the case.
```
select(who3, country, iso2, iso3) %>%
distinct() %>%
group_by(country) %>%
filter(n() > 1)
#> # A tibble: 0 x 3
#> # Groups: country [0]
#> # … with 3 variables: country <chr>, iso2 <chr>, iso3 <chr>
```
This makes sense, since `iso2` and `iso3` contain the 2\- and 3\-letter country abbreviations for the country.
The `iso2` variable contains each country’s [ISO 3166 alpha\-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) abbreviation, and the `iso3` variable contains each country’s [ISO 3166 alpha\-3](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3) abbreviation.
You may recognize the ISO 3166\-1 alpha\-2 abbreviations, since they are almost identical to internet [country\-code top level domains](https://en.wikipedia.org/wiki/Country_code_top-level_domain), such as `.uk` (United Kingdom), `.ly` (Libya), `.tv` (Tuvalu), and `.io` (British Indian Ocean Territory).
### Exercise 12\.6\.4
For each country, year, and sex compute the total number of cases of TB.
Make an informative visualization of the data.
```
who5 %>%
group_by(country, year, sex) %>%
filter(year > 1995) %>%
summarise(cases = sum(cases)) %>%
unite(country_sex, country, sex, remove = FALSE) %>%
ggplot(aes(x = year, y = cases, group = country_sex, colour = sex)) +
geom_line()
#> `summarise()` regrouping output by 'country', 'year' (override with `.groups` argument)
```
A small multiples plot faceting by country is difficult given the number of countries.
Another option is to focus on the countries with the largest changes or the largest absolute number of cases, as sketched below.
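Here is a rough sketch of that approach; the cutoff of six countries is an arbitrary choice for illustration.
```
# countries with the most total reported cases
top_countries <- who5 %>%
  group_by(country) %>%
  summarise(cases = sum(cases)) %>%
  top_n(6, cases) %>%
  pull(country)

# small multiples restricted to those countries
who5 %>%
  filter(country %in% top_countries, year > 1995) %>%
  group_by(country, year, sex) %>%
  summarise(cases = sum(cases)) %>%
  ggplot(aes(x = year, y = cases, colour = sex)) +
  geom_line() +
  facet_wrap(~country, scales = "free_y")
```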
12\.7 Non\-tidy data
--------------------
No exercises
\[ex\-12\.2\.2]: It would be better to join these tables using the methods covered in the [Relational Data](https://r4ds.had.co.nz/relational-data.html) chapter.
We could use `inner_join(t2_cases, t2_population, by = c("country", "year"))`.
\[non\-syntactic]: See the [Creating Tibbles](https://r4ds.had.co.nz/tibbles.html#tibbles) section.
| Data Science |
jrnold.github.io | https://jrnold.github.io/r4ds-exercise-solutions/relational-data.html |
13 Relational data
==================
13\.1 Introduction
------------------
The datamodelr package is used to draw database schema.
```
library("tidyverse")
library("nycflights13")
library("viridis")
library("datamodelr")
```
13\.2 nycflights13
------------------
### Exercise 13\.2\.1
Imagine you wanted to draw (approximately) the route each plane flies from its origin to its destination.
What variables would you need?
What tables would you need to combine?
Drawing the routes requires the latitude and longitude of the origin and the destination airports of each flight.
This requires the `flights` and `airports` tables.
The `flights` table has the origin (`origin`) and destination (`dest`) airport of each flight.
The `airports` table has the longitude (`lon`) and latitude (`lat`) of each airport.
To get the latitude and longitude for the origin and destination of each flight,
requires two joins for `flights` to `airports`,
once for the latitude and longitude of the origin airport,
and once for the latitude and longitude of the destination airport.
I use an inner join in order to drop any flights with missing airports since they will not have a longitude or latitude.
```
flights_latlon <- flights %>%
inner_join(select(airports, origin = faa, origin_lat = lat, origin_lon = lon),
by = "origin"
) %>%
inner_join(select(airports, dest = faa, dest_lat = lat, dest_lon = lon),
by = "dest"
)
```
This plots the approximate flight paths of the first 100 flights in the `flights` dataset.
```
flights_latlon %>%
slice(1:100) %>%
ggplot(aes(
x = origin_lon, xend = dest_lon,
y = origin_lat, yend = dest_lat
)) +
borders("state") +
geom_segment(arrow = arrow(length = unit(0.1, "cm"))) +
coord_quickmap() +
labs(y = "Latitude", x = "Longitude")
```
### Exercise 13\.2\.2
I forgot to draw the relationship between `weather` and `airports`.
What is the relationship and how should it appear in the diagram?
The column `weather$origin` is a foreign key that references `airports$faa`.
The following drawing updates the one in [Section 13\.2](https://r4ds.had.co.nz/relational-data.html#nycflights13-relational) to include this relation.
The line representing the new relation between `weather` and `airports` is colored black.
The lines representing the old relations are gray and thinner.
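A programmatic sketch of that updated relation can be drawn with the datamodelr functions used later in this chapter; only `weather` and `airports` are included here rather than redrawing every table.
```
dm_weather <- dm_from_data_frames(list(
  airports = airports,
  weather = weather
)) %>%
  dm_set_key("airports", "faa") %>%
  dm_set_key("weather", c("origin", "year", "month", "day", "hour")) %>%
  dm_add_references(
    weather$origin == airports$faa
  )
dm_create_graph(dm_weather, rankdir = "LR", columnArrows = TRUE) %>%
  dm_render_graph()
```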
### Exercise 13\.2\.3
Weather only contains information for the origin (NYC) airports.
If it contained weather records for all airports in the USA, what additional relation would it define with `flights`?
If the weather were included for all airports in the US, then it would provide the weather for the destination of each flight.
The `flights` data frame columns (`year`, `month`, `day`, `hour`, `dest`) would then be a foreign key referencing the `weather` data frame columns (`year`, `month`, `day`, `hour`, `origin`).
This would provide information about the weather at the destination airport at the time of the flight's takeoff, unless the arrival date\-time were calculated.
So why was this not a relationship prior to adding additional rows to the `weather` table?
In a foreign key relationship, the collection of columns in the child table
must refer to a unique collection of columns in the parent table.
When the `weather` table only contained New York airports,
there were many values of (`year`, `month`, `day`, `hour`, `dest`) in `flights` that
did not appear in the `weather` table.
Therefore, it was not a foreign key. It was only after
all combinations of year, month, day, hour, and airports that are defined in `flights`
were added to the `weather` table that there existed this relation between these tables.
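As a sketch of that point with the current data, an anti\-join (the filtering join used later in this chapter) returns the flights whose (`year`, `month`, `day`, `hour`, `dest`) combination has no match in `weather`; the large number of such rows is what prevented the relation.
```
flights %>%
  anti_join(weather,
    by = c("year", "month", "day", "hour", "dest" = "origin")
  ) %>%
  nrow()
# a large count: most destinations are not NYC airports, so
# (year, month, day, hour, dest) is not (yet) a foreign key
```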
### Exercise 13\.2\.4
We know that some days of the year are “special”, and fewer people than usual fly on them.
How might you represent that data as a data frame?
What would be the primary keys of that table?
How would it connect to the existing tables?
I would add a table of special dates, similar to the following table.
```
special_days <- tribble(
~year, ~month, ~day, ~holiday,
2013, 01, 01, "New Years Day",
2013, 07, 04, "Independence Day",
2013, 11, 29, "Thanksgiving Day",
2013, 12, 25, "Christmas Day"
)
```
The primary key of the table would be the (`year`, `month`, `day`) columns.
The (`year`, `month`, `day`) columns could be used to join `special_days` with other tables.
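For example, a minimal sketch of that join (using the `special_days` table defined above) adds a `holiday` column to `flights` that is `NA` on ordinary days.
```
flights %>%
  left_join(special_days, by = c("year", "month", "day")) %>%
  select(year, month, day, dep_time, holiday)
```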
13\.3 Keys
----------
### Exercise 13\.3\.1
Add a surrogate key to flights.
I add the column `flight_id` as a surrogate key.
I sort the data prior to making the key, even though it is not strictly necessary, so the order of the rows has some meaning.
```
flights %>%
arrange(year, month, day, sched_dep_time, carrier, flight) %>%
mutate(flight_id = row_number()) %>%
glimpse()
#> Rows: 336,776
#> Columns: 20
#> $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, …
#> $ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
#> $ day <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
#> $ dep_time <int> 517, 533, 542, 544, 554, 559, 558, 559, 558, 558, 557,…
#> $ sched_dep_time <int> 515, 529, 540, 545, 558, 559, 600, 600, 600, 600, 600,…
#> $ dep_delay <dbl> 2, 4, 2, -1, -4, 0, -2, -1, -2, -2, -3, NA, 1, 0, -5, …
#> $ arr_time <int> 830, 850, 923, 1004, 740, 702, 753, 941, 849, 853, 838…
#> $ sched_arr_time <int> 819, 830, 850, 1022, 728, 706, 745, 910, 851, 856, 846…
#> $ arr_delay <dbl> 11, 20, 33, -18, 12, -4, 8, 31, -2, -3, -8, NA, -6, -7…
#> $ carrier <chr> "UA", "UA", "AA", "B6", "UA", "B6", "AA", "AA", "B6", …
#> $ flight <int> 1545, 1714, 1141, 725, 1696, 1806, 301, 707, 49, 71, 7…
#> $ tailnum <chr> "N14228", "N24211", "N619AA", "N804JB", "N39463", "N70…
#> $ origin <chr> "EWR", "LGA", "JFK", "JFK", "EWR", "JFK", "LGA", "LGA"…
#> $ dest <chr> "IAH", "IAH", "MIA", "BQN", "ORD", "BOS", "ORD", "DFW"…
#> $ air_time <dbl> 227, 227, 160, 183, 150, 44, 138, 257, 149, 158, 140, …
#> $ distance <dbl> 1400, 1416, 1089, 1576, 719, 187, 733, 1389, 1028, 100…
#> $ hour <dbl> 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, …
#> $ minute <dbl> 15, 29, 40, 45, 58, 59, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
#> $ time_hour <dttm> 2013-01-01 05:00:00, 2013-01-01 05:00:00, 2013-01-01 …
#> $ flight_id <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,…
```
### Exercise 13\.3\.2
Identify the keys in the following datasets
1. `Lahman::Batting`
2. `babynames::babynames`
3. `nasaweather::atmos`
4. `fueleconomy::vehicles`
5. `ggplot2::diamonds`
(You might need to install some packages and read some documentation.)
The answer to each part follows.
1. The primary key for `Lahman::Batting` is (`playerID`, `yearID`, `stint`).
The columns (`playerID`, `yearID`) are not a primary key because players can play on different teams within the same year.
```
Lahman::Batting %>%
count(playerID, yearID, stint) %>%
filter(n > 1) %>%
nrow()
#> [1] 0
```
2. The primary key for `babynames::babynames` is (`year`, `sex`, `name`).
The columns (`year`, `name`) are not a primary key since there are separate counts for each name for each sex, and the same names can be used by more than one sex.
```
babynames::babynames %>%
count(year, sex, name) %>%
filter(n > 1) %>%
nrow()
#> Using `n` as weighting variable
#> ℹ Quiet this message with `wt = n` or count rows with `wt = 1`
#> [1] 1924665
```
3. The primary key for `nasaweather::atmos` is (`lat`, `long`, `year`, `month`).
The primary key represents the location and time that the measurement was taken.
```
nasaweather::atmos %>%
count(lat, long, year, month) %>%
filter(n > 1) %>%
nrow()
#> [1] 0
```
4. The column `id`, the unique EPA identifier of the vehicle, is the primary key for `fueleconomy::vehicles`.
```
fueleconomy::vehicles %>%
count(id) %>%
filter(n > 1) %>%
nrow()
#> [1] 0
```
5. There is no primary key for `ggplot2::diamonds` since there is no combination of variables that uniquely identifies each observation.
This is implied by the fact that the number of distinct rows in the dataset is less than the total number of rows, meaning that there are some duplicate rows.
```
ggplot2::diamonds %>%
distinct() %>%
nrow()
#> [1] 53794
nrow(ggplot2::diamonds)
#> [1] 53940
```
If we need a unique identifier for our analysis, we could add a surrogate key.
```
diamonds <- mutate(ggplot2::diamonds, id = row_number())
```
### Exercise 13\.3\.3
Draw a diagram illustrating the connections between the `Batting`, `Master`, and `Salaries` tables in the Lahman package.
Draw another diagram that shows the relationship between `Master`, `Managers`, `AwardsManagers`.
How would you characterize the relationship between the `Batting`, `Pitching`, and `Fielding` tables?
For the `Batting`, `Master`, and `Salaries` tables:
* `Master`
+ Primary key: `playerID`
* `Batting`
+ Primary key: `playerID`, `yearID`, `stint`
+ Foreign keys:
- `playerID` \= `Master$playerID` (many\-to\-1\)
* `Salaries`
+ Primary key: `yearID`, `teamID`, `playerID`
+ Foreign keys:
- `playerID` \= `Master$playerID` (many\-to\-1\)
The columns `teamID` and `lgID` are not foreign keys even though they appear in multiple tables (with the same meaning) because they are not primary keys for any of the tables considered in this exercise.
The `teamID` variable references `Teams$teamID`, and `lgID` does not have its own table.
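As a quick sanity check of that statement, the anti\-join pattern used elsewhere in this chapter can confirm it (a sketch; zero rows is the expected result if every team in `Batting` appears in `Teams`).
```
Lahman::Batting %>%
  anti_join(Lahman::Teams, by = c("yearID", "teamID")) %>%
  nrow()
# expected to be 0 if every (yearID, teamID) in Batting appears in Teams
```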
*R for Data Science* uses database schema diagrams to illustrate relations between the tables.
Most flowchart or diagramming software can be used to create database schema diagrams, as well as some specialized database software.
The diagrams in *R for Data Science* were created with [OmniGraffle](https://www.gliffy.com/), and their sources can be found in its [GitHub repository](https://github.com/hadley/r4ds/tree/master/diagrams).
The following diagram was created with OmniGraffle in the same style as those
in *R for Data Science* .
It shows the relations between the `Master`, `Batting` and `Salaries` tables.
Another option to draw database schema diagrams is the R package [datamodelr](https://github.com/bergant/datamodelr), which can programmatically create database schema diagrams.
The following code uses datamodelr to draw a diagram of the relations between the `Batting`, `Master`, and `Salaries` tables.
```
dm1 <- dm_from_data_frames(list(
Batting = Lahman::Batting,
Master = Lahman::Master,
Salaries = Lahman::Salaries
)) %>%
dm_set_key("Batting", c("playerID", "yearID", "stint")) %>%
dm_set_key("Master", "playerID") %>%
dm_set_key("Salaries", c("yearID", "teamID", "playerID")) %>%
dm_add_references(
Batting$playerID == Master$playerID,
Salaries$playerID == Master$playerID
)
dm_create_graph(dm1, rankdir = "LR", columnArrows = TRUE) %>%
dm_render_graph()
```
For the `Master`, `Manager`, and `AwardsManagers` tables:
* `Master`
+ Primary key: `playerID`
* `Managers`
+ Primary key: `yearID`, `teamID`, `inseason`
+ Foreign keys:
- `playerID` references `Master$playerID` (many\-to\-1\)
* `AwardsManagers`:
+ Primary key: `playerID`, `awardID`, `yearID`
+ Foreign keys:
- `playerID` references `Master$playerID` (many\-to\-1\)
For `AwardsManagers`, the columns (`awardID`, `yearID`, `lgID`) are not a primary
key because there can be, and have been ties, as indicated by the `tie` variable.
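That claim can be checked in the same way as the keys in Exercise 13\.3\.2; any rows returned correspond to tied awards (a sketch, output not shown).
```
Lahman::AwardsManagers %>%
  count(awardID, yearID, lgID) %>%
  filter(n > 1)
# rows with n > 1 are award-year-league combinations with ties,
# so (awardID, yearID, lgID) cannot be a primary key
```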
The relations between the `Master`, `Managers`, and `AwardsManagers` tables
are shown in the following two diagrams: the first created manually with OmniGraffle,
and the second programmatically in R with the datamodelr package.
```
dm2 <- dm_from_data_frames(list(
Master = Lahman::Master,
Managers = Lahman::Managers,
AwardsManagers = Lahman::AwardsManagers
)) %>%
dm_set_key("Master", "playerID") %>%
dm_set_key("Managers", c("yearID", "teamID", "inseason")) %>%
dm_set_key("AwardsManagers", c("playerID", "awardID", "yearID")) %>%
dm_add_references(
Managers$playerID == Master$playerID,
AwardsManagers$playerID == Master$playerID
)
dm_create_graph(dm2, rankdir = "LR", columnArrows = TRUE) %>%
dm_render_graph()
```
The primary keys of `Batting`, `Pitching`, and `Fielding` are the following:
* `Batting`: (`playerID`, `yearID`, `stint`)
* `Pitching`: (`playerID`, `yearID`, `stint`)
* `Fielding`: (`playerID`, `yearID`, `stint`, `POS`).
While `Batting` and `Pitching` have one row per player, year, and stint, the `Fielding`
table has additional rows for each position (`POS`) a player played within a stint.
Since `Batting`, `Pitching`, and `Fielding` all share the `playerID`, `yearID`, and `stint` columns,
we would expect some foreign key relations between these tables.
The columns (`playerID`, `yearID`, `stint`) in `Pitching` are a foreign key which
references the same columns in `Batting`. We can check this by checking that
all observed combinations of values of these columns appearing in `Pitching`
also appear in `Batting`. To do this I use an anti\-join, which is discussed
in the section [Filtering Joins](https://r4ds.had.co.nz/relational-data.html#filtering-joins).
```
nrow(anti_join(Lahman::Pitching, Lahman::Batting,
by = c("playerID", "yearID", "stint")
))
#> [1] 0
```
Similarly, the columns (`playerID`, `yearID`, `stint`) in `Fielding` are a foreign key which references the same columns in `Batting`.
```
nrow(anti_join(Lahman::Fielding, Lahman::Batting,
by = c("playerID", "yearID", "stint")
))
#> [1] 0
```
The following diagram shows the relations between the `Batting`, `Pitching`, and
`Fielding` tables.
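A datamodelr sketch of that diagram, in the style of `dm1` and `dm2` above; for readability only the `playerID` columns are linked, although the full foreign keys are the composite (`playerID`, `yearID`, `stint`) columns.
```
dm3 <- dm_from_data_frames(list(
  Batting = Lahman::Batting,
  Pitching = Lahman::Pitching,
  Fielding = Lahman::Fielding
)) %>%
  dm_set_key("Batting", c("playerID", "yearID", "stint")) %>%
  dm_set_key("Pitching", c("playerID", "yearID", "stint")) %>%
  dm_set_key("Fielding", c("playerID", "yearID", "stint", "POS")) %>%
  dm_add_references(
    # the full foreign keys are (playerID, yearID, stint); only playerID is drawn
    Pitching$playerID == Batting$playerID,
    Fielding$playerID == Batting$playerID
  )
dm_create_graph(dm3, rankdir = "LR", columnArrows = TRUE) %>%
  dm_render_graph()
```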
13\.4 Mutating joins
--------------------
```
flights2 <- flights %>%
select(year:day, hour, origin, dest, tailnum, carrier)
```
### Exercise 13\.4\.1
Compute the average delay by destination, then join on the `airports` data frame so you can show the spatial distribution of delays. Here’s an easy way to draw a map of the United States:
```
airports %>%
semi_join(flights, c("faa" = "dest")) %>%
ggplot(aes(lon, lat)) +
borders("state") +
geom_point() +
coord_quickmap()
```
(Don’t worry if you don’t understand what `semi_join()` does — you’ll learn about it next.)
You might want to use the size or color of the points to display the average delay for each airport.
```
avg_dest_delays <-
flights %>%
group_by(dest) %>%
# arrival delay NA's are cancelled flights
summarise(delay = mean(arr_delay, na.rm = TRUE)) %>%
inner_join(airports, by = c(dest = "faa"))
#> `summarise()` ungrouping output (override with `.groups` argument)
```
```
avg_dest_delays %>%
ggplot(aes(lon, lat, colour = delay)) +
borders("state") +
geom_point() +
coord_quickmap()
```
### Exercise 13\.4\.2
Add the location of the origin and destination (i.e. the `lat` and `lon`) to `flights`.
You can perform one join after another. If duplicate variables are found, by default, dplyr will distinguish the two by adding `.x` and `.y` to the ends of the variable names to resolve naming conflicts.
```
airport_locations <- airports %>%
select(faa, lat, lon)
flights %>%
select(year:day, hour, origin, dest) %>%
left_join(
airport_locations,
by = c("origin" = "faa")
) %>%
left_join(
airport_locations,
by = c("dest" = "faa")
)
#> # A tibble: 336,776 x 10
#> year month day hour origin dest lat.x lon.x lat.y lon.y
#> <int> <int> <int> <dbl> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 2013 1 1 5 EWR IAH 40.7 -74.2 30.0 -95.3
#> 2 2013 1 1 5 LGA IAH 40.8 -73.9 30.0 -95.3
#> 3 2013 1 1 5 JFK MIA 40.6 -73.8 25.8 -80.3
#> 4 2013 1 1 5 JFK BQN 40.6 -73.8 NA NA
#> 5 2013 1 1 6 LGA ATL 40.8 -73.9 33.6 -84.4
#> 6 2013 1 1 5 EWR ORD 40.7 -74.2 42.0 -87.9
#> # … with 336,770 more rows
```
The `suffix` argument overrides this default behavior.
Since it is always good practice to have clear variable names, I will use the
suffixes `"_dest"` and `"_origin"` to specify whether the column refers to
the destination or origin airport.
```
airport_locations <- airports %>%
select(faa, lat, lon)
flights %>%
select(year:day, hour, origin, dest) %>%
left_join(
airport_locations,
by = c("origin" = "faa")
) %>%
left_join(
airport_locations,
by = c("dest" = "faa"),
suffix = c("_origin", "_dest")
# existing lat and lon variables in tibble gain the _origin suffix
# new lat and lon variables are given _dest suffix
)
#> # A tibble: 336,776 x 10
#> year month day hour origin dest lat_origin lon_origin lat_dest lon_dest
#> <int> <int> <int> <dbl> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 2013 1 1 5 EWR IAH 40.7 -74.2 30.0 -95.3
#> 2 2013 1 1 5 LGA IAH 40.8 -73.9 30.0 -95.3
#> 3 2013 1 1 5 JFK MIA 40.6 -73.8 25.8 -80.3
#> 4 2013 1 1 5 JFK BQN 40.6 -73.8 NA NA
#> 5 2013 1 1 6 LGA ATL 40.8 -73.9 33.6 -84.4
#> 6 2013 1 1 5 EWR ORD 40.7 -74.2 42.0 -87.9
#> # … with 336,770 more rows
```
### Exercise 13\.4\.3
Is there a relationship between the age of a plane and its delays?
The question does not specify whether the relationship is with departure delay
or arrival delay.
I will look at both.
To compare the age of the plane to flights delay, I merge `flights` with the `planes`, which contains a variable `plane_year`, with the year in which the plane was built.
To look at the relationship between plane age and departure delay, I will calculate the average arrival and departure delay for each age of a flight.
Since there are few planes older than 25 years, I truncate `age` at 25 years.
```
plane_cohorts <- inner_join(flights,
select(planes, tailnum, plane_year = year),
by = "tailnum"
) %>%
mutate(age = year - plane_year) %>%
filter(!is.na(age)) %>%
mutate(age = if_else(age > 25, 25L, age)) %>%
group_by(age) %>%
summarise(
dep_delay_mean = mean(dep_delay, na.rm = TRUE),
dep_delay_sd = sd(dep_delay, na.rm = TRUE),
arr_delay_mean = mean(arr_delay, na.rm = TRUE),
arr_delay_sd = sd(arr_delay, na.rm = TRUE),
n_arr_delay = sum(!is.na(arr_delay)),
n_dep_delay = sum(!is.na(dep_delay))
)
#> `summarise()` ungrouping output (override with `.groups` argument)
```
I will look for a relationship between departure delay and age by plotting age against the average departure delay.
The average departure delay is increasing for planes with ages up until 10 years. After that the departure delay decreases or levels off.
The decrease in departure delay could be because older planes with many mechanical issues are removed from service or because airlines schedule these planes with enough time so that mechanical issues do not delay them.
```
ggplot(plane_cohorts, aes(x = age, y = dep_delay_mean)) +
geom_point() +
scale_x_continuous("Age of plane (years)", breaks = seq(0, 30, by = 10)) +
scale_y_continuous("Mean Departure Delay (minutes)")
```
There is a similar relationship in arrival delays.
Delays increase with the age of the plane until ten years, then it declines and flattens out.
```
ggplot(plane_cohorts, aes(x = age, y = arr_delay_mean)) +
geom_point() +
scale_x_continuous("Age of Plane (years)", breaks = seq(0, 30, by = 10)) +
scale_y_continuous("Mean Arrival Delay (minutes)")
```
### Exercise 13\.4\.4
What weather conditions make it more likely to see a delay?
Almost any amount of precipitation is associated with a delay.
However, there is not a strong trend above 0\.02 in. of precipitation.
```
flight_weather <-
flights %>%
inner_join(weather, by = c(
"origin" = "origin",
"year" = "year",
"month" = "month",
"day" = "day",
"hour" = "hour"
))
```
```
flight_weather %>%
group_by(precip) %>%
summarise(delay = mean(dep_delay, na.rm = TRUE)) %>%
ggplot(aes(x = precip, y = delay)) +
geom_line() + geom_point()
#> `summarise()` ungrouping output (override with `.groups` argument)
```
There seems to be a stronger relationship between visibility and delay.
Delays are higher when visibility is less than 2 miles.
```
flight_weather %>%
ungroup() %>%
mutate(visib_cat = cut_interval(visib, n = 10)) %>%
group_by(visib_cat) %>%
summarise(dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
ggplot(aes(x = visib_cat, y = dep_delay)) +
geom_point()
#> `summarise()` ungrouping output (override with `.groups` argument)
```
### Exercise 13\.4\.5
What happened on June 13, 2013?
Display the spatial pattern of delays, and then use Google to cross\-reference with the weather.
There was a large series of storms (derechos) in the southeastern US (see [June 12\-13, 2013 derecho series](https://en.wikipedia.org/wiki/June_12%E2%80%9313,_2013_derecho_series)).
The following plot show that the largest delays were in Tennessee (Nashville), the Southeast, and the Midwest, which were the locations of the derechos.
```
flights %>%
filter(year == 2013, month == 6, day == 13) %>%
group_by(dest) %>%
summarise(delay = mean(arr_delay, na.rm = TRUE)) %>%
inner_join(airports, by = c("dest" = "faa")) %>%
ggplot(aes(y = lat, x = lon, size = delay, colour = delay)) +
borders("state") +
geom_point() +
coord_quickmap() +
scale_colour_viridis()
#> `summarise()` ungrouping output (override with `.groups` argument)
#> Warning: Removed 3 rows containing missing values (geom_point).
```
13\.5 Filtering joins
---------------------
### Exercise 13\.5\.1
What does it mean for a flight to have a missing `tailnum`?
What do the tail numbers that don’t have a matching record in planes have in common?
(Hint: one variable explains \~90% of the problems.)
Flights that have a missing `tailnum` all have missing values of `arr_time`, meaning that the flight was canceled.
```
flights %>%
filter(is.na(tailnum), !is.na(arr_time)) %>%
nrow()
#> [1] 0
```
Many of the tail numbers that don’t have a matching value in `planes` are
registered to American Airlines (AA) or Envoy Airlines (MQ).
The documentation for `planes` states
> American Airways (AA) and Envoy Air (MQ) report fleet numbers rather than tail numbers so can’t be matched.
```
flights %>%
anti_join(planes, by = "tailnum") %>%
count(carrier, sort = TRUE) %>%
mutate(p = n / sum(n))
#> # A tibble: 10 x 3
#> carrier n p
#> <chr> <int> <dbl>
#> 1 MQ 25397 0.483
#> 2 AA 22558 0.429
#> 3 UA 1693 0.0322
#> 4 9E 1044 0.0198
#> 5 B6 830 0.0158
#> 6 US 699 0.0133
#> # … with 4 more rows
```
However, not all tail numbers appearing in `flights` from these carriers are missing from the `planes` table. I don’t know how to reconcile this discrepancy.
```
flights %>%
distinct(carrier, tailnum) %>%
left_join(planes, by = "tailnum") %>%
group_by(carrier) %>%
summarise(total_planes = n(),
not_in_planes = sum(is.na(model))) %>%
mutate(missing_pct = not_in_planes / total_planes) %>%
arrange(desc(missing_pct))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 16 x 4
#> carrier total_planes not_in_planes missing_pct
#> <chr> <int> <int> <dbl>
#> 1 MQ 238 234 0.983
#> 2 AA 601 430 0.715
#> 3 F9 26 3 0.115
#> 4 FL 129 12 0.0930
#> 5 UA 621 23 0.0370
#> 6 US 290 9 0.0310
#> # … with 10 more rows
```
### Exercise 13\.5\.2
Filter flights to only show flights with planes that have flown at least 100 flights.
First, I find all planes that have flown at least 100 flights.
I need to filter out flights that are missing a tail number; otherwise, all flights missing a tail number would be treated as a single plane.
```
planes_gte100 <- flights %>%
filter(!is.na(tailnum)) %>%
group_by(tailnum) %>%
count() %>%
filter(n >= 100)
```
Now, I will semi join the data frame of planes that have flown at least 100 flights to the data frame of flights to select the flights by those planes.
```
flights %>%
semi_join(planes_gte100, by = "tailnum")
#> # A tibble: 228,390 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 544 545 -1 1004 1022
#> 4 2013 1 1 554 558 -4 740 728
#> 5 2013 1 1 555 600 -5 913 854
#> 6 2013 1 1 557 600 -3 709 723
#> # … with 228,384 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
This can also be answered with a grouped mutate.
```
flights %>%
filter(!is.na(tailnum)) %>%
group_by(tailnum) %>%
mutate(n = n()) %>%
filter(n >= 100)
#> # A tibble: 228,390 x 20
#> # Groups: tailnum [1,217]
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 544 545 -1 1004 1022
#> 4 2013 1 1 554 558 -4 740 728
#> 5 2013 1 1 555 600 -5 913 854
#> 6 2013 1 1 557 600 -3 709 723
#> # … with 228,384 more rows, and 12 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>,
#> # n <int>
```
### Exercise 13\.5\.3
Combine `fueleconomy::vehicles` and `fueleconomy::common` to find only the records for the most common models.
```
fueleconomy::vehicles %>%
semi_join(fueleconomy::common, by = c("make", "model"))
#> # A tibble: 14,531 x 12
#> id make model year class trans drive cyl displ fuel hwy cty
#> <dbl> <chr> <chr> <dbl> <chr> <chr> <chr> <dbl> <dbl> <chr> <dbl> <dbl>
#> 1 1833 Acura Integ… 1986 Subcom… Automa… Front-… 4 1.6 Regu… 28 22
#> 2 1834 Acura Integ… 1986 Subcom… Manual… Front-… 4 1.6 Regu… 28 23
#> 3 3037 Acura Integ… 1987 Subcom… Automa… Front-… 4 1.6 Regu… 28 22
#> 4 3038 Acura Integ… 1987 Subcom… Manual… Front-… 4 1.6 Regu… 28 23
#> 5 4183 Acura Integ… 1988 Subcom… Automa… Front-… 4 1.6 Regu… 27 22
#> 6 4184 Acura Integ… 1988 Subcom… Manual… Front-… 4 1.6 Regu… 28 23
#> # … with 14,525 more rows
```
Why does the above code join on `make` and `model` and not just `model`?
It is possible for two car brands (`make`) to produce a car with the same name (`model`).
In both the `vehicles` and `common` data we can find some examples.
For example, “Truck 4WD” is produced by many different brands.
```
fueleconomy::vehicles %>%
distinct(model, make) %>%
group_by(model) %>%
filter(n() > 1) %>%
arrange(model)
#> # A tibble: 126 x 2
#> # Groups: model [60]
#> make model
#> <chr> <chr>
#> 1 Audi 200
#> 2 Chrysler 200
#> 3 Mcevoy Motors 240 DL/240 GL Wagon
#> 4 Volvo 240 DL/240 GL Wagon
#> 5 Lambda Control Systems 300E
#> 6 Mercedes-Benz 300E
#> # … with 120 more rows
```
```
fueleconomy::common %>%
distinct(model, make) %>%
group_by(model) %>%
filter(n() > 1) %>%
arrange(model)
#> # A tibble: 8 x 2
#> # Groups: model [3]
#> make model
#> <chr> <chr>
#> 1 Dodge Colt
#> 2 Plymouth Colt
#> 3 Mitsubishi Truck 2WD
#> 4 Nissan Truck 2WD
#> 5 Toyota Truck 2WD
#> 6 Mitsubishi Truck 4WD
#> # … with 2 more rows
```
If we were to merge these data on the `model` column alone, there would be incorrect matches.
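One hedged way to see the problem is to compare the two joins directly; the `model`\-only join keeps every vehicle whose model name matches a common model regardless of make (a sketch, output not shown).
```
# join on model alone: matches models such as "Truck 4WD" across makes
by_model_only <- fueleconomy::vehicles %>%
  semi_join(fueleconomy::common, by = "model")
# the correct join on make and model
by_make_model <- fueleconomy::vehicles %>%
  semi_join(fueleconomy::common, by = c("make", "model"))
# should be positive: vehicles matched on model alone but not once
# make is also required
nrow(by_model_only) - nrow(by_make_model)
```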
### Exercise 13\.5\.4
Find the 48 hours (over the course of the whole year) that have the worst delays.
Cross\-reference it with the weather data.
Can you see any patterns?
I will start by clarifying how I will be measuring the concepts in the question.
There are four concepts that need to be defined more precisely.
1. What is meant by “delay”?
I will use departure delay.
The `weather` data only contains data for the New York City airports, and
departure delays will be more sensitive to New York City weather conditions than arrival delays.
2. What is meant by “worst”? I define worst delay as the average departure delay per flight for flights *scheduled* to depart in that hour.
For hour, I will use the scheduled departure time rather than the actual departure time.
If planes are delayed due to weather conditions, the weather conditions during the scheduled time are more important than the actual departure time, at which point, the weather could have improved.
3. What is meant by “48 hours over the course of the year”? This could mean two days, a span of 48 contiguous hours,
or 48 hours that are not necessarily contiguous hours.
I will find 48 not\-necessarily contiguous hours.
That definition makes better use of the methods introduced in this section and chapter.
4. What is the unit of analysis? Although the question mentions only hours, I will use airport hours.
The weather dataset has an observation for each airport for each hour.
Since all the departure airports are in the vicinity of New York City, their weather should be similar, but it will not be the same.
First, I need to find the 48 hours with the worst delays.
I group flights by hour of scheduled departure time and calculate the average delay.
Then I select the 48 observations (hours) with the highest average delay.
```
worst_hours <- flights %>%
mutate(hour = sched_dep_time %/% 100) %>%
group_by(origin, year, month, day, hour) %>%
summarise(dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
ungroup() %>%
arrange(desc(dep_delay)) %>%
slice(1:48)
#> `summarise()` regrouping output by 'origin', 'year', 'month', 'day' (override with `.groups` argument)
```
Then I can use `semi_join()` to get the weather for these hours.
```
weather_most_delayed <- semi_join(weather, worst_hours,
by = c("origin", "year",
"month", "day", "hour"))
```
For weather, I’ll focus on precipitation, wind speed, and temperature.
I will display these in both a table and a plot.
Many of these observations have a higher than average wind speed (10 mph) or some precipitation.
However, I would have expected the weather for the hours with the worst delays to be much worse.
```
select(weather_most_delayed, temp, wind_speed, precip) %>%
print(n = 48)
#> # A tibble: 48 x 3
#> temp wind_speed precip
#> <dbl> <dbl> <dbl>
#> 1 27.0 13.8 0
#> 2 28.0 19.6 0
#> 3 28.9 28.8 0
#> 4 33.8 9.21 0.06
#> 5 34.0 8.06 0.05
#> 6 80.1 8.06 0
#> 7 86 13.8 0
#> 8 73.4 6.90 0.08
#> 9 84.0 5.75 0
#> 10 78.8 18.4 0.23
#> 11 53.6 0 0
#> 12 60.8 31.1 0.11
#> 13 55.4 17.3 0.14
#> 14 53.1 9.21 0.01
#> 15 55.9 11.5 0.1
#> 16 55.4 8.06 0.15
#> 17 57.0 29.9 0
#> 18 33.8 20.7 0.02
#> 19 34.0 19.6 0.01
#> 20 36.0 21.9 0.01
#> 21 37.9 16.1 0
#> 22 32 13.8 0.12
#> 23 60.1 33.4 0.14
#> 24 60.8 11.5 0.02
#> 25 62.1 17.3 0
#> 26 66.9 10.4 0
#> 27 66.9 13.8 0
#> 28 79.0 10.4 0
#> 29 77 16.1 0.07
#> 30 75.9 13.8 0
#> 31 82.4 8.06 0
#> 32 86 9.21 0
#> 33 80.1 9.21 0
#> 34 80.6 11.5 0
#> 35 78.1 6.90 0
#> 36 75.2 10.4 0.01
#> 37 73.9 5.75 0.03
#> 38 73.9 8.06 0
#> 39 75.0 4.60 0
#> 40 75.0 4.60 0.01
#> 41 80.1 0 0.01
#> 42 80.1 0 0
#> 43 77 10.4 0
#> 44 82.0 10.4 0
#> 45 72.0 13.8 0.3
#> 46 72.0 4.60 0.03
#> 47 51.1 4.60 0
#> 48 54.0 6.90 0
```
```
ggplot(weather_most_delayed, aes(x = precip, y = wind_speed, color = temp)) +
geom_point()
```
It’s hard to say much more than that without using the tools from [Exploratory Data Analysis](https://r4ds.had.co.nz/exploratory-data-analysis.html#covariation) section
to look for covariation between weather and flight delays using all flights.
Implicitly in my informal analysis of trends in weather using only the 48 hours with the worst delays, I was comparing the weather in these hours to some belief I had about what constitutes “normal” or “good” weather.
It would be better to actually use data to make that comparison.
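As a very rough first step toward that comparison (a sketch only), one could contrast the average conditions in these 48 hours with the average across all hours in `weather`.
```
# average conditions across all hours at the NYC airports
weather %>%
  summarise(
    temp = mean(temp, na.rm = TRUE),
    wind_speed = mean(wind_speed, na.rm = TRUE),
    precip = mean(precip, na.rm = TRUE)
  )
# average conditions in the 48 hours with the worst delays
weather_most_delayed %>%
  summarise(
    temp = mean(temp, na.rm = TRUE),
    wind_speed = mean(wind_speed, na.rm = TRUE),
    precip = mean(precip, na.rm = TRUE)
  )
```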
### Exercise 13\.5\.5
What does `anti_join(flights, airports, by = c("dest" = "faa"))` tell you?
What does `anti_join(airports, flights, by = c("faa" = "dest"))` tell you?
The expression `anti_join(flights, airports, by = c("dest" = "faa"))` returns the flights that went to an airport that is not in the FAA list of destinations.
Since the FAA list only contains domestic airports, these are likely foreign flights.
However, running that expression shows that there are only four airports in this list.
```
anti_join(flights, airports, by = c("dest" = "faa")) %>%
distinct(dest)
#> # A tibble: 4 x 1
#> dest
#> <chr>
#> 1 BQN
#> 2 SJU
#> 3 STT
#> 4 PSE
```
In this set of four airports three are in Puerto Rico ([BQN](https://en.wikipedia.org/wiki/Rafael_Hern%C3%A1ndez_Airport), [SJU](https://en.wikipedia.org/wiki/Luis_Mu%C3%B1oz_Mar%C3%ADn_International_Airport), and [PSE](https://en.wikipedia.org/wiki/Mercedita_International_Airport)) and one is in the US Virgin Islands ( [STT](https://en.wikipedia.org/wiki/Cyril_E._King_Airport)).
The reason for this discrepancy is that the `flights` and `airports` tables are derived from different sources.
The `flights` data comes from the US Department of Transportation [Bureau of Transportation Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236), while the airport metadata comes from [openflights.org](https://openflights.org/data.html).
The BTS includes Puerto Rico and U.S. Virgin Islands as “domestic” (part of the US), while the [openflights.org](https://raw.githubusercontent.com/jpatokal/openflights/master/data/airports.dat) data gives different values of country for airports in the US states (`"United States"`), Puerto Rico (`"Puerto Rico"`), and the US Virgin Islands (`"Virgin Islands"`).
The expression `anti_join(airports, flights, by = c("faa" = "dest"))` returns the US airports that were not the destination of any flight in the data.
Since the data contains all flights from New York City airports, this is also the list of US airports that did not have a nonstop flight from New York City in 2013\.
```
anti_join(airports, flights, by = c("faa" = "dest"))
#> # A tibble: 1,357 x 8
#> faa name lat lon alt tz dst tzone
#> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
#> 1 04G Lansdowne Airport 41.1 -80.6 1044 -5 A America/New_Y…
#> 2 06A Moton Field Municipal Airp… 32.5 -85.7 264 -6 A America/Chica…
#> 3 06C Schaumburg Regional 42.0 -88.1 801 -6 A America/Chica…
#> 4 06N Randall Airport 41.4 -74.4 523 -5 A America/New_Y…
#> 5 09J Jekyll Island Airport 31.1 -81.4 11 -5 A America/New_Y…
#> 6 0A9 Elizabethton Municipal Air… 36.4 -82.2 1593 -5 A America/New_Y…
#> # … with 1,351 more rows
```
### Exercise 13\.5\.6
You might expect that there’s an implicit relationship between plane and airline, because each plane is flown by a single airline.
Confirm or reject this hypothesis using the tools you’ve learned above.
At each point in time, each plane is flown by a single airline.
However, a plane can be sold and fly for multiple airlines.
Logically, it is possible that a plane can fly for multiple airlines over the course of its lifetime.
But, it is not necessarily the case that a plane will fly for more than one airline in this data, especially since it comprises only a year of data.
So let’s check to see if there are any planes in the data flew for multiple airlines.
First, find all distinct airline, plane combinations.
```
planes_carriers <-
flights %>%
filter(!is.na(tailnum)) %>%
distinct(tailnum, carrier)
```
The number of planes that have flown for more than one airline are those `tailnum` that appear more than once in the `planes_carriers` data.
```
planes_carriers %>%
count(tailnum) %>%
filter(n > 1) %>%
nrow()
#> [1] 17
```
The names of airlines are easier to understand than the two\-letter carrier codes.
The `airlines` data frame contains the names of the airlines.
```
carrier_transfer_tbl <- planes_carriers %>%
# keep only planes which have flown for more than one airline
group_by(tailnum) %>%
filter(n() > 1) %>%
# join with airlines to get airline names
left_join(airlines, by = "carrier") %>%
arrange(tailnum, carrier)
carrier_transfer_tbl
#> # A tibble: 34 x 3
#> # Groups: tailnum [17]
#> carrier tailnum name
#> <chr> <chr> <chr>
#> 1 9E N146PQ Endeavor Air Inc.
#> 2 EV N146PQ ExpressJet Airlines Inc.
#> 3 9E N153PQ Endeavor Air Inc.
#> 4 EV N153PQ ExpressJet Airlines Inc.
#> 5 9E N176PQ Endeavor Air Inc.
#> 6 EV N176PQ ExpressJet Airlines Inc.
#> # … with 28 more rows
```
13\.6 Join problems
-------------------
No exercises
13\.7 Set operations
--------------------
No exercises
13\.1 Introduction
------------------
The datamodelr package is used to draw database schema.
```
library("tidyverse")
library("nycflights13")
library("viridis")
library("datamodelr")
```
13\.2 nycflights13
------------------
### Exercise 13\.2\.1
Imagine you wanted to draw (approximately) the route each plane flies from its origin to its destination.
What variables would you need?
What tables would you need to combine?
Drawing the routes requires the latitude and longitude of the origin and the destination airports of each flight.
This requires the `flights` and `airports` tables.
The `flights` table has the origin (`origin`) and destination (`dest`) airport of each flight.
The `airports` table has the longitude (`lon`) and latitude (`lat`) of each airport.
To get the latitude and longitude for the origin and destination of each flight,
requires two joins for `flights` to `airports`,
once for the latitude and longitude of the origin airport,
and once for the latitude and longitude of the destination airport.
I use an inner join in order to drop any flights with missing airports since they will not have a longitude or latitude.
```
flights_latlon <- flights %>%
inner_join(select(airports, origin = faa, origin_lat = lat, origin_lon = lon),
by = "origin"
) %>%
inner_join(select(airports, dest = faa, dest_lat = lat, dest_lon = lon),
by = "dest"
)
```
This plots the approximate flight paths of the first 100 flights in the `flights` dataset.
```
flights_latlon %>%
slice(1:100) %>%
ggplot(aes(
x = origin_lon, xend = dest_lon,
y = origin_lat, yend = dest_lat
)) +
borders("state") +
geom_segment(arrow = arrow(length = unit(0.1, "cm"))) +
coord_quickmap() +
labs(y = "Latitude", x = "Longitude")
```
### Exercise 13\.2\.2
I forgot to draw the relationship between `weather` and `airports`.
What is the relationship and how should it appear in the diagram?
The column `airports$faa` is a foreign key of `weather$origin`.
The following drawing updates the one in [Section 13\.2](https://r4ds.had.co.nz/relational-data.html#nycflights13-relational) to include this relation.
The line representing the new relation between `weather` and `airports` is colored black.
The lines representing the old relations are gray and thinner.
### Exercise 13\.2\.3
Weather only contains information for the origin (NYC) airports.
If it contained weather records for all airports in the USA, what additional relation would it define with `flights`?
If the weather were included for all airports in the US, then it would provide the weather for the destination of each flight.
The `flights` data frame columns (`year`, `month`, `day`, `hour`, `dest`) would then be a foreign key referencing the `weather` data frame columns (`year`, `month`, `day`, `hour`, `origin`).
This would provide information about the weather at the destination airport at the time the flight took off; the weather on arrival would require calculating the arrival date\-time as well.
So why was this not a relationship prior to adding additional rows to the `weather` table?
In a foreign key relationship, the collection of columns in the child table
must refer to a unique collection of columns in the parent table.
When the `weather` table only contained New York airports,
there were many values of (`year`, `month`, `day`, `hour`, `dest`) in `flights` that
did not appear in the `weather` table.
Therefore, it was not a foreign key. Only after
all combinations of year, month, day, hour, and airport that appear in `flights`
were added to the `weather` table would this relation between the tables exist.
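A quick way to see this in the current data is an `anti_join()` (anti joins are discussed in the [Filtering Joins](https://r4ds.had.co.nz/relational-data.html#filtering-joins) section). As a sketch, the following counts the flights whose (`year`, `month`, `day`, `hour`, `dest`) combination has no match in the existing `weather` table; as noted above, this count is large.
```
# flights whose destination/hour combination has no matching weather record,
# since the current weather table only covers the origin (NYC) airports
flights %>%
  anti_join(weather, by = c("year", "month", "day", "hour", "dest" = "origin")) %>%
  nrow()
```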
### Exercise 13\.2\.4
We know that some days of the year are “special”, and fewer people than usual fly on them.
How might you represent that data as a data frame?
What would be the primary keys of that table?
How would it connect to the existing tables?
I would add a table of special dates, similar to the following table.
```
special_days <- tribble(
~year, ~month, ~day, ~holiday,
2013, 01, 01, "New Years Day",
2013, 07, 04, "Independence Day",
2013, 11, 29, "Thanksgiving Day",
2013, 12, 25, "Christmas Day"
)
```
The primary key of the table would be the (`year`, `month`, `day`) columns.
The (`year`, `month`, `day`) columns could be used to join `special_days` with other tables.
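As a sketch of how it would connect, `special_days` can be joined to `flights` on those columns; flights on non\-holiday dates simply get an `NA` in the `holiday` column.
```
flights %>%
  left_join(special_days, by = c("year", "month", "day")) %>%
  count(holiday)
```
An `anti_join()` on the same columns would instead drop the flights on those holidays.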
13\.3 Keys
----------
### Exercise 13\.3\.1
Add a surrogate key to flights.
I add the column `flight_id` as a surrogate key.
I sort the data prior to making the key, even though it is not strictly necessary, so the order of the rows has some meaning.
```
flights %>%
arrange(year, month, day, sched_dep_time, carrier, flight) %>%
mutate(flight_id = row_number()) %>%
glimpse()
#> Rows: 336,776
#> Columns: 20
#> $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013, …
#> $ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
#> $ day <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
#> $ dep_time <int> 517, 533, 542, 544, 554, 559, 558, 559, 558, 558, 557,…
#> $ sched_dep_time <int> 515, 529, 540, 545, 558, 559, 600, 600, 600, 600, 600,…
#> $ dep_delay <dbl> 2, 4, 2, -1, -4, 0, -2, -1, -2, -2, -3, NA, 1, 0, -5, …
#> $ arr_time <int> 830, 850, 923, 1004, 740, 702, 753, 941, 849, 853, 838…
#> $ sched_arr_time <int> 819, 830, 850, 1022, 728, 706, 745, 910, 851, 856, 846…
#> $ arr_delay <dbl> 11, 20, 33, -18, 12, -4, 8, 31, -2, -3, -8, NA, -6, -7…
#> $ carrier <chr> "UA", "UA", "AA", "B6", "UA", "B6", "AA", "AA", "B6", …
#> $ flight <int> 1545, 1714, 1141, 725, 1696, 1806, 301, 707, 49, 71, 7…
#> $ tailnum <chr> "N14228", "N24211", "N619AA", "N804JB", "N39463", "N70…
#> $ origin <chr> "EWR", "LGA", "JFK", "JFK", "EWR", "JFK", "LGA", "LGA"…
#> $ dest <chr> "IAH", "IAH", "MIA", "BQN", "ORD", "BOS", "ORD", "DFW"…
#> $ air_time <dbl> 227, 227, 160, 183, 150, 44, 138, 257, 149, 158, 140, …
#> $ distance <dbl> 1400, 1416, 1089, 1576, 719, 187, 733, 1389, 1028, 100…
#> $ hour <dbl> 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, …
#> $ minute <dbl> 15, 29, 40, 45, 58, 59, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
#> $ time_hour <dttm> 2013-01-01 05:00:00, 2013-01-01 05:00:00, 2013-01-01 …
#> $ flight_id <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,…
```
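The code above only prints the result. As a sketch, the keyed data can be stored (under the hypothetical name `flights_keyed`) and the new column checked with the same count\-and\-filter pattern used in the next exercise.
```
flights_keyed <- flights %>%
  arrange(year, month, day, sched_dep_time, carrier, flight) %>%
  mutate(flight_id = row_number())
# a surrogate key must identify each row uniquely, so this should return 0
flights_keyed %>%
  count(flight_id) %>%
  filter(n > 1) %>%
  nrow()
```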
### Exercise 13\.3\.2
Identify the keys in the following datasets
1. `Lahman::Batting`
2. `babynames::babynames`
3. `nasaweather::atmos`
4. `fueleconomy::vehicles`
5. `ggplot2::diamonds`
(You might need to install some packages and read some documentation.)
The answer to each part follows.
1. The primary key for `Lahman::Batting` is (`playerID`, `yearID`, `stint`).
The columns (`playerID`, `yearID`) are not a primary key because players can play on different teams within the same year.
```
Lahman::Batting %>%
count(playerID, yearID, stint) %>%
filter(n > 1) %>%
nrow()
#> [1] 0
```
2. The primary key for `babynames::babynames` is (`year`, `sex`, `name`).
The columns (`year`, `name`) are not a primary key since there are separate counts for each name for each sex, and the same names can be used by more than one sex.
```
babynames::babynames %>%
  # babynames already contains a column named `n`, so count rows with `wt = 1`
  # rather than letting count() sum that column
  count(year, sex, name, wt = 1) %>%
  filter(n > 1) %>%
  nrow()
#> [1] 0
```
3. The primary key for `nasaweather::atmos` is (`lat`, `long`, `year`, `month`).
The primary key represents the location and time that the measurement was taken.
```
nasaweather::atmos %>%
count(lat, long, year, month) %>%
filter(n > 1) %>%
nrow()
#> [1] 0
```
4. The column `id`, the unique EPA identifier of the vehicle, is the primary key for `fueleconomy::vehicles`.
```
fueleconomy::vehicles %>%
count(id) %>%
filter(n > 1) %>%
nrow()
#> [1] 0
```
5. There is no primary key for `ggplot2::diamonds` since there is no combination of variables that uniquely identifies each observation.
This is implied by the fact that the number of distinct rows in the dataset is less than the total number of rows, meaning that there are some duplicate rows.
```
ggplot2::diamonds %>%
distinct() %>%
nrow()
#> [1] 53794
nrow(ggplot2::diamonds)
#> [1] 53940
```
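To inspect the duplicated rows themselves, one option (a sketch that groups on every column) is the following.
```
ggplot2::diamonds %>%
  # group on all columns so identical rows fall in the same group
  group_by(across(everything())) %>%
  filter(n() > 1) %>%
  ungroup() %>%
  arrange(carat, cut, color, clarity)
```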
If we need a unique identifier for our analysis, we could add a surrogate key.
```
diamonds <- mutate(ggplot2::diamonds, id = row_number())
```
### Exercise 13\.3\.3
Draw a diagram illustrating the connections between the `Batting`, `Master`, and `Salaries` tables in the Lahman package.
Draw another diagram that shows the relationship between `Master`, `Managers`, `AwardsManagers`.
How would you characterize the relationship between the `Batting`, `Pitching`, and `Fielding` tables?
For the `Batting`, `Master`, and `Salaries` tables:
* `Master`
+ Primary key: `playerID`
* `Batting`
+ Primary key: `playerID`, `yearID`, `stint`
+ Foreign keys:
- `playerID` \= `Master$playerID` (many\-to\-1\)
* `Salaries`
+ Primary key: `yearID`, `teamID`, `playerID`
+ Foreign keys:
- `playerID` \= `Master$playerID` (many\-to\-1\)
The columns `teamID` and `lgID` are not foreign keys even though they appear in multiple tables (with the same meaning) because they are not primary keys for any of the tables considered in this exercise.
The `teamID` variable references `Teams$teamID`, and `lgID` does not have its own table.
*R for Data Science* uses database schema diagrams to illustrate relations between the tables.
Most flowchart or diagramming software, as well as some specialized database software, can be used to create database schema diagrams.
The diagrams in *R for Data Science* were created with [OmniGraffle](https://www.gliffy.com/), and their sources can be found in its [GitHub repository](https://github.com/hadley/r4ds/tree/master/diagrams).
The following diagram was created with OmniGraffle in the same style as those
in *R for Data Science*.
It shows the relations between the `Master`, `Batting`, and `Salaries` tables.
Another option to draw database schema diagrams is the R package [datamodelr](https://github.com/bergant/datamodelr), which can programmatically create database schema diagrams.
The following code uses datamodelr to draw a diagram of the relations between the `Batting`, `Master`, and `Salaries` tables.
```
dm1 <- dm_from_data_frames(list(
Batting = Lahman::Batting,
Master = Lahman::Master,
Salaries = Lahman::Salaries
)) %>%
dm_set_key("Batting", c("playerID", "yearID", "stint")) %>%
dm_set_key("Master", "playerID") %>%
dm_set_key("Salaries", c("yearID", "teamID", "playerID")) %>%
dm_add_references(
Batting$playerID == Master$playerID,
Salaries$playerID == Master$playerID
)
dm_create_graph(dm1, rankdir = "LR", columnArrows = TRUE) %>%
dm_render_graph()
```
For the `Master`, `Managers`, and `AwardsManagers` tables:
* `Master`
+ Primary key: `playerID`
* `Managers`
+ Primary key: `yearID`, `teamID`, `inseason`
+ Foreign keys:
- `playerID` references `Master$playerID` (many\-to\-1\)
* `AwardsManagers`:
+ Primary key: `playerID`, `awardID`, `yearID`
+ Foreign keys:
- `playerID` references `Master$playerID` (many\-to\-1\)
For `AwardsManagers`, the columns (`awardID`, `yearID`, `lgID`) are not a primary
key because there can be, and have been, ties, as indicated by the `tie` variable.
The relations between the `Master`, `Managers`, and `AwardsManagers` tables
are shown in the following two diagrams: the first created manually with OmniGraffle,
and the second programmatically in R with the datamodelr package.
```
dm2 <- dm_from_data_frames(list(
Master = Lahman::Master,
Managers = Lahman::Managers,
AwardsManagers = Lahman::AwardsManagers
)) %>%
dm_set_key("Master", "playerID") %>%
dm_set_key("Managers", c("yearID", "teamID", "inseason")) %>%
dm_set_key("AwardsManagers", c("playerID", "awardID", "yearID")) %>%
dm_add_references(
Managers$playerID == Master$playerID,
AwardsManagers$playerID == Master$playerID
)
dm_create_graph(dm2, rankdir = "LR", columnArrows = TRUE) %>%
dm_render_graph()
```
The primary keys of `Batting`, `Pitching`, and `Fielding` are the following:
* `Batting`: (`playerID`, `yearID`, `stint`)
* `Pitching`: (`playerID`, `yearID`, `stint`)
* `Fielding`: (`playerID`, `yearID`, `stint`, `POS`).
While `Batting` and `Pitching` have one row per player, year, and stint, the `Fielding`
table has additional rows for each position (`POS`) a player played within a stint.
Since `Batting`, `Pitching`, and `Fielding` all share the `playerID`, `yearID`, and `stint` columns,
we would expect some foreign key relations between these tables.
The columns (`playerID`, `yearID`, `stint`) in `Pitching` are a foreign key which
references the same columns in `Batting`. We can verify this by checking that
all observed combinations of values of these columns appearing in `Pitching`
also appear in `Batting`. To do this I use an anti\-join, which is discussed
in the section [Filtering Joins](https://r4ds.had.co.nz/relational-data.html#filtering-joins).
```
nrow(anti_join(Lahman::Pitching, Lahman::Batting,
by = c("playerID", "yearID", "stint")
))
#> [1] 0
```
Similarly, the columns (`playerID`, `yearID`, `stint`) in `Fielding` are a foreign key which references the same columns in `Batting`.
```
nrow(anti_join(Lahman::Fielding, Lahman::Batting,
by = c("playerID", "yearID", "stint")
))
#> [1] 0
```
The following diagram shows the relations between the `Batting`, `Pitching`, and
`Fielding` tables.
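As a sketch, the same relations can be drawn programmatically with datamodelr, following the pattern of `dm1` and `dm2` above; as in those examples, the references are declared on `playerID` alone even though the foreign keys are composite.
```
dm3 <- dm_from_data_frames(list(
  Batting = Lahman::Batting,
  Pitching = Lahman::Pitching,
  Fielding = Lahman::Fielding
)) %>%
  dm_set_key("Batting", c("playerID", "yearID", "stint")) %>%
  dm_set_key("Pitching", c("playerID", "yearID", "stint")) %>%
  dm_set_key("Fielding", c("playerID", "yearID", "stint", "POS")) %>%
  dm_add_references(
    Pitching$playerID == Batting$playerID,
    Fielding$playerID == Batting$playerID
  )
dm_create_graph(dm3, rankdir = "LR", columnArrows = TRUE) %>%
  dm_render_graph()
```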
13\.4 Mutating joins
--------------------
```
flights2 <- flights %>%
select(year:day, hour, origin, dest, tailnum, carrier)
```
### Exercise 13\.4\.1
Compute the average delay by destination, then join on the `airports` data frame so you can show the spatial distribution of delays. Here’s an easy way to draw a map of the United States:
```
airports %>%
semi_join(flights, c("faa" = "dest")) %>%
ggplot(aes(lon, lat)) +
borders("state") +
geom_point() +
coord_quickmap()
```
(Don’t worry if you don’t understand what `semi_join()` does — you’ll learn about it next.)
You might want to use the size or color of the points to display the average delay for each airport.
```
avg_dest_delays <-
flights %>%
group_by(dest) %>%
# arrival delay NA's are cancelled flights
summarise(delay = mean(arr_delay, na.rm = TRUE)) %>%
inner_join(airports, by = c(dest = "faa"))
#> `summarise()` ungrouping output (override with `.groups` argument)
```
```
avg_dest_delays %>%
ggplot(aes(lon, lat, colour = delay)) +
borders("state") +
geom_point() +
coord_quickmap()
```
### Exercise 13\.4\.2
Add the location of the origin and destination (i.e. the `lat` and `lon`) to `flights`.
You can perform one join after another. If duplicate variables are found, by default, dplyr will distinguish the two by adding `.x` and `.y` to the ends of the variable names to solve naming conflicts.
```
airport_locations <- airports %>%
select(faa, lat, lon)
flights %>%
select(year:day, hour, origin, dest) %>%
left_join(
airport_locations,
by = c("origin" = "faa")
) %>%
left_join(
airport_locations,
by = c("dest" = "faa")
)
#> # A tibble: 336,776 x 10
#> year month day hour origin dest lat.x lon.x lat.y lon.y
#> <int> <int> <int> <dbl> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 2013 1 1 5 EWR IAH 40.7 -74.2 30.0 -95.3
#> 2 2013 1 1 5 LGA IAH 40.8 -73.9 30.0 -95.3
#> 3 2013 1 1 5 JFK MIA 40.6 -73.8 25.8 -80.3
#> 4 2013 1 1 5 JFK BQN 40.6 -73.8 NA NA
#> 5 2013 1 1 6 LGA ATL 40.8 -73.9 33.6 -84.4
#> 6 2013 1 1 5 EWR ORD 40.7 -74.2 42.0 -87.9
#> # … with 336,770 more rows
```
The `suffix` argument overrides this default behavior.
Since it is always good practice to have clear variable names, I will use the
suffixes `"_origin"` and `"_dest"` to specify whether the column refers to
the origin or destination airport.
```
airport_locations <- airports %>%
select(faa, lat, lon)
flights %>%
select(year:day, hour, origin, dest) %>%
left_join(
airport_locations,
by = c("origin" = "faa")
) %>%
left_join(
airport_locations,
by = c("dest" = "faa"),
suffix = c("_origin", "_dest")
# existing lat and lon variables in tibble gain the _origin suffix
# new lat and lon variables are given _dest suffix
)
#> # A tibble: 336,776 x 10
#> year month day hour origin dest lat_origin lon_origin lat_dest lon_dest
#> <int> <int> <int> <dbl> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 2013 1 1 5 EWR IAH 40.7 -74.2 30.0 -95.3
#> 2 2013 1 1 5 LGA IAH 40.8 -73.9 30.0 -95.3
#> 3 2013 1 1 5 JFK MIA 40.6 -73.8 25.8 -80.3
#> 4 2013 1 1 5 JFK BQN 40.6 -73.8 NA NA
#> 5 2013 1 1 6 LGA ATL 40.8 -73.9 33.6 -84.4
#> 6 2013 1 1 5 EWR ORD 40.7 -74.2 42.0 -87.9
#> # … with 336,770 more rows
```
### Exercise 13\.4\.3
Is there a relationship between the age of a plane and its delays?
The question does not specify whether the relationship is with departure delay
or arrival delay.
I will look at both.
To compare the age of a plane to its flight delays, I merge `flights` with `planes`, which records the year each plane was built; I rename that column to `plane_year` to avoid a clash with the flight’s `year`.
To look at the relationship between plane age and delay, I calculate the average arrival and departure delay for each plane age.
Since there are few planes older than 25 years, I truncate `age` at 25 years.
```
plane_cohorts <- inner_join(flights,
select(planes, tailnum, plane_year = year),
by = "tailnum"
) %>%
mutate(age = year - plane_year) %>%
filter(!is.na(age)) %>%
mutate(age = if_else(age > 25, 25L, age)) %>%
group_by(age) %>%
summarise(
dep_delay_mean = mean(dep_delay, na.rm = TRUE),
dep_delay_sd = sd(dep_delay, na.rm = TRUE),
arr_delay_mean = mean(arr_delay, na.rm = TRUE),
arr_delay_sd = sd(arr_delay, na.rm = TRUE),
n_arr_delay = sum(!is.na(arr_delay)),
n_dep_delay = sum(!is.na(dep_delay))
)
#> `summarise()` ungrouping output (override with `.groups` argument)
```
I will look for a relationship between departure delay and age by plotting age against the average departure delay.
The average departure delay increases with plane age up to about 10 years. After that, the departure delay decreases or levels off.
The decrease in departure delay could be because older planes with many mechanical issues are removed from service, or because airlines schedule these planes with enough time so that mechanical issues do not delay them.
```
ggplot(plane_cohorts, aes(x = age, y = dep_delay_mean)) +
geom_point() +
scale_x_continuous("Age of plane (years)", breaks = seq(0, 30, by = 10)) +
scale_y_continuous("Mean Departure Delay (minutes)")
```
There is a similar relationship for arrival delays.
Delays increase with the age of the plane until about ten years, then they decline and flatten out.
```
ggplot(plane_cohorts, aes(x = age, y = arr_delay_mean)) +
geom_point() +
scale_x_continuous("Age of Plane (years)", breaks = seq(0, 30, by = 10)) +
scale_y_continuous("Mean Arrival Delay (minutes)")
```
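Since `plane_cohorts` also contains the standard deviations and counts computed above, a rough sketch of the uncertainty around each mean can be added with `geom_pointrange()`, using plus or minus two standard errors (treating each mean as approximately normal).
```
plane_cohorts %>%
  mutate(
    dep_delay_se = dep_delay_sd / sqrt(n_dep_delay),
    lower = dep_delay_mean - 2 * dep_delay_se,
    upper = dep_delay_mean + 2 * dep_delay_se
  ) %>%
  ggplot(aes(x = age, y = dep_delay_mean, ymin = lower, ymax = upper)) +
  geom_pointrange() +
  scale_x_continuous("Age of plane (years)", breaks = seq(0, 30, by = 10)) +
  scale_y_continuous("Mean Departure Delay (minutes)")
```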
### Exercise 13\.4\.4
What weather conditions make it more likely to see a delay?
Almost any amount of precipitation is associated with a delay.
However, there is not a strong trend above 0\.02 in. of precipitation.
```
flight_weather <-
flights %>%
inner_join(weather, by = c(
"origin" = "origin",
"year" = "year",
"month" = "month",
"day" = "day",
"hour" = "hour"
))
```
```
flight_weather %>%
group_by(precip) %>%
summarise(delay = mean(dep_delay, na.rm = TRUE)) %>%
ggplot(aes(x = precip, y = delay)) +
geom_line() + geom_point()
#> `summarise()` ungrouping output (override with `.groups` argument)
```
There seems to be a stronger relationship between visibility and delay.
Delays are higher when visibility is less than 2 miles.
```
flight_weather %>%
ungroup() %>%
mutate(visib_cat = cut_interval(visib, n = 10)) %>%
group_by(visib_cat) %>%
summarise(dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
ggplot(aes(x = visib_cat, y = dep_delay)) +
geom_point()
#> `summarise()` ungrouping output (override with `.groups` argument)
```
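The same binning approach can be applied to wind speed; the following is only a sketch, reusing the `cut_interval()` pattern from above.
```
flight_weather %>%
  filter(!is.na(wind_speed)) %>%
  mutate(wind_speed_cat = cut_interval(wind_speed, n = 10)) %>%
  group_by(wind_speed_cat) %>%
  summarise(dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
  ggplot(aes(x = wind_speed_cat, y = dep_delay)) +
  geom_point()
```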
### Exercise 13\.4\.5
What happened on June 13, 2013?
Display the spatial pattern of delays, and then use Google to cross\-reference with the weather.
There was a large series of storms (derechos) in the southeastern US (see [June 12\-13, 2013 derecho series](https://en.wikipedia.org/wiki/June_12%E2%80%9313,_2013_derecho_series)).
The following plot shows that the largest delays were in Tennessee (Nashville), the Southeast, and the Midwest, which were the locations of the derechos.
```
flights %>%
filter(year == 2013, month == 6, day == 13) %>%
group_by(dest) %>%
summarise(delay = mean(arr_delay, na.rm = TRUE)) %>%
inner_join(airports, by = c("dest" = "faa")) %>%
ggplot(aes(y = lat, x = lon, size = delay, colour = delay)) +
borders("state") +
geom_point() +
coord_quickmap() +
scale_colour_viridis()
#> `summarise()` ungrouping output (override with `.groups` argument)
#> Warning: Removed 3 rows containing missing values (geom_point).
```
13\.5 Filtering joins
---------------------
### Exercise 13\.5\.1
What does it mean for a flight to have a missing `tailnum`?
What do the tail numbers that don’t have a matching record in planes have in common?
(Hint: one variable explains \~90% of the problems.)
Flights that have a missing `tailnum` all have missing values of `arr_time`, meaning that the flight was canceled.
```
flights %>%
filter(is.na(tailnum), !is.na(arr_time)) %>%
nrow()
#> [1] 0
```
Many of the tail numbers that don’t have a matching value in `planes` are
registered to American Airlines (AA) or Envoy Air (MQ).
The documentation for `planes` states
> American Airways (AA) and Envoy Air (MQ) report fleet numbers rather than tail numbers so can’t be matched.
```
flights %>%
anti_join(planes, by = "tailnum") %>%
count(carrier, sort = TRUE) %>%
mutate(p = n / sum(n))
#> # A tibble: 10 x 3
#> carrier n p
#> <chr> <int> <dbl>
#> 1 MQ 25397 0.483
#> 2 AA 22558 0.429
#> 3 UA 1693 0.0322
#> 4 9E 1044 0.0198
#> 5 B6 830 0.0158
#> 6 US 699 0.0133
#> # … with 4 more rows
```
However, not all tail numbers appearing in `flights` from these carriers are missing from the `planes` table. I don’t know how to reconcile this discrepancy.
```
flights %>%
distinct(carrier, tailnum) %>%
left_join(planes, by = "tailnum") %>%
group_by(carrier) %>%
summarise(total_planes = n(),
not_in_planes = sum(is.na(model))) %>%
mutate(missing_pct = not_in_planes / total_planes) %>%
arrange(desc(missing_pct))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 16 x 4
#> carrier total_planes not_in_planes missing_pct
#> <chr> <int> <int> <dbl>
#> 1 MQ 238 234 0.983
#> 2 AA 601 430 0.715
#> 3 F9 26 3 0.115
#> 4 FL 129 12 0.0930
#> 5 UA 621 23 0.0370
#> 6 US 290 9 0.0310
#> # … with 10 more rows
```
### Exercise 13\.5\.2
Filter flights to only show flights with planes that have flown at least 100 flights.
First, I find all planes that have flown at least 100 flights.
I need to filter out flights that are missing a tail number; otherwise, all flights missing a tail number would be treated as a single plane.
```
planes_gte100 <- flights %>%
filter(!is.na(tailnum)) %>%
group_by(tailnum) %>%
count() %>%
filter(n >= 100)
```
Now, I will semi join the data frame of planes that have flown at least 100 flights to the data frame of flights to select the flights by those planes.
```
flights %>%
semi_join(planes_gte100, by = "tailnum")
#> # A tibble: 228,390 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 544 545 -1 1004 1022
#> 4 2013 1 1 554 558 -4 740 728
#> 5 2013 1 1 555 600 -5 913 854
#> 6 2013 1 1 557 600 -3 709 723
#> # … with 228,384 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
This can also be answered with a grouped mutate.
```
flights %>%
filter(!is.na(tailnum)) %>%
group_by(tailnum) %>%
mutate(n = n()) %>%
filter(n >= 100)
#> # A tibble: 228,390 x 20
#> # Groups: tailnum [1,217]
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 544 545 -1 1004 1022
#> 4 2013 1 1 554 558 -4 740 728
#> 5 2013 1 1 555 600 -5 913 854
#> 6 2013 1 1 557 600 -3 709 723
#> # … with 228,384 more rows, and 12 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>,
#> # n <int>
```
### Exercise 13\.5\.3
Combine `fueleconomy::vehicles` and `fueleconomy::common` to find only the records for the most common models.
```
fueleconomy::vehicles %>%
semi_join(fueleconomy::common, by = c("make", "model"))
#> # A tibble: 14,531 x 12
#> id make model year class trans drive cyl displ fuel hwy cty
#> <dbl> <chr> <chr> <dbl> <chr> <chr> <chr> <dbl> <dbl> <chr> <dbl> <dbl>
#> 1 1833 Acura Integ… 1986 Subcom… Automa… Front-… 4 1.6 Regu… 28 22
#> 2 1834 Acura Integ… 1986 Subcom… Manual… Front-… 4 1.6 Regu… 28 23
#> 3 3037 Acura Integ… 1987 Subcom… Automa… Front-… 4 1.6 Regu… 28 22
#> 4 3038 Acura Integ… 1987 Subcom… Manual… Front-… 4 1.6 Regu… 28 23
#> 5 4183 Acura Integ… 1988 Subcom… Automa… Front-… 4 1.6 Regu… 27 22
#> 6 4184 Acura Integ… 1988 Subcom… Manual… Front-… 4 1.6 Regu… 28 23
#> # … with 14,525 more rows
```
Why does the above code join on `make` and `model` and not just `model`?
It is possible for two car brands (`make`) to produce a car with the same name (`model`).
In both the `vehicles` and `common` data we can find some examples.
For example, “Truck 4WD” is produced by many different brands.
```
fueleconomy::vehicles %>%
distinct(model, make) %>%
group_by(model) %>%
filter(n() > 1) %>%
arrange(model)
#> # A tibble: 126 x 2
#> # Groups: model [60]
#> make model
#> <chr> <chr>
#> 1 Audi 200
#> 2 Chrysler 200
#> 3 Mcevoy Motors 240 DL/240 GL Wagon
#> 4 Volvo 240 DL/240 GL Wagon
#> 5 Lambda Control Systems 300E
#> 6 Mercedes-Benz 300E
#> # … with 120 more rows
```
```
fueleconomy::common %>%
distinct(model, make) %>%
group_by(model) %>%
filter(n() > 1) %>%
arrange(model)
#> # A tibble: 8 x 2
#> # Groups: model [3]
#> make model
#> <chr> <chr>
#> 1 Dodge Colt
#> 2 Plymouth Colt
#> 3 Mitsubishi Truck 2WD
#> 4 Nissan Truck 2WD
#> 5 Toyota Truck 2WD
#> 6 Mitsubishi Truck 4WD
#> # … with 2 more rows
```
If we were to merge these data on the `model` column alone, there would be incorrect matches.
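To see what would go wrong, here is a sketch of the join on `model` alone: with an inner join and explicit suffixes, the rows where the two `make` columns disagree are exactly the incorrect matches.
```
inner_join(fueleconomy::vehicles, fueleconomy::common,
  by = "model", suffix = c("_vehicles", "_common")
) %>%
  filter(make_vehicles != make_common) %>%
  distinct(model, make_vehicles, make_common)
```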
### Exercise 13\.5\.4
Find the 48 hours (over the course of the whole year) that have the worst delays.
Cross\-reference it with the weather data.
Can you see any patterns?
I will start by clarifying how I will be measuring the concepts in the question.
There are four concepts that need to be defined more precisely.
1. What is meant by “delay”?
I will use departure delay, since the `weather` data only contains data for the New York City airports,
and departure delays will be more sensitive to New York City weather conditions than arrival delays.
2. What is meant by “worst”? I define worst delay as the average departure delay per flight for flights *scheduled* to depart in that hour.
For hour, I will use the scheduled departure time rather than the actual departure time.
If planes are delayed due to weather conditions, the weather conditions during the scheduled time are more important than the actual departure time, at which point, the weather could have improved.
3. What is meant by “48 hours over the course of the year”? This could mean two days, a span of 48 contiguous hours,
or 48 hours that are not necessarily contiguous.
I will find 48 not\-necessarily contiguous hours.
That definition makes better use of the methods introduced in this section and chapter.
4. What is the unit of analysis? Although the question mentions only hours, I will use airport hours.
The weather dataset has an observation for each airport for each hour.
Since all the departure airports are in the vicinity of New York City, their weather should be similar, but it will not be identical.
First, I need to find the 48 hours with the worst delays.
I group flights by hour of scheduled departure time and calculate the average delay.
Then I select the 48 observations (hours) with the highest average delay.
```
worst_hours <- flights %>%
mutate(hour = sched_dep_time %/% 100) %>%
group_by(origin, year, month, day, hour) %>%
summarise(dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
ungroup() %>%
arrange(desc(dep_delay)) %>%
slice(1:48)
#> `summarise()` regrouping output by 'origin', 'year', 'month', 'day' (override with `.groups` argument)
```
Then I can use `semi_join()` to get the weather for these hours.
```
weather_most_delayed <- semi_join(weather, worst_hours,
by = c("origin", "year",
"month", "day", "hour"))
```
For weather, I’ll focus on precipitation, wind speed, and temperature.
I will display these in both a table and a plot.
Many of these observations have a higher than average wind speed (10 mph) or some precipitation.
However, I would have expected the weather for the hours with the worst delays to be much worse.
```
select(weather_most_delayed, temp, wind_speed, precip) %>%
print(n = 48)
#> # A tibble: 48 x 3
#> temp wind_speed precip
#> <dbl> <dbl> <dbl>
#> 1 27.0 13.8 0
#> 2 28.0 19.6 0
#> 3 28.9 28.8 0
#> 4 33.8 9.21 0.06
#> 5 34.0 8.06 0.05
#> 6 80.1 8.06 0
#> 7 86 13.8 0
#> 8 73.4 6.90 0.08
#> 9 84.0 5.75 0
#> 10 78.8 18.4 0.23
#> 11 53.6 0 0
#> 12 60.8 31.1 0.11
#> 13 55.4 17.3 0.14
#> 14 53.1 9.21 0.01
#> 15 55.9 11.5 0.1
#> 16 55.4 8.06 0.15
#> 17 57.0 29.9 0
#> 18 33.8 20.7 0.02
#> 19 34.0 19.6 0.01
#> 20 36.0 21.9 0.01
#> 21 37.9 16.1 0
#> 22 32 13.8 0.12
#> 23 60.1 33.4 0.14
#> 24 60.8 11.5 0.02
#> 25 62.1 17.3 0
#> 26 66.9 10.4 0
#> 27 66.9 13.8 0
#> 28 79.0 10.4 0
#> 29 77 16.1 0.07
#> 30 75.9 13.8 0
#> 31 82.4 8.06 0
#> 32 86 9.21 0
#> 33 80.1 9.21 0
#> 34 80.6 11.5 0
#> 35 78.1 6.90 0
#> 36 75.2 10.4 0.01
#> 37 73.9 5.75 0.03
#> 38 73.9 8.06 0
#> 39 75.0 4.60 0
#> 40 75.0 4.60 0.01
#> 41 80.1 0 0.01
#> 42 80.1 0 0
#> 43 77 10.4 0
#> 44 82.0 10.4 0
#> 45 72.0 13.8 0.3
#> 46 72.0 4.60 0.03
#> 47 51.1 4.60 0
#> 48 54.0 6.90 0
```
```
ggplot(weather_most_delayed, aes(x = precip, y = wind_speed, color = temp)) +
geom_point()
```
It’s hard to say much more than that without using the tools from the [Exploratory Data Analysis](https://r4ds.had.co.nz/exploratory-data-analysis.html#covariation) section
to look for covariation between weather and flight delays using all flights.
Implicitly in my informal analysis of trends in weather using only the 48 hours with the worst delays, I was comparing the weather in these hours to some belief I had about what constitutes “normal” or “good” weather.
It would be better to actually use data to make that comparison.
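As a rough sketch of that comparison, the average conditions in these 48 airport\-hours can be stacked against the averages over all hours in `weather`.
```
bind_rows(
  mutate(weather_most_delayed, group = "worst 48 hours"),
  mutate(weather, group = "all hours")
) %>%
  group_by(group) %>%
  summarise(
    temp = mean(temp, na.rm = TRUE),
    wind_speed = mean(wind_speed, na.rm = TRUE),
    precip = mean(precip, na.rm = TRUE)
  )
```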
### Exercise 13\.5\.5
What does `anti_join(flights, airports, by = c("dest" = "faa"))` tell you?
What does `anti_join(airports, flights, by = c("faa" = "dest"))` tell you?
The expression `anti_join(flights, airports, by = c("dest" = "faa"))` returns the flights that went to an airport that is not in the FAA list of destinations.
Since the FAA list only contains domestic airports, these are likely foreign flights.
However, running that expression shows that there are only four airports in this list.
```
anti_join(flights, airports, by = c("dest" = "faa")) %>%
distinct(dest)
#> # A tibble: 4 x 1
#> dest
#> <chr>
#> 1 BQN
#> 2 SJU
#> 3 STT
#> 4 PSE
```
In this set of four airports three are in Puerto Rico ([BQN](https://en.wikipedia.org/wiki/Rafael_Hern%C3%A1ndez_Airport), [SJU](https://en.wikipedia.org/wiki/Luis_Mu%C3%B1oz_Mar%C3%ADn_International_Airport), and [PSE](https://en.wikipedia.org/wiki/Mercedita_International_Airport)) and one is in the US Virgin Islands ( [STT](https://en.wikipedia.org/wiki/Cyril_E._King_Airport)).
The reason for this discrepancy is that the `flights` and `airports` tables are derived from different sources.
The `flights` data comes from the US Department of Transportation [Bureau of Transportation Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236), while the airport metadata comes from [openflights.org](https://openflights.org/data.html).
The BTS includes Puerto Rico and the U.S. Virgin Islands as “domestic” (part of the US), while the [openflights.org](https://raw.githubusercontent.com/jpatokal/openflights/master/data/airports.dat) data gives different values of `country` for airports in the US states (`"United States"`), Puerto Rico (`"Puerto Rico"`), and the US Virgin Islands (`"Virgin Islands"`).
The expression `anti_join(airports, flights, by = c("faa" = "dest"))` returns the US airports that were not the destination of any flight in the data.
Since the data contains all flights from New York City airports, this is also the list of US airports that did not have a nonstop flight from New York City in 2013\.
```
anti_join(airports, flights, by = c("faa" = "dest"))
#> # A tibble: 1,357 x 8
#> faa name lat lon alt tz dst tzone
#> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
#> 1 04G Lansdowne Airport 41.1 -80.6 1044 -5 A America/New_Y…
#> 2 06A Moton Field Municipal Airp… 32.5 -85.7 264 -6 A America/Chica…
#> 3 06C Schaumburg Regional 42.0 -88.1 801 -6 A America/Chica…
#> 4 06N Randall Airport 41.4 -74.4 523 -5 A America/New_Y…
#> 5 09J Jekyll Island Airport 31.1 -81.4 11 -5 A America/New_Y…
#> 6 0A9 Elizabethton Municipal Air… 36.4 -82.2 1593 -5 A America/New_Y…
#> # … with 1,351 more rows
```
### Exercise 13\.5\.6
You might expect that there’s an implicit relationship between plane and airline, because each plane is flown by a single airline.
Confirm or reject this hypothesis using the tools you’ve learned above.
At each point in time, each plane is flown by a single airline.
However, a plane can be sold and fly for multiple airlines.
Logically, it is possible for a plane to fly for multiple airlines over the course of its lifetime.
But it is not necessarily the case that a plane will fly for more than one airline in this data, especially since it comprises only a year of data.
So let’s check whether any planes in the data flew for multiple airlines.
First, find all distinct airline (`carrier`) and plane (`tailnum`) combinations.
```
planes_carriers <-
flights %>%
filter(!is.na(tailnum)) %>%
distinct(tailnum, carrier)
```
The planes that have flown for more than one airline are those whose `tailnum` appears more than once in the `planes_carriers` data.
```
planes_carriers %>%
count(tailnum) %>%
filter(n > 1) %>%
nrow()
#> [1] 17
```
The names of airlines are easier to understand than the two\-letter carrier codes.
The `airlines` data frame contains the names of the airlines.
```
carrier_transfer_tbl <- planes_carriers %>%
# keep only planes which have flown for more than one airline
group_by(tailnum) %>%
filter(n() > 1) %>%
# join with airlines to get airline names
left_join(airlines, by = "carrier") %>%
arrange(tailnum, carrier)
carrier_transfer_tbl
#> # A tibble: 34 x 3
#> # Groups: tailnum [17]
#> carrier tailnum name
#> <chr> <chr> <chr>
#> 1 9E N146PQ Endeavor Air Inc.
#> 2 EV N146PQ ExpressJet Airlines Inc.
#> 3 9E N153PQ Endeavor Air Inc.
#> 4 EV N153PQ ExpressJet Airlines Inc.
#> 5 9E N176PQ Endeavor Air Inc.
#> 6 EV N176PQ ExpressJet Airlines Inc.
#> # … with 28 more rows
```
### Exercise 13\.5\.1
What does it mean for a flight to have a missing `tailnum`?
What do the tail numbers that don’t have a matching record in planes have in common?
(Hint: one variable explains \~90% of the problems.)
Flights that have a missing `tailnum` all have missing values of `arr_time`, meaning that the flight was canceled.
```
flights %>%
filter(is.na(tailnum), !is.na(arr_time)) %>%
nrow()
#> [1] 0
```
Many of the tail numbers that don’t have a matching value in `planes` are
registered to American Airlines (AA) or Envoy Airlines (MQ).
The documentation for `planes` states
> American Airways (AA) and Envoy Air (MQ) report fleet numbers rather than tail numbers so can’t be matched.
```
flights %>%
anti_join(planes, by = "tailnum") %>%
count(carrier, sort = TRUE) %>%
mutate(p = n / sum(n))
#> # A tibble: 10 x 3
#> carrier n p
#> <chr> <int> <dbl>
#> 1 MQ 25397 0.483
#> 2 AA 22558 0.429
#> 3 UA 1693 0.0322
#> 4 9E 1044 0.0198
#> 5 B6 830 0.0158
#> 6 US 699 0.0133
#> # … with 4 more rows
```
However, not all tail numbers appearing in`flights` from these carriers are missing from the `planes` table. I don’t know how to reconcile this discrepancy.
```
flights %>%
distinct(carrier, tailnum) %>%
left_join(planes, by = "tailnum") %>%
group_by(carrier) %>%
summarise(total_planes = n(),
not_in_planes = sum(is.na(model))) %>%
mutate(missing_pct = not_in_planes / total_planes) %>%
arrange(desc(missing_pct))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 16 x 4
#> carrier total_planes not_in_planes missing_pct
#> <chr> <int> <int> <dbl>
#> 1 MQ 238 234 0.983
#> 2 AA 601 430 0.715
#> 3 F9 26 3 0.115
#> 4 FL 129 12 0.0930
#> 5 UA 621 23 0.0370
#> 6 US 290 9 0.0310
#> # … with 10 more rows
```
### Exercise 13\.5\.2
Filter flights to only show flights with planes that have flown at least 100 flights.
First, I find all planes that have flown at least 100 flights.
I need to filter flights that are missing a tail number otherwise all flights missing a tail number will be treated as a single plane.
```
planes_gte100 <- flights %>%
filter(!is.na(tailnum)) %>%
group_by(tailnum) %>%
count() %>%
filter(n >= 100)
```
Now, I will semi join the data frame of planes that have flown at least 100 flights to the data frame of flights to select the flights by those planes.
```
flights %>%
semi_join(planes_gte100, by = "tailnum")
#> # A tibble: 228,390 x 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 544 545 -1 1004 1022
#> 4 2013 1 1 554 558 -4 740 728
#> 5 2013 1 1 555 600 -5 913 854
#> 6 2013 1 1 557 600 -3 709 723
#> # … with 228,384 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
This can also be answered with a grouped mutate.
```
flights %>%
filter(!is.na(tailnum)) %>%
group_by(tailnum) %>%
mutate(n = n()) %>%
filter(n >= 100)
#> # A tibble: 228,390 x 20
#> # Groups: tailnum [1,217]
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 533 529 4 850 830
#> 3 2013 1 1 544 545 -1 1004 1022
#> 4 2013 1 1 554 558 -4 740 728
#> 5 2013 1 1 555 600 -5 913 854
#> 6 2013 1 1 557 600 -3 709 723
#> # … with 228,384 more rows, and 12 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>,
#> # n <int>
```
### Exercise 13\.5\.3
Combine `fueleconomy::vehicles` and `fueleconomy::common` to find only the records for the most common models.
```
fueleconomy::vehicles %>%
semi_join(fueleconomy::common, by = c("make", "model"))
#> # A tibble: 14,531 x 12
#> id make model year class trans drive cyl displ fuel hwy cty
#> <dbl> <chr> <chr> <dbl> <chr> <chr> <chr> <dbl> <dbl> <chr> <dbl> <dbl>
#> 1 1833 Acura Integ… 1986 Subcom… Automa… Front-… 4 1.6 Regu… 28 22
#> 2 1834 Acura Integ… 1986 Subcom… Manual… Front-… 4 1.6 Regu… 28 23
#> 3 3037 Acura Integ… 1987 Subcom… Automa… Front-… 4 1.6 Regu… 28 22
#> 4 3038 Acura Integ… 1987 Subcom… Manual… Front-… 4 1.6 Regu… 28 23
#> 5 4183 Acura Integ… 1988 Subcom… Automa… Front-… 4 1.6 Regu… 27 22
#> 6 4184 Acura Integ… 1988 Subcom… Manual… Front-… 4 1.6 Regu… 28 23
#> # … with 14,525 more rows
```
Why does the above code join on `make` and `model` and not just `model`?
It is possible for two car brands (`make`) to produce a car with the same name (`model`).
In both the `vehicles` and `common` data we can find some examples.
For example, “Truck 4WD” is produced by many different brands.
```
fueleconomy::vehicles %>%
distinct(model, make) %>%
group_by(model) %>%
filter(n() > 1) %>%
arrange(model)
#> # A tibble: 126 x 2
#> # Groups: model [60]
#> make model
#> <chr> <chr>
#> 1 Audi 200
#> 2 Chrysler 200
#> 3 Mcevoy Motors 240 DL/240 GL Wagon
#> 4 Volvo 240 DL/240 GL Wagon
#> 5 Lambda Control Systems 300E
#> 6 Mercedes-Benz 300E
#> # … with 120 more rows
```
```
fueleconomy::common %>%
distinct(model, make) %>%
group_by(model) %>%
filter(n() > 1) %>%
arrange(model)
#> # A tibble: 8 x 2
#> # Groups: model [3]
#> make model
#> <chr> <chr>
#> 1 Dodge Colt
#> 2 Plymouth Colt
#> 3 Mitsubishi Truck 2WD
#> 4 Nissan Truck 2WD
#> 5 Toyota Truck 2WD
#> 6 Mitsubishi Truck 4WD
#> # … with 2 more rows
```
If we were to merge these data on the `model` column alone, there would be incorrect matches.
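To see the effect, compare the number of matching vehicle records when joining on `model` alone versus on both `make` and `model`; the first count is at least as large as the second because of the spurious cross-make matches described above.

```
# vehicles with a match in common on model alone vs. on make and model;
# the first count is inflated by cross-make name clashes
fueleconomy::vehicles %>%
  semi_join(fueleconomy::common, by = "model") %>%
  nrow()
fueleconomy::vehicles %>%
  semi_join(fueleconomy::common, by = c("make", "model")) %>%
  nrow()
```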
### Exercise 13\.5\.4
Find the 48 hours (over the course of the whole year) that have the worst delays.
Cross\-reference it with the weather data.
Can you see any patterns?
I will start by clarifying how I will be measuring the concepts in the question.
There are four concepts that need to be defined more precisely.
1. What is meant by “delay”?
I will use departure delay.
Since the `weather` data only contains data for the New York City airports, departure delays will be more sensitive to New York City weather conditions than arrival delays.
2. What is meant by “worst”? I define worst delay as the average departure delay per flight for flights *scheduled* to depart in that hour.
For hour, I will use the scheduled departure time rather than the actual departure time.
If planes are delayed due to weather conditions, the weather conditions during the scheduled time are more important than the actual departure time, at which point, the weather could have improved.
3. What is meant by “48 hours over the course of the year”? This could mean two days, a span of 48 contiguous hours,
or 48 hours that are not necessarily contiguous.
I will find 48 not\-necessarily contiguous hours.
That definition makes better use of the methods introduced in this section and chapter.
4. What is the unit of analysis? Although the question mentions only hours, I will use airport hours.
The weather dataset has an observation for each airport for each hour.
Since all the departure airports are in the vicinity of New York City, their weather should be similar, but it will not be identical.
First, I need to find the 48 hours with the worst delays.
I group flights by hour of scheduled departure time and calculate the average delay.
Then I select the 48 observations (hours) with the highest average delay.
```
worst_hours <- flights %>%
mutate(hour = sched_dep_time %/% 100) %>%
group_by(origin, year, month, day, hour) %>%
summarise(dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
ungroup() %>%
arrange(desc(dep_delay)) %>%
slice(1:48)
#> `summarise()` regrouping output by 'origin', 'year', 'month', 'day' (override with `.groups` argument)
```
Then I can use `semi_join()` to get the weather for these hours.
```
weather_most_delayed <- semi_join(weather, worst_hours,
by = c("origin", "year",
"month", "day", "hour"))
```
For weather, I’ll focus on precipitation, wind speed, and temperature.
I will display these in both a table and a plot.
Many of these observations have a higher than average wind speed (10 mph) or some precipitation.
However, I would have expected the weather for the hours with the worst delays to be much worse.
```
select(weather_most_delayed, temp, wind_speed, precip) %>%
print(n = 48)
#> # A tibble: 48 x 3
#> temp wind_speed precip
#> <dbl> <dbl> <dbl>
#> 1 27.0 13.8 0
#> 2 28.0 19.6 0
#> 3 28.9 28.8 0
#> 4 33.8 9.21 0.06
#> 5 34.0 8.06 0.05
#> 6 80.1 8.06 0
#> 7 86 13.8 0
#> 8 73.4 6.90 0.08
#> 9 84.0 5.75 0
#> 10 78.8 18.4 0.23
#> 11 53.6 0 0
#> 12 60.8 31.1 0.11
#> 13 55.4 17.3 0.14
#> 14 53.1 9.21 0.01
#> 15 55.9 11.5 0.1
#> 16 55.4 8.06 0.15
#> 17 57.0 29.9 0
#> 18 33.8 20.7 0.02
#> 19 34.0 19.6 0.01
#> 20 36.0 21.9 0.01
#> 21 37.9 16.1 0
#> 22 32 13.8 0.12
#> 23 60.1 33.4 0.14
#> 24 60.8 11.5 0.02
#> 25 62.1 17.3 0
#> 26 66.9 10.4 0
#> 27 66.9 13.8 0
#> 28 79.0 10.4 0
#> 29 77 16.1 0.07
#> 30 75.9 13.8 0
#> 31 82.4 8.06 0
#> 32 86 9.21 0
#> 33 80.1 9.21 0
#> 34 80.6 11.5 0
#> 35 78.1 6.90 0
#> 36 75.2 10.4 0.01
#> 37 73.9 5.75 0.03
#> 38 73.9 8.06 0
#> 39 75.0 4.60 0
#> 40 75.0 4.60 0.01
#> 41 80.1 0 0.01
#> 42 80.1 0 0
#> 43 77 10.4 0
#> 44 82.0 10.4 0
#> 45 72.0 13.8 0.3
#> 46 72.0 4.60 0.03
#> 47 51.1 4.60 0
#> 48 54.0 6.90 0
```
```
ggplot(weather_most_delayed, aes(x = precip, y = wind_speed, color = temp)) +
geom_point()
```
It’s hard to say much more than that without using the tools from the [Exploratory Data Analysis](https://r4ds.had.co.nz/exploratory-data-analysis.html#covariation) section
to look for covariation between weather and flight delays using all flights.
Implicitly in my informal analysis of trends in weather using only the 48 hours with the worst delays, I was comparing the weather in these hours to some belief I had about what constitutes “normal” or “good” weather.
It would be better to actually use data to make that comparison.
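One way to make that comparison, using the `weather` and `worst_hours` objects already defined above and the tidyverse verbs used throughout this chapter, is to flag the worst-delay airport hours in `weather` and compare average conditions in those hours against all other hours. This is only a sketch; I have not inspected its output.

```
# flag airport hours that appear in worst_hours, then compare average
# temperature, wind speed, and precipitation for the two groups
weather %>%
  left_join(mutate(worst_hours, worst = TRUE),
    by = c("origin", "year", "month", "day", "hour")
  ) %>%
  mutate(worst = !is.na(worst)) %>%
  group_by(worst) %>%
  summarise(
    temp = mean(temp, na.rm = TRUE),
    wind_speed = mean(wind_speed, na.rm = TRUE),
    precip = mean(precip, na.rm = TRUE)
  )
```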
### Exercise 13\.5\.5
What does `anti_join(flights, airports, by = c("dest" = "faa"))` tell you?
What does `anti_join(airports, flights, by = c("faa" = "dest"))` tell you?
The expression `anti_join(flights, airports, by = c("dest" = "faa"))` returns the flights that went to an airport that is not in the FAA list of destinations.
Since the FAA list only contains domestic airports, these are likely foreign flights.
However, running that expression shows that there are only four airports in this list.
```
anti_join(flights, airports, by = c("dest" = "faa")) %>%
distinct(dest)
#> # A tibble: 4 x 1
#> dest
#> <chr>
#> 1 BQN
#> 2 SJU
#> 3 STT
#> 4 PSE
```
In this set of four airports, three are in Puerto Rico ([BQN](https://en.wikipedia.org/wiki/Rafael_Hern%C3%A1ndez_Airport), [SJU](https://en.wikipedia.org/wiki/Luis_Mu%C3%B1oz_Mar%C3%ADn_International_Airport), and [PSE](https://en.wikipedia.org/wiki/Mercedita_International_Airport)) and one is in the US Virgin Islands ([STT](https://en.wikipedia.org/wiki/Cyril_E._King_Airport)).
The reason for this discrepancy is that the `flights` and `airports` tables are derived from different sources.
The `flights` data comes from the US Department of Transportation [Bureau of Transportation Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236), while the airport metadata comes from [openflights.org](https://openflights.org/data.html).
The BTS includes Puerto Rico and the U.S. Virgin Islands as “domestic” (part of the US), while the [openflights.org](https://raw.githubusercontent.com/jpatokal/openflights/master/data/airports.dat) data gives different values of country for airports in the US states (`"United States"`), Puerto Rico (`"Puerto Rico"`), and the US Virgin Islands (`"Virgin Islands"`).
The expression `anti_join(airports, flights, by = c("faa" = "dest"))` returns the US airports that were not the destination of any flight in the data.
Since the data contains all flights from New York City airports, this is also the list of US airports that did not have a nonstop flight from New York City in 2013\.
```
anti_join(airports, flights, by = c("faa" = "dest"))
#> # A tibble: 1,357 x 8
#> faa name lat lon alt tz dst tzone
#> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
#> 1 04G Lansdowne Airport 41.1 -80.6 1044 -5 A America/New_Y…
#> 2 06A Moton Field Municipal Airp… 32.5 -85.7 264 -6 A America/Chica…
#> 3 06C Schaumburg Regional 42.0 -88.1 801 -6 A America/Chica…
#> 4 06N Randall Airport 41.4 -74.4 523 -5 A America/New_Y…
#> 5 09J Jekyll Island Airport 31.1 -81.4 11 -5 A America/New_Y…
#> 6 0A9 Elizabethton Municipal Air… 36.4 -82.2 1593 -5 A America/New_Y…
#> # … with 1,351 more rows
```
### Exercise 13\.5\.6
You might expect that there’s an implicit relationship between plane and airline, because each plane is flown by a single airline.
Confirm or reject this hypothesis using the tools you’ve learned above.
At each point in time, each plane is flown by a single airline.
However, a plane can be sold and fly for multiple airlines.
Logically, it is possible that a plane can fly for multiple airlines over the course of its lifetime.
But, it is not necessarily the case that a plane will fly for more than one airline in this data, especially since it comprises only a year of data.
So let’s check whether any planes in the data flew for multiple airlines.
First, find all distinct airline, plane combinations.
```
planes_carriers <-
flights %>%
filter(!is.na(tailnum)) %>%
distinct(tailnum, carrier)
```
The planes that have flown for more than one airline are those whose `tailnum` appears more than once in the `planes_carriers` data.
```
planes_carriers %>%
count(tailnum) %>%
filter(n > 1) %>%
nrow()
#> [1] 17
```
The names of airlines are easier to understand than the two\-letter carrier codes.
The `airlines` data frame contains the names of the airlines.
```
carrier_transfer_tbl <- planes_carriers %>%
# keep only planes which have flown for more than one airline
group_by(tailnum) %>%
filter(n() > 1) %>%
# join with airlines to get airline names
left_join(airlines, by = "carrier") %>%
arrange(tailnum, carrier)
carrier_transfer_tbl
#> # A tibble: 34 x 3
#> # Groups: tailnum [17]
#> carrier tailnum name
#> <chr> <chr> <chr>
#> 1 9E N146PQ Endeavor Air Inc.
#> 2 EV N146PQ ExpressJet Airlines Inc.
#> 3 9E N153PQ Endeavor Air Inc.
#> 4 EV N153PQ ExpressJet Airlines Inc.
#> 5 9E N176PQ Endeavor Air Inc.
#> 6 EV N176PQ ExpressJet Airlines Inc.
#> # … with 28 more rows
```
13\.6 Join problems
-------------------
No exercises
13\.7 Set operations
--------------------
No exercises
| Data Science |
jrnold.github.io | https://jrnold.github.io/r4ds-exercise-solutions/strings.html |
14 Strings
==========
14\.1 Introduction
------------------
```
library("tidyverse")
```
14\.2 String basics
-------------------
### Exercise 14\.2\.1
In code that doesn’t use stringr, you’ll often see `paste()` and `paste0()`.
What’s the difference between the two functions? What stringr function are they equivalent to?
How do the functions differ in their handling of `NA`?
The function `paste()` separates strings by spaces by default, while `paste0()` does not separate strings with spaces by default.
```
paste("foo", "bar")
#> [1] "foo bar"
paste0("foo", "bar")
#> [1] "foobar"
```
Since `str_c()` does not separate strings with spaces by default it is closer in behavior to `paste0()`.
```
str_c("foo", "bar")
#> [1] "foobar"
```
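That said, adding `sep = " "` makes `str_c()` reproduce the default spacing of `paste()`.

```
str_c("foo", "bar", sep = " ")
#> [1] "foo bar"
```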
However, `str_c()` and the paste functions handle `NA` differently.
The function `str_c()` propagates `NA`: if any argument is a missing value, it returns a missing value.
This is in line with how the numeric R functions, e.g. `sum()`, `mean()`, handle missing values.
However, the paste functions convert `NA` to the string `"NA"` and then treat it as any other character vector.
```
str_c("foo", NA)
#> [1] NA
paste("foo", NA)
#> [1] "foo NA"
paste0("foo", NA)
#> [1] "fooNA"
```
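If the `paste()`-style treatment of missing values is wanted, one option is to convert the `NA` to a literal string first with stringr’s `str_replace_na()`:

```
# str_replace_na() turns NA into the string "NA" before concatenation
str_c("foo", str_replace_na(NA))
#> [1] "fooNA"
```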
### Exercise 14\.2\.2
In your own words, describe the difference between the `sep` and `collapse` arguments to `str_c()`.
The `sep` argument is the string inserted between arguments to `str_c()`, while `collapse` is the string used to separate the elements when a character vector is collapsed into a single string (a character vector of length one).
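For example, `sep` acts element-wise across the arguments, while `collapse` flattens the result into a single string:

```
str_c("x", c("a", "b", "c"), sep = "-")
#> [1] "x-a" "x-b" "x-c"
str_c(c("a", "b", "c"), collapse = ", ")
#> [1] "a, b, c"
```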
### Exercise 14\.2\.3
Use `str_length()` and `str_sub()` to extract the middle character from a string. What will you do if the string has an even number of characters?
The following code extracts the middle character. If the string has an even number of characters, the choice is arbitrary.
We choose to select \\(\\lceil n / 2 \\rceil\\), because that case works even if the string is only of length one.
A more general method would allow the user to select either the floor or ceiling for the middle character of an even string.
```
x <- c("a", "abc", "abcd", "abcde", "abcdef")
L <- str_length(x)
m <- ceiling(L / 2)
str_sub(x, m, m)
#> [1] "a" "b" "b" "c" "c"
```
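A sketch of that more general method (the name `str_middle()` and its `side` argument are invented for illustration): for even-length strings, let the caller choose the left or right of the two central characters.

```
str_middle <- function(x, side = c("left", "right")) {
  side <- match.arg(side)
  L <- str_length(x)
  # for odd lengths both branches give the same (true) middle position
  m <- if (side == "left") floor((L + 1) / 2) else ceiling((L + 1) / 2)
  str_sub(x, m, m)
}
str_middle("abcd")
#> [1] "b"
str_middle("abcd", side = "right")
#> [1] "c"
```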
### Exercise 14\.2\.4
What does `str_wrap()` do? When might you want to use it?
The function `str_wrap()` wraps text so that it fits within a certain width.
This is useful for wrapping long strings of text to be typeset.
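For example, wrapping a long (made-up) sentence to roughly 30 characters per line and printing it with `cat()` so the inserted newlines are visible:

```
x <- "The quick brown fox jumps over the lazy dog and keeps on running through the field."
cat(str_wrap(x, width = 30))
```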
### Exercise 14\.2\.5
What does `str_trim()` do? What’s the opposite of `str_trim()`?
The function `str_trim()` trims the whitespace from a string.
```
str_trim(" abc ")
#> [1] "abc"
str_trim(" abc ", side = "left")
#> [1] "abc "
str_trim(" abc ", side = "right")
#> [1] " abc"
```
The opposite of `str_trim()` is `str_pad()` which adds characters to each side.
```
str_pad("abc", 5, side = "both")
#> [1] " abc "
str_pad("abc", 4, side = "right")
#> [1] "abc "
str_pad("abc", 4, side = "left")
#> [1] " abc"
```
### Exercise 14\.2\.6
Write a function that turns (e.g.) a vector `c("a", "b", "c")` into the string `"a, b, and c"`. Think carefully about what it should do if given a vector of length 0, 1, or 2\.
See the \[Functions] chapter for more details on writing R functions.
This function needs to handle four cases.
1. `n == 0`: an empty string, e.g. `""`.
2. `n == 1`: the original vector, e.g. `"a"`.
3. `n == 2`: return the two elements separated by “and”, e.g. `"a and b"`.
4. `n > 2`: return the first `n - 1` elements separated by commas, and the last element separated by a comma and “and”, e.g. `"a, b, and c"`.
```
str_commasep <- function(x, delim = ",") {
n <- length(x)
if (n == 0) {
""
} else if (n == 1) {
x
} else if (n == 2) {
# no comma before and when n == 2
str_c(x[[1]], "and", x[[2]], sep = " ")
} else {
# commas after all n - 1 elements
not_last <- str_c(x[seq_len(n - 1)], delim)
# prepend "and" to the last element
last <- str_c("and", x[[n]], sep = " ")
# combine parts with spaces
str_c(c(not_last, last), collapse = " ")
}
}
str_commasep("")
#> [1] ""
str_commasep("a")
#> [1] "a"
str_commasep(c("a", "b"))
#> [1] "a and b"
str_commasep(c("a", "b", "c"))
#> [1] "a, b, and c"
str_commasep(c("a", "b", "c", "d"))
#> [1] "a, b, c, and d"
```
14\.3 Matching patterns with regular expressions
------------------------------------------------
### 14\.3\.1 Basic matches
#### Exercise 14\.3\.1\.1
Explain why each of these strings don’t match a `\`: `"\"`, `"\\"`, `"\\\"`.
* `"\"`: This will escape the next character in the R string.
* `"\\"`: This will resolve to `\` in the regular expression, which will escape the next character in the regular expression.
* `"\\\"`: The first two backslashes will resolve to a literal backslash in the regular expression, the third will escape the next character. So in the regular expression, this will escape some escaped character. A pattern that does match a literal backslash is shown below.
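For completeness, a pattern that does match a literal backslash uses four backslashes in the R string, which the regular expression engine then sees as an escaped backslash:

```
# the string "a\\b" contains a single literal backslash
str_view("a\\b", "\\\\")
```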
#### Exercise 14\.3\.1\.2
How would you match the sequence `"'\` ?
```
str_view("\"'\\", "\"'\\\\", match = TRUE)
```
#### Exercise 14\.3\.1\.3
What patterns will the regular expression `\..\..\..` match? How would you represent it as a string?
It will match any pattern consisting of a dot followed by any character, repeated three times.
```
str_view(c(".a.b.c", ".a.b", "....."), c("\\..\\..\\.."), match = TRUE)
```
### 14\.3\.2 Anchors
#### Exercise 14\.3\.2\.1
How would you match the literal string `"$^$"`?
To check that the pattern works, I’ll include both the string `"$^$"`, and an example where that pattern occurs in the middle of the string which should not be matched.
```
str_view(c("$^$", "ab$^$sfas"), "^\\$\\^\\$$", match = TRUE)
```
#### Exercise 14\.3\.2\.2
Given the corpus of common words in `stringr::words`, create regular expressions that find all words that:
1. Start with “y”.
2. End with “x”
3. Are exactly three letters long. (Don’t cheat by using `str_length()`!)
4. Have seven letters or more.
Since this list is long, you might want to use the `match` argument to `str_view()` to show only the matching or non\-matching words.
The answer to each part follows.
1. The words that start with “y” are:
```
str_view(stringr::words, "^y", match = TRUE)
```
2. End with “x”
```
str_view(stringr::words, "x$", match = TRUE)
```
3. Are exactly three letters long are
```
str_view(stringr::words, "^...$", match = TRUE)
```
4. The words that have seven letters or more:
```
str_view(stringr::words, ".......", match = TRUE)
```
Since the pattern `.......` is not anchored with either `^` or `$`,
this will match any word with at least seven letters.
The pattern `^.......$` matches words with exactly seven characters.
### 14\.3\.3 Character classes and alternatives
#### Exercise 14\.3\.3\.1
Create regular expressions to find all words that:
1. Start with a vowel.
2. That only contain consonants. (Hint: thinking about matching “not”\-vowels.)
3. End with `ed`, but not with `eed`.
4. End with `ing` or `ise`.
The answer to each part follows.
1. Words starting with vowels
```
str_subset(stringr::words, "^[aeiou]")
#> [1] "a" "able" "about" "absolute" "accept"
#> [6] "account" "achieve" "across" "act" "active"
#> [11] "actual" "add" "address" "admit" "advertise"
#> [16] "affect" "afford" "after" "afternoon" "again"
#> [21] "against" "age" "agent" "ago" "agree"
#> [26] "air" "all" "allow" "almost" "along"
#> [31] "already" "alright" "also" "although" "always"
#> [36] "america" "amount" "and" "another" "answer"
#> [41] "any" "apart" "apparent" "appear" "apply"
#> [46] "appoint" "approach" "appropriate" "area" "argue"
#> [51] "arm" "around" "arrange" "art" "as"
#> [56] "ask" "associate" "assume" "at" "attend"
#> [61] "authority" "available" "aware" "away" "awful"
#> [66] "each" "early" "east" "easy" "eat"
#> [71] "economy" "educate" "effect" "egg" "eight"
#> [76] "either" "elect" "electric" "eleven" "else"
#> [81] "employ" "encourage" "end" "engine" "english"
#> [86] "enjoy" "enough" "enter" "environment" "equal"
#> [91] "especial" "europe" "even" "evening" "ever"
#> [96] "every" "evidence" "exact" "example" "except"
#> [101] "excuse" "exercise" "exist" "expect" "expense"
#> [106] "experience" "explain" "express" "extra" "eye"
#> [111] "idea" "identify" "if" "imagine" "important"
#> [116] "improve" "in" "include" "income" "increase"
#> [121] "indeed" "individual" "industry" "inform" "inside"
#> [126] "instead" "insure" "interest" "into" "introduce"
#> [131] "invest" "involve" "issue" "it" "item"
#> [136] "obvious" "occasion" "odd" "of" "off"
#> [141] "offer" "office" "often" "okay" "old"
#> [146] "on" "once" "one" "only" "open"
#> [151] "operate" "opportunity" "oppose" "or" "order"
#> [156] "organize" "original" "other" "otherwise" "ought"
#> [161] "out" "over" "own" "under" "understand"
#> [166] "union" "unit" "unite" "university" "unless"
#> [171] "until" "up" "upon" "use" "usual"
```
2. Words that contain only consonants: Use the `negate`
argument of `str_subset`.
```
str_subset(stringr::words, "[aeiou]", negate=TRUE)
#> [1] "by" "dry" "fly" "mrs" "try" "why"
```
Alternatively, using `str_view()` the consonant\-only
words are:
```
str_view(stringr::words, "[aeiou]", match=FALSE)
```
3. Words that end with “\-ed” but not with “\-eed”.
```
str_subset(stringr::words, "[^e]ed$")
#> [1] "bed" "hundred" "red"
```
The pattern above will not match the word `"ed"`. If we wanted to include that, we could include it as a special case.
```
str_subset(c("ed", stringr::words), "(^|[^e])ed$")
#> [1] "ed" "bed" "hundred" "red"
```
4. Words ending in `ing` or `ise`:
```
str_subset(stringr::words, "i(ng|se)$")
#> [1] "advertise" "bring" "during" "evening" "exercise" "king"
#> [7] "meaning" "morning" "otherwise" "practise" "raise" "realise"
#> [13] "ring" "rise" "sing" "surprise" "thing"
```
#### Exercise 14\.3\.3\.2
Empirically verify the rule “i before e except after c”.
```
length(str_subset(stringr::words, "(cei|[^c]ie)"))
#> [1] 14
```
```
length(str_subset(stringr::words, "(cie|[^c]ei)"))
#> [1] 3
```
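To see the actual words behind those counts, drop the `length()` wrapper:

```
# words consistent with the rule
str_subset(stringr::words, "(cei|[^c]ie)")
# words that break the rule
str_subset(stringr::words, "(cie|[^c]ei)")
```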
#### Exercise 14\.3\.3\.3
Is “q” always followed by a “u”?
In the `stringr::words` dataset, yes.
```
str_view(stringr::words, "q[^u]", match = TRUE)
```
In the English language, [no](https://en.wiktionary.org/wiki/Appendix:English_words_containing_Q_not_followed_by_U).
However, the examples are few, and mostly loanwords, such as “burqa” and “cinq”.
Also, “qwerty”.
That I had to add all of those examples to the list of words that spellchecking should ignore is indicative of their rarity.
#### Exercise 14\.3\.3\.4
Write a regular expression that matches a word if it’s probably written in British English, not American English.
In the general case, this is hard, and could require a dictionary.
But, there are a few heuristics to consider that would account for some common cases: British English tends to use the following:
* “ou” instead of “o”
* use of “ae” and “oe” instead of “a” and “o”
* ends in `ise` instead of `ize`
* ends in `yse`
The regex `ou|ise$|ae|oe|yse$` would match these.
There are other [spelling differences between American and British English](https://en.wikipedia.org/wiki/American_and_British_English_spelling_differences) but they are not patterns amenable to regular expressions.
It would require a dictionary with differences in spellings for different words.
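As a rough check of that heuristic, we can see which of the common words it flags:

```
str_subset(stringr::words, "ou|ise$|ae|oe|yse$")
```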
#### Exercise 14\.3\.3\.5
Create a regular expression that will match telephone numbers as commonly written in your country.
> This answer can be improved and expanded.
The answer to this will vary by country.
For the United States, phone numbers have a format like `123-456-7890` or `(123)456-7890`.
These regular expressions will parse the first form
```
x <- c("123-456-7890", "(123)456-7890", "(123) 456-7890", "1235-2351")
str_view(x, "\\d\\d\\d-\\d\\d\\d-\\d\\d\\d\\d")
```
```
str_view(x, "[0-9][0-9][0-9]-[0-9][0-9][0-9]-[0-9][0-9][0-9][0-9]")
```
The regular expressions will parse the second form:
```
str_view(x, "\\(\\d\\d\\d\\)\\s*\\d\\d\\d-\\d\\d\\d\\d")
```
```
str_view(x, "\\([0-9][0-9][0-9]\\)[ ]*[0-9][0-9][0-9]-[0-9][0-9][0-9][0-9]")
```
This regular expression can be simplified with the `{m,n}` regular expression modifier introduced in the next section,
```
str_view(x, "\\d{3}-\\d{3}-\\d{4}")
```
```
str_view(x, "\\(\\d{3}\\)\\s*\\d{3}-\\d{4}")
```
Note that this pattern doesn’t account for phone numbers that are invalid
due to an invalid area code.
Nor does this pattern account for special numbers like 911\.
It also doesn’t parse a leading country code or extensions.
See the Wikipedia page for the [North American Numbering
Plan](https://en.wikipedia.org/wiki/North_American_Numbering_Plan) for more information on the complexities of US phone numbers, and [this Stack Overflow
question](https://stackoverflow.com/questions/123559/a-comprehensive-regex-for-phone-number-validation) for a discussion of using a regex for phone number validation.
The R package [dialr](https://cran.r-project.org/web/packages/dialr/index.html) implements robust phone number parsing.
Generally, for patterns like phone numbers or URLs it is better to use a dedicated package.
It is easy to match the pattern for the most common cases and useful for learning regular expressions, but in real applications there are often edge cases that are handled by dedicated packages.
### 14\.3\.4 Repetition
#### Exercise 14\.3\.4\.1
Describe the equivalents of `?`, `+`, `*` in `{m,n}` form.
| Pattern | `{m,n}` | Meaning |
| --- | --- | --- |
| `?` | `{0,1}` | Match at most 1 |
| `+` | `{1,}` | Match 1 or more |
| `*` | `{0,}` | Match 0 or more |
For example, let’s repeat the examples in the chapter, replacing `?` with `{0,1}`,
`+` with `{1,}`, and `*` with `{0,}`.
```
x <- "1888 is the longest year in Roman numerals: MDCCCLXXXVIII"
```
```
str_view(x, "CC?")
```
```
str_view(x, "CC{0,1}")
```
```
str_view(x, "CC+")
```
```
str_view(x, "CC{1,}")
```
```
str_view_all(x, "C[LX]+")
```
```
str_view_all(x, "C[LX]{1,}")
```
The chapter does not contain an example of `*`.
This pattern looks for a “C” optionally followed by
any number of “L” or “X” characters.
```
str_view_all(x, "C[LX]*")
```
```
str_view_all(x, "C[LX]{0,}")
```
#### Exercise 14\.3\.4\.2
Describe in words what these regular expressions match: (read carefully to see if I’m using a regular expression or a string that defines a regular expression.)
1. `^.*$`
2. `"\\{.+\\}"`
3. `\d{4}-\d{2}-\d{2}`
4. `"\\\\{4}"`
The answer to each part follows, with a quick `str_detect()` check after the list.
1. `^.*$` will match any string. For example: `^.*$`: `c("dog", "$1.23", "lorem ipsum")`.
2. `"\\{.+\\}"` will match any string with curly braces surrounding at least one character.
For example: `"\\{.+\\}"`: `c("{a}", "{abc}")`.
3. `\d{4}-\d{2}-\d{2}` will match four digits followed by a hyphen, followed by
two digits followed by a hyphen, followed by another two digits.
This is a regular expression that can match dates formatted like “YYYY\-MM\-DD” (“%Y\-%m\-%d”).
For example: `\d{4}-\d{2}-\d{2}`: `2018-01-11`
4. `"\\\\{4}"` is `\\{4}`, which will match four backslashes.
For example: `"\\\\{4}"`: `"\\\\\\\\"`.
#### Exercise 14\.3\.4\.3
Create regular expressions to find all words that:
1. Start with three consonants.
2. Have three or more vowels in a row.
3. Have two or more vowel\-consonant pairs in a row.
The answer to each part follows.
1. This regex finds all words starting with three consonants.
```
str_view(words, "^[^aeiou]{3}", match = TRUE)
```
2. This regex finds three or more vowels in a row:
```
str_view(words, "[aeiou]{3,}", match = TRUE)
```
3. This regex finds two or more vowel\-consonant pairs in a row.
```
str_view(words, "([aeiou][^aeiou]){2,}", match = TRUE)
```
#### Exercise 14\.3\.4\.4
Solve the beginner regexp crosswords at <https://regexcrossword.com/challenges/>
Exercise left to reader. That site validates its solutions, so they aren’t repeated here.
### 14\.3\.5 Grouping and backreferences
#### Exercise 14\.3\.5\.1
Describe, in words, what these expressions will match:
1. `(.)\1\1` :
2. `"(.)(.)\\2\\1"`:
3. `(..)\1`:
4. `"(.).\\1.\\1"`:
5. `"(.)(.)(.).*\\3\\2\\1"`
The answer to each part follows.
1. `(.)\1\1`: The same character appearing three times in a row. E.g. `"aaa"`
2. `"(.)(.)\\2\\1"`: A pair of characters followed by the same pair of characters in reversed order. E.g. `"abba"`.
3. `(..)\1`: Any two characters repeated. E.g. `"a1a1"`.
4. `"(.).\\1.\\1"`: A character followed by any character, the original character, any other character, the original character again. E.g. `"abaca"`, `"b8b.b"`.
5. `"(.)(.)(.).*\\3\\2\\1"` Three characters followed by zero or more characters of any kind followed by the same three characters but in reverse order. E.g. `"abcsgasgddsadgsdgcba"` or `"abccba"` or `"abc1cba"`.
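These descriptions can be verified with `str_detect()` (a quick check, not part of the original answer); every call below should return `TRUE`.
```
# Each pattern is paired with one of the example strings given above.
str_detect("aaa", "(.)\\1\\1")
str_detect("abba", "(.)(.)\\2\\1")
str_detect("a1a1", "(..)\\1")
str_detect("abaca", "(.).\\1.\\1")
str_detect("abccba", "(.)(.)(.).*\\3\\2\\1")
```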
#### Exercise 14\.3\.5\.2
Construct regular expressions to match words that:
1. Start and end with the same character.
2. Contain a repeated pair of letters (e.g. “church” contains “ch” repeated twice.)
3. Contain one letter repeated in at least three places (e.g. “eleven” contains three “e”s.)
The answer to each part follows.
1. This regular expression matches words that start and end with the same character.
```
str_subset(words, "^(.)((.*\\1$)|\\1?$)")
#> [1] "a" "america" "area" "dad" "dead"
#> [6] "depend" "educate" "else" "encourage" "engine"
#> [11] "europe" "evidence" "example" "excuse" "exercise"
#> [16] "expense" "experience" "eye" "health" "high"
#> [21] "knock" "level" "local" "nation" "non"
#> [26] "rather" "refer" "remember" "serious" "stairs"
#> [31] "test" "tonight" "transport" "treat" "trust"
#> [36] "window" "yesterday"
```
2. This regular expression will match any pair of repeated letters, where *letters* is defined to be the ASCII letters A\-Z.
First, check that it works with the example in the problem.
```
str_subset("church", "([A-Za-z][A-Za-z]).*\\1")
#> [1] "church"
```
Now, find all matching words in `words`.
```
str_subset(words, "([A-Za-z][A-Za-z]).*\\1")
#> [1] "appropriate" "church" "condition" "decide" "environment"
#> [6] "london" "paragraph" "particular" "photograph" "prepare"
#> [11] "pressure" "remember" "represent" "require" "sense"
#> [16] "therefore" "understand" "whether"
```
The `\\1` pattern is called a backreference. It matches whatever the first group
matched. This allows the pattern to match a repeating pair of letters without having
to specify exactly which pair of letters is being repeated.
Note that these patterns are case sensitive. Use the
case insensitive flag if you want to check for repeated pairs
of letters with different capitalization.
3. This regex matches words that contain one letter repeated in at least three places.
First, check that it works with the example given in the question.
```
str_subset("eleven", "([a-z]).*\\1.*\\1")
#> [1] "eleven"
```
Now, retrieve the matching words in `words`.
```
str_subset(words, "([a-z]).*\\1.*\\1")
#> [1] "appropriate" "available" "believe" "between" "business"
#> [6] "degree" "difference" "discuss" "eleven" "environment"
#> [11] "evidence" "exercise" "expense" "experience" "individual"
#> [16] "paragraph" "receive" "remember" "represent" "telephone"
#> [21] "therefore" "tomorrow"
```
14\.4 Tools
-----------
### 14\.4\.1 Detect matches
#### Exercise 14\.4\.1\.1
For each of the following challenges, try solving it by using both a single regular expression, and a combination of multiple `str_detect()` calls.
1. Find all words that start or end with x.
2. Find all words that start with a vowel and end with a consonant.
3. Are there any words that contain at least one of each different vowel?
The answer to each part follows.
1. Words that start or end with `x`?
```
# one regex
words[str_detect(words, "^x|x$")]
#> [1] "box" "sex" "six" "tax"
# split regex into parts
start_with_x <- str_detect(words, "^x")
end_with_x <- str_detect(words, "x$")
words[start_with_x | end_with_x]
#> [1] "box" "sex" "six" "tax"
```
2. Words starting with a vowel and ending with a consonant.
```
str_subset(words, "^[aeiou].*[^aeiou]$") %>% head()
#> [1] "about" "accept" "account" "across" "act" "actual"
start_with_vowel <- str_detect(words, "^[aeiou]")
end_with_consonant <- str_detect(words, "[^aeiou]$")
words[start_with_vowel & end_with_consonant] %>% head()
#> [1] "about" "accept" "account" "across" "act" "actual"
```
3. There is not a simple regular expression to match words that
contain at least one of each vowel. The regular expression
would need to consider all possible orders in which the vowels
could occur.
```
pattern <-
cross(rerun(5, c("a", "e", "i", "o", "u")),
.filter = function(...) {
x <- as.character(unlist(list(...)))
length(x) != length(unique(x))
}
) %>%
map_chr(~str_c(unlist(.x), collapse = ".*")) %>%
str_c(collapse = "|")
```
To check that this pattern works, test it on a string that
should match
```
str_subset("aseiouds", pattern)
#> [1] "aseiouds"
```
Using multiple `str_detect()` calls, one pattern for each vowel,
produces a much simpler and more readable answer.
```
str_subset(words, pattern)
#> character(0)
words[str_detect(words, "a") &
str_detect(words, "e") &
str_detect(words, "i") &
str_detect(words, "o") &
str_detect(words, "u")]
#> character(0)
```
There appear to be none.
#### Exercise 14\.4\.1\.2
What word has the highest number of vowels? What word has the highest proportion of vowels? (Hint: what is the denominator?)
The word with the highest number of vowels is
```
vowels <- str_count(words, "[aeiou]")
words[which(vowels == max(vowels))]
#> [1] "appropriate" "associate" "available" "colleague" "encourage"
#> [6] "experience" "individual" "television"
```
The word with the highest proportion of vowels is
```
prop_vowels <- str_count(words, "[aeiou]") / str_length(words)
words[which(prop_vowels == max(prop_vowels))]
#> [1] "a"
```
### 14\.4\.2 Extract matches
#### Exercise 14\.4\.2\.1
In the previous example, you might have noticed that the regular expression matched “flickered”, which is not a color.
Modify the regex to fix the problem.
This was the original color match pattern:
```
colours <- c("red", "orange", "yellow", "green", "blue", "purple")
colour_match <- str_c(colours, collapse = "|")
```
It matches “flickered” because it matches “red”.
The problem is that the previous pattern will match any word with the name of a color inside it. We want to only match colors in which the entire word is the name of the color.
We can do this by adding a `\b` (to indicate a word boundary) before and after the pattern:
```
colour_match2 <- str_c("\\b(", str_c(colours, collapse = "|"), ")\\b")
colour_match2
#> [1] "\\b(red|orange|yellow|green|blue|purple)\\b"
```
```
more2 <- sentences[str_count(sentences, colour_match) > 1]
```
```
str_view_all(more2, colour_match2, match = TRUE)
```
#### Exercise 14\.4\.2\.2
From the Harvard sentences data, extract:
1. The first word from each sentence.
2. All words ending in `ing`.
3. All plurals.
The answer to each part follows.
1. Finding the first word in each sentence requires defining what constitutes a word. For the purposes of this question,
I’ll consider a word to be any contiguous set of letters.
Since `str_extract()` will extract the first match, if it is provided a
regular expression for words, it will return the first word.
```
str_extract(sentences, "[A-Za-z]+") %>% head()
#> [1] "The" "Glue" "It" "These" "Rice" "The"
```
However, the third sentence begins with “It’s”. To catch this, I’ll
change the regular expression to require the string to begin with a letter,
but allow for a subsequent apostrophe.
```
str_extract(sentences, "[A-Za-z][A-Za-z']*") %>% head()
#> [1] "The" "Glue" "It's" "These" "Rice" "The"
```
2. This pattern finds all words ending in `ing`.
```
pattern <- "\\b[A-Za-z]+ing\\b"
sentences_with_ing <- str_detect(sentences, pattern)
unique(unlist(str_extract_all(sentences[sentences_with_ing], pattern))) %>%
head()
#> [1] "spring" "evening" "morning" "winding" "living" "king"
```
3. Finding all plurals cannot be correctly accomplished with regular expressions alone.
Finding plural words would at least require morphological information about words in the language.
See [WordNet](https://cran.r-project.org/web/packages/wordnet/index.html) for a resource that would do that.
However, identifying words that end in an “s” and have more than three characters, in order to exclude “as”, “is”, “gas”, etc., is
a reasonable heuristic.
```
unique(unlist(str_extract_all(sentences, "\\b[A-Za-z]{3,}s\\b"))) %>%
head()
#> [1] "planks" "days" "bowls" "lemons" "makes" "hogs"
```
### 14\.4\.3 Grouped matches
#### Exercise 14\.4\.3\.1
Find all words that come after a “number” like “one”, “two”, “three” etc.
Pull out both the number and the word.
```
numword <- "\\b(one|two|three|four|five|six|seven|eight|nine|ten) +(\\w+)"
sentences[str_detect(sentences, numword)] %>%
str_extract(numword)
#> [1] "seven books" "two met" "two factors" "three lists"
#> [5] "seven is" "two when" "ten inches" "one war"
#> [9] "one button" "six minutes" "ten years" "two shares"
#> [13] "two distinct" "five cents" "two pins" "five robins"
#> [17] "four kinds" "three story" "three inches" "six comes"
#> [21] "three batches" "two leaves"
```
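To pull the number and the following word out as separate pieces, `str_match()` can be used instead of `str_extract()`; this is a small extension of the answer above, not part of the original.
```
# str_match() returns a matrix: the full match plus one column per capture group.
sentences[str_detect(sentences, numword)] %>%
  str_match(numword) %>%
  head(3)
```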
#### Exercise 14\.4\.3\.2
Find all contractions.
Separate out the pieces before and after the apostrophe.
This is done in two steps. First, identify the contractions. Second, split each contraction on the apostrophe.
```
contraction <- "([A-Za-z]+)'([A-Za-z]+)"
sentences[str_detect(sentences, contraction)] %>%
str_extract(contraction) %>%
str_split("'")
#> [[1]]
#> [1] "It" "s"
#>
#> [[2]]
#> [1] "man" "s"
#>
#> [[3]]
#> [1] "don" "t"
#>
#> [[4]]
#> [1] "store" "s"
#>
#> [[5]]
#> [1] "workmen" "s"
#>
#> [[6]]
#> [1] "Let" "s"
#>
#> [[7]]
#> [1] "sun" "s"
#>
#> [[8]]
#> [1] "child" "s"
#>
#> [[9]]
#> [1] "king" "s"
#>
#> [[10]]
#> [1] "It" "s"
#>
#> [[11]]
#> [1] "don" "t"
#>
#> [[12]]
#> [1] "queen" "s"
#>
#> [[13]]
#> [1] "don" "t"
#>
#> [[14]]
#> [1] "pirate" "s"
#>
#> [[15]]
#> [1] "neighbor" "s"
```
### 14\.4\.4 Replacing matches
#### Exercise 14\.4\.4\.1
Replace all forward slashes in a string with backslashes.
```
str_replace_all("past/present/future", "/", "\\\\")
#> [1] "past\\present\\future"
```
#### Exercise 14\.4\.4\.2
Implement a simple version of `str_to_lower()` using `replace_all()`.
```
replacements <- c("A" = "a", "B" = "b", "C" = "c", "D" = "d", "E" = "e",
"F" = "f", "G" = "g", "H" = "h", "I" = "i", "J" = "j",
"K" = "k", "L" = "l", "M" = "m", "N" = "n", "O" = "o",
"P" = "p", "Q" = "q", "R" = "r", "S" = "s", "T" = "t",
"U" = "u", "V" = "v", "W" = "w", "X" = "x", "Y" = "y",
"Z" = "z")
lower_words <- str_replace_all(words, pattern = replacements)
head(lower_words)
#> [1] "a" "able" "about" "absolute" "accept" "account"
```
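A more compact way to build the same replacement vector (a sketch, not part of the original answer) uses the built-in `LETTERS` and `letters` constants with `purrr::set_names()`:
```
# set_names(letters, LETTERS) produces c("A" = "a", "B" = "b", ..., "Z" = "z").
replacements2 <- set_names(letters, LETTERS)
lower_words2 <- str_replace_all(words, pattern = replacements2)
identical(lower_words, lower_words2)
#> [1] TRUE
```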
#### Exercise 14\.4\.4\.3
Switch the first and last letters in `words`. Which of those strings are still words?
First, make a vector of all the words with first and last letters swapped,
```
swapped <- str_replace_all(words, "^([A-Za-z])(.*)([A-Za-z])$", "\\3\\2\\1")
```
Next, find which of the swapped words are also in the original list using the function `intersect()`,
```
intersect(swapped, words)
#> [1] "a" "america" "area" "dad" "dead"
#> [6] "lead" "read" "depend" "god" "educate"
#> [11] "else" "encourage" "engine" "europe" "evidence"
#> [16] "example" "excuse" "exercise" "expense" "experience"
#> [21] "eye" "dog" "health" "high" "knock"
#> [26] "deal" "level" "local" "nation" "on"
#> [31] "non" "no" "rather" "dear" "refer"
#> [36] "remember" "serious" "stairs" "test" "tonight"
#> [41] "transport" "treat" "trust" "window" "yesterday"
```
Alternatively, the regex can be written using the POSIX character class for letter (`[[:alpha:]]`):
```
swapped2 <- str_replace_all(words, "^([[:alpha:]])(.*)([[:alpha:]])$", "\\3\\2\\1")
intersect(swapped2, words)
#> [1] "a" "america" "area" "dad" "dead"
#> [6] "lead" "read" "depend" "god" "educate"
#> [11] "else" "encourage" "engine" "europe" "evidence"
#> [16] "example" "excuse" "exercise" "expense" "experience"
#> [21] "eye" "dog" "health" "high" "knock"
#> [26] "deal" "level" "local" "nation" "on"
#> [31] "non" "no" "rather" "dear" "refer"
#> [36] "remember" "serious" "stairs" "test" "tonight"
#> [41] "transport" "treat" "trust" "window" "yesterday"
```
### 14\.4\.5 Splitting
#### Exercise 14\.4\.5\.1
Split up a string like `"apples, pears, and bananas"` into individual components.
```
x <- c("apples, pears, and bananas")
str_split(x, ", +(and +)?")[[1]]
#> [1] "apples" "pears" "bananas"
```
#### Exercise 14\.4\.5\.2
Why is it better to split up by `boundary("word")` than `" "`?
Splitting by `boundary("word")` is a more sophisticated method to split a string into words.
It recognizes non\-space punctuation that splits words, and also removes punctuation while retaining internal non\-letter characters that are part of the word, e.g., “can’t”.
See the [ICU website](http://userguide.icu-project.org/boundaryanalysis) for a description of the set of rules that are used to determine word boundaries.
Consider this sentence from the official [Unicode Report on word boundaries](http://www.unicode.org/reports/tr29/#Word_Boundaries),
```
sentence <- "The quick (“brown”) fox can’t jump 32.3 feet, right?"
```
Splitting the string on spaces will group the punctuation with the words,
```
str_split(sentence, " ")
#> [[1]]
#> [1] "The" "quick" "(“brown”)" "fox" "can’t" "jump"
#> [7] "32.3" "feet," "right?"
```
However, splitting the string using `boundary("word")` correctly removes punctuation, while not
separating “32\.3” and “can’t”,
```
str_split(sentence, boundary("word"))
#> [[1]]
#> [1] "The" "quick" "brown" "fox" "can’t" "jump" "32.3" "feet" "right"
```
#### Exercise 14\.4\.5\.3
What does splitting with an empty string `("")` do? Experiment, and then read the documentation.
```
str_split("ab. cd|agt", "")[[1]]
#> [1] "a" "b" "." " " "c" "d" "|" "a" "g" "t"
```
It splits the string into individual characters.
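The documentation for `str_split()` notes that an empty pattern is equivalent to `boundary("character")`; a quick check (not part of the original answer):
```
# Splitting on character boundaries gives the same result as pattern = "".
str_split("ab. cd|agt", boundary("character"))[[1]]
#> [1] "a" "b" "." " " "c" "d" "|" "a" "g" "t"
```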
### 14\.4\.6 Find matches
No exercises
14\.5 Other types of pattern
----------------------------
### Exercise 14\.5\.1
How would you find all strings containing `\` with `regex()` vs. with `fixed()`?
```
str_subset(c("a\\b", "ab"), "\\\\")
#> [1] "a\\b"
str_subset(c("a\\b", "ab"), fixed("\\"))
#> [1] "a\\b"
```
### Exercise 14\.5\.2
What are the five most common words in `sentences`?
Using `str_extract_all()` with the argument `boundary("word")` will extract all words.
The rest of the code uses dplyr functions to count words and find the most
common words.
```
tibble(word = unlist(str_extract_all(sentences, boundary("word")))) %>%
mutate(word = str_to_lower(word)) %>%
count(word, sort = TRUE) %>%
head(5)
#> # A tibble: 5 x 2
#> word n
#> <chr> <int>
#> 1 the 751
#> 2 a 202
#> 3 of 132
#> 4 to 123
#> 5 and 118
```
14\.6 Other uses of regular expressions
---------------------------------------
No exercises
14\.7 stringi
-------------
```
library("stringi")
```
### Exercise 14\.7\.1
Find the stringi functions that:
1. Count the number of words.
2. Find duplicated strings.
3. Generate random text.
The answer to each part follows.
1. To count the number of words, use `stringi::stri_count_words()`.
This code counts the words in the first six sentences of `sentences`.
```
stri_count_words(head(sentences))
#> [1] 8 8 9 9 7 7
```
2. The `stringi::stri_duplicated()` function finds duplicate strings.
```
stri_duplicated(c("the", "brown", "cow", "jumped", "over",
"the", "lazy", "fox"))
#> [1] FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE
```
3. The *stringi* package contains several functions beginning with `stri_rand_*` that generate random text.
The function `stringi::stri_rand_strings()` generates random strings.
The following code generates four random strings each of length five.
```
stri_rand_strings(4, 5)
#> [1] "5pb90" "SUHjl" "sA2JO" "CP3Oy"
```
The function `stringi::stri_rand_shuffle()` randomly shuffles the characters in the text.
```
stri_rand_shuffle("The brown fox jumped over the lazy cow.")
#> [1] "ot f.lween p jzwoom xyucobhv daheerrT"
```
The function `stringi::stri_rand_lipsum()` generates [lorem ipsum](https://en.wikipedia.org/wiki/Lorem_ipsum) text.
Lorem ipsum text is nonsense text often used as placeholder text in publishing.
The following code generates one paragraph of placeholder text.
```
stri_rand_lipsum(1)
#> [1] "Lorem ipsum dolor sit amet, hac non metus cras nam vitae tempus proin, sed. Diam gravida viverra eros mauris, magna lacinia dui nullam. Arcu proin aenean fringilla sed sollicitudin hac neque, egestas condimentum massa, elementum vivamus. Odio eget litora molestie eget eros pulvinar ac. Vel nec nullam vivamus, sociosqu lectus varius eleifend. Vitae in. Conubia ut hac maximus amet, conubia sed. Volutpat vitae class cursus, elit mauris porta. Mauris lacus donec odio eget quam inceptos, ridiculus cursus, ad massa. Rhoncus hac aenean at id consectetur molestie vitae! Sed, primis mi dictum lacinia eros. Ligula, feugiat consequat ut vivamus ut morbi et. Dolor, eget eleifend nec magnis aliquam egestas. Sollicitudin venenatis et aptent rhoncus nisl platea ligula cum."
```
### Exercise 14\.7\.2
How do you control the language that `stri_sort()` uses for sorting?
You can set a locale to use when sorting with either `stri_sort(..., opts_collator=stri_opts_collator(locale = ...))` or `stri_sort(..., locale = ...)`.
In this example from the `stri_sort()` documentation, the sorted order of the character vector depends on the locale.
```
string1 <- c("hladny", "chladny")
stri_sort(string1, locale = "pl_PL")
#> [1] "chladny" "hladny"
stri_sort(string1, locale = "sk_SK")
#> [1] "hladny" "chladny"
```
The output of `stri_opts_collator()` can also be passed to the `opts_collator` argument of `stri_sort()`.
```
stri_sort(string1, opts_collator = stri_opts_collator(locale = "pl_PL"))
#> [1] "chladny" "hladny"
stri_sort(string1, opts_collator = stri_opts_collator(locale = "sk_SK"))
#> [1] "hladny" "chladny"
```
The `stri_opts_collator()` function provides finer\-grained control over how strings are sorted.
In addition to setting the locale, it has options to customize how case, Unicode, accents, and numeric values are handled when comparing strings.
```
string2 <- c("number100", "number2")
stri_sort(string2)
#> [1] "number100" "number2"
stri_sort(string2, opts_collator = stri_opts_collator(numeric = TRUE))
#> [1] "number2" "number100"
```
15 Factors
==========
15\.1 Introduction
------------------
Functions and packages:
```
library("tidyverse")
```
The forcats package does not need to be explicitly loaded, since recent versions of the tidyverse package now attach it.
15\.2 Creating factors
----------------------
No exercises
15\.3 General Social Survey
---------------------------
### Exercise 15\.3\.1
Explore the distribution of `rincome` (reported income).
What makes the default bar chart hard to understand?
How could you improve the plot?
My first attempt is to use `geom_bar()` with the default settings.
```
rincome_plot <-
gss_cat %>%
ggplot(aes(x = rincome)) +
geom_bar()
rincome_plot
```
The problem with the default bar chart settings is that the labels overlap and are impossible to read.
I’ll try changing the angle of the x\-axis labels to vertical so that they will not overlap.
```
rincome_plot +
theme(axis.text.x = element_text(angle = 90, hjust = 1))
```
This is better because the labels no longer overlap, but they are still difficult to read because they are vertical.
I could try angling the labels so that they are easier to read while still not overlapping.
```
rincome_plot +
theme(axis.text.x = element_text(angle = 45, hjust = 1))
```
But the solution I prefer for bar charts with long labels is to flip the axes, so that the bars are horizontal.
Then the category labels are also horizontal, and easy to read.
```
rincome_plot +
coord_flip()
```
Though more than asked for in this question, I could further improve this plot by
1. removing the “Not applicable” responses,
2. renaming “Lt $1000” to “Less than $1000”,
3. using color to distinguish non\-response categories (“Refused”, “Don’t know”, and “No answer”) from income levels (“Lt $1000”, …),
4. adding meaningful y\- and x\-axis titles, and
5. formatting the counts axis labels to use commas.
```
gss_cat %>%
filter(!rincome %in% c("Not applicable")) %>%
mutate(rincome = fct_recode(rincome,
"Less than $1000" = "Lt $1000"
)) %>%
mutate(rincome_na = rincome %in% c("Refused", "Don't know", "No answer")) %>%
ggplot(aes(x = rincome, fill = rincome_na)) +
geom_bar() +
coord_flip() +
scale_y_continuous("Number of Respondents", labels = scales::comma) +
scale_x_discrete("Respondent's Income") +
scale_fill_manual(values = c("FALSE" = "black", "TRUE" = "gray")) +
theme(legend.position = "None")
```
If I were only interested in non\-missing responses, then I could drop all respondents who answered “Not applicable”, “Refused”, “Don’t know”, or “No answer”.
```
gss_cat %>%
filter(!rincome %in% c("Not applicable", "Don't know", "No answer", "Refused")) %>%
mutate(rincome = fct_recode(rincome,
"Less than $1000" = "Lt $1000"
)) %>%
ggplot(aes(x = rincome)) +
geom_bar() +
coord_flip() +
scale_y_continuous("Number of Respondents", labels = scales::comma) +
scale_x_discrete("Respondent's Income")
```
A side\-effect of `coord_flip()` is that the label ordering on the x\-axis, from lowest (top) to highest (bottom), is counterintuitive.
The next section introduces a function `fct_reorder()` which can help with this.
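As a sketch of one possible fix (not part of the original answer), reversing the factor levels with `forcats::fct_rev()` makes income increase from the bottom of the flipped plot to the top:
```
gss_cat %>%
  ggplot(aes(x = fct_rev(rincome))) +
  geom_bar() +
  coord_flip()
```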
### Exercise 15\.3\.2
What is the most common `relig` in this survey?
What’s the most common `partyid`?
The most common `relig` is “Protestant”
```
gss_cat %>%
count(relig) %>%
arrange(desc(n)) %>%
head(1)
#> # A tibble: 1 x 2
#> relig n
#> <fct> <int>
#> 1 Protestant 10846
```
The most common `partyid` is “Independent”
```
gss_cat %>%
count(partyid) %>%
arrange(desc(n)) %>%
head(1)
#> # A tibble: 1 x 2
#> partyid n
#> <fct> <int>
#> 1 Independent 4119
```
### Exercise 15\.3\.3
Which `relig` does `denom` (denomination) apply to?
How can you find out with a table?
How can you find out with a visualization?
```
levels(gss_cat$denom)
#> [1] "No answer" "Don't know" "No denomination"
#> [4] "Other" "Episcopal" "Presbyterian-dk wh"
#> [7] "Presbyterian, merged" "Other presbyterian" "United pres ch in us"
#> [10] "Presbyterian c in us" "Lutheran-dk which" "Evangelical luth"
#> [13] "Other lutheran" "Wi evan luth synod" "Lutheran-mo synod"
#> [16] "Luth ch in america" "Am lutheran" "Methodist-dk which"
#> [19] "Other methodist" "United methodist" "Afr meth ep zion"
#> [22] "Afr meth episcopal" "Baptist-dk which" "Other baptists"
#> [25] "Southern baptist" "Nat bapt conv usa" "Nat bapt conv of am"
#> [28] "Am bapt ch in usa" "Am baptist asso" "Not applicable"
```
From the context it is clear that `denom` refers to “Protestant” (and unsurprising, given that Protestant is the largest category of `relig`).
Let’s filter out the non\-responses, no answers, others, not\-applicable, or
no denomination, to leave only answers to denominations.
After doing that, the only remaining responses are “Protestant”.
```
gss_cat %>%
filter(!denom %in% c(
"No answer", "Other", "Don't know", "Not applicable",
"No denomination"
)) %>%
count(relig)
#> # A tibble: 1 x 2
#> relig n
#> <fct> <int>
#> 1 Protestant 7025
```
This is also clear in a scatter plot of `relig` vs. `denom`, where the size of the points is
proportional to the number of answers (since otherwise there would be overplotting).
```
gss_cat %>%
count(relig, denom) %>%
ggplot(aes(x = relig, y = denom, size = n)) +
geom_point() +
theme(axis.text.x = element_text(angle = 90))
```
15\.4 Modifying factor order
----------------------------
### Exercise 15\.4\.1
There are some suspiciously high numbers in `tvhours`.
Is the `mean` a good summary?
```
summary(gss_cat[["tvhours"]])
#> Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
#> 0 1 2 3 4 24 10146
```
```
gss_cat %>%
filter(!is.na(tvhours)) %>%
ggplot(aes(x = tvhours)) +
geom_histogram(binwidth = 1)
```
Whether the mean is the best summary depends on what you are using it for, i.e., your objective.
But the median is probably what most people would prefer.
And the hours of TV don’t look that surprising to me.
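One way to back this up (a sketch, not in the original answer) is to compare the mean and median directly and check how rare the suspiciously high values are; `prop_20_plus`, the share of respondents reporting 20 or more hours, is a name invented here.
```
gss_cat %>%
  filter(!is.na(tvhours)) %>%
  summarise(mean = mean(tvhours),
            median = median(tvhours),
            prop_20_plus = mean(tvhours >= 20))
```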
### Exercise 15\.4\.2
For each factor in `gss_cat` identify whether the order of the levels is arbitrary or principled.
The following code uses functions introduced in Chapter 21 to print the names of only the factor columns.
```
keep(gss_cat, is.factor) %>% names()
#> [1] "marital" "race" "rincome" "partyid" "relig" "denom"
```
There are six categorical variables: `marital`, `race`, `rincome`, `partyid`, `relig`, and `denom`.
The ordering of marital is “somewhat principled”. There is some sort of logic
in that the levels are grouped “never married”, married at some point
(separated, divorced, widowed), and “married”; though it would seem that “Never
Married”, “Divorced”, “Widowed”, “Separated”, “Married” might be more natural.
I find that the question of ordering can be determined by the level of
aggregation in a categorical variable, and there can be more “partially
ordered” factors than one would expect.
```
levels(gss_cat[["marital"]])
#> [1] "No answer" "Never married" "Separated" "Divorced"
#> [5] "Widowed" "Married"
```
```
gss_cat %>%
ggplot(aes(x = marital)) +
geom_bar()
```
The ordering of race is principled in that the categories are ordered by count of observations in the data.
```
levels(gss_cat$race)
#> [1] "Other" "Black" "White" "Not applicable"
```
```
gss_cat %>%
ggplot(aes(race)) +
geom_bar() +
scale_x_discrete(drop = FALSE)
```
The levels of `rincome` are in decreasing order of income; however, the placement of “No answer”, “Don’t know”, and “Refused” before the income levels, and “Not applicable” after them, is arbitrary. It would be better to place all the missing\-income categories either before or after all the known values.
```
levels(gss_cat$rincome)
#> [1] "No answer" "Don't know" "Refused" "$25000 or more"
#> [5] "$20000 - 24999" "$15000 - 19999" "$10000 - 14999" "$8000 to 9999"
#> [9] "$7000 to 7999" "$6000 to 6999" "$5000 to 5999" "$4000 to 4999"
#> [13] "$3000 to 3999" "$1000 to 2999" "Lt $1000" "Not applicable"
```
The ordering of the levels of `relig` is arbitrary: there is no natural ordering, and the levels do not appear to be sorted by any statistic (such as frequency) within the dataset.
```
levels(gss_cat$relig)
#> [1] "No answer" "Don't know"
#> [3] "Inter-nondenominational" "Native american"
#> [5] "Christian" "Orthodox-christian"
#> [7] "Moslem/islam" "Other eastern"
#> [9] "Hinduism" "Buddhism"
#> [11] "Other" "None"
#> [13] "Jewish" "Catholic"
#> [15] "Protestant" "Not applicable"
```
```
gss_cat %>%
ggplot(aes(relig)) +
geom_bar() +
coord_flip()
```
The same goes for `denom`.
```
levels(gss_cat$denom)
#> [1] "No answer" "Don't know" "No denomination"
#> [4] "Other" "Episcopal" "Presbyterian-dk wh"
#> [7] "Presbyterian, merged" "Other presbyterian" "United pres ch in us"
#> [10] "Presbyterian c in us" "Lutheran-dk which" "Evangelical luth"
#> [13] "Other lutheran" "Wi evan luth synod" "Lutheran-mo synod"
#> [16] "Luth ch in america" "Am lutheran" "Methodist-dk which"
#> [19] "Other methodist" "United methodist" "Afr meth ep zion"
#> [22] "Afr meth episcopal" "Baptist-dk which" "Other baptists"
#> [25] "Southern baptist" "Nat bapt conv usa" "Nat bapt conv of am"
#> [28] "Am bapt ch in usa" "Am baptist asso" "Not applicable"
```
Ignoring “No answer”, “Don’t know”, and “Other party”, the levels of `partyid` are ordered from “Strong republican” to “Strong democrat”.
```
levels(gss_cat$partyid)
#> [1] "No answer" "Don't know" "Other party"
#> [4] "Strong republican" "Not str republican" "Ind,near rep"
#> [7] "Independent" "Ind,near dem" "Not str democrat"
#> [10] "Strong democrat"
```
### Exercise 15\.4\.3
Why did moving “Not applicable” to the front of the levels move it to the bottom of the plot?
Because that gives the level “Not applicable” an integer value of 1, and ggplot2 places factor levels along a categorical axis in order of their integer codes, starting at the origin, so level 1 ends up at the bottom of the plot.
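A small check (a sketch, not part of the original answer):

```
# fct_relevel() moves "Not applicable" to the front, so it becomes level 1
rincome2 <- fct_relevel(gss_cat$rincome, "Not applicable")
head(levels(rincome2), 2)
#> [1] "Not applicable" "No answer"
```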
15\.5 Modifying factor levels
-----------------------------
### Exercise 15\.5\.1
How have the proportions of people identifying as Democrat, Republican, and Independent changed over time?
To answer that, we need to combine the multiple levels into Democrat, Republican, and Independent
```
levels(gss_cat$partyid)
#> [1] "No answer" "Don't know" "Other party"
#> [4] "Strong republican" "Not str republican" "Ind,near rep"
#> [7] "Independent" "Ind,near dem" "Not str democrat"
#> [10] "Strong democrat"
```
```
gss_cat %>%
mutate(
partyid =
fct_collapse(partyid,
other = c("No answer", "Don't know", "Other party"),
rep = c("Strong republican", "Not str republican"),
ind = c("Ind,near rep", "Independent", "Ind,near dem"),
dem = c("Not str democrat", "Strong democrat")
)
) %>%
count(year, partyid) %>%
group_by(year) %>%
mutate(p = n / sum(n)) %>%
ggplot(aes(
x = year, y = p,
colour = fct_reorder2(partyid, year, p)
)) +
geom_point() +
geom_line() +
labs(colour = "Party ID.")
```
### Exercise 15\.5\.2
How could you collapse `rincome` into a small set of categories?
Group all the non\-responses into one category, and then group the other categories into a smaller number. Since there is a clear ordering, we would not use `fct_lump()`; see the contrast sketch after the code below.
```
levels(gss_cat$rincome)
#> [1] "No answer" "Don't know" "Refused" "$25000 or more"
#> [5] "$20000 - 24999" "$15000 - 19999" "$10000 - 14999" "$8000 to 9999"
#> [9] "$7000 to 7999" "$6000 to 6999" "$5000 to 5999" "$4000 to 4999"
#> [13] "$3000 to 3999" "$1000 to 2999" "Lt $1000" "Not applicable"
```
```
library("stringr")
gss_cat %>%
mutate(
rincome =
fct_collapse(
rincome,
`Unknown` = c("No answer", "Don't know", "Refused", "Not applicable"),
`Lt $5000` = c("Lt $1000", str_c(
"$", c("1000", "3000", "4000"),
" to ", c("2999", "3999", "4999")
)),
`$5000 to 10000` = str_c(
"$", c("5000", "6000", "7000", "8000"),
" to ", c("5999", "6999", "7999", "9999")
)
)
) %>%
ggplot(aes(x = rincome)) +
geom_bar() +
coord_flip()
```
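For contrast, here is what `fct_lump()` would do (a sketch, my addition): it keeps the most frequent levels and lumps the rest into “Other”, which throws away the natural ordering of the income bands.

```
gss_cat %>%
  # keep the 5 most common income levels, lump everything else into "Other"
  mutate(rincome = fct_lump(rincome, n = 5)) %>%
  ggplot(aes(x = rincome)) +
  geom_bar() +
  coord_flip()
```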
| Data Science |
jrnold.github.io | https://jrnold.github.io/r4ds-exercise-solutions/dates-and-times.html |
16 Dates and times
==================
16\.1 Introduction
------------------
```
library("tidyverse")
library("lubridate")
library("nycflights13")
```
16\.2 Creating date/times
-------------------------
This code is needed by exercises.
```
make_datetime_100 <- function(year, month, day, time) {
make_datetime(year, month, day, time %/% 100, time %% 100)
}
flights_dt <- flights %>%
filter(!is.na(dep_time), !is.na(arr_time)) %>%
mutate(
dep_time = make_datetime_100(year, month, day, dep_time),
arr_time = make_datetime_100(year, month, day, arr_time),
sched_dep_time = make_datetime_100(year, month, day, sched_dep_time),
sched_arr_time = make_datetime_100(year, month, day, sched_arr_time)
) %>%
select(origin, dest, ends_with("delay"), ends_with("time"))
```
### Exercise 16\.2\.1
What happens if you parse a string that
contains invalid dates?
```
ret <- ymd(c("2010-10-10", "bananas"))
#> Warning: 1 failed to parse.
print(class(ret))
#> [1] "Date"
ret
#> [1] "2010-10-10" NA
```
It produces an `NA` and a warning message.
### Exercise 16\.2\.2
What does the `tzone` argument to `today()` do?
Why is it important?
It determines the time\-zone of the date.
Since different time\-zones can have different dates, the value of `today()` can vary depending on the time\-zone specified.
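For example (a small illustration; the time zones are arbitrary picks for the example):

```
# run near midnight UTC, these two calls can return different dates;
# the output depends on when the code is run, so it is not shown here
today(tzone = "UTC")
today(tzone = "Pacific/Auckland")
```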
### Exercise 16\.2\.3
Use the appropriate lubridate function to parse each of the following dates:
```
d1 <- "January 1, 2010"
d2 <- "2015-Mar-07"
d3 <- "06-Jun-2017"
d4 <- c("August 19 (2015)", "July 1 (2015)")
d5 <- "12/30/14"
```
```
mdy(d1)
#> [1] "2010-01-01"
ymd(d2)
#> [1] "2015-03-07"
dmy(d3)
#> [1] "2017-06-06"
mdy(d4)
#> [1] "2015-08-19" "2015-07-01"
mdy(d5)
#> [1] "2014-12-30"
```
16\.3 Date\-time components
---------------------------
The following code from the chapter is used:
```
sched_dep <- flights_dt %>%
mutate(minute = minute(sched_dep_time)) %>%
group_by(minute) %>%
summarise(
avg_delay = mean(arr_delay, na.rm = TRUE),
n = n()
)
#> `summarise()` ungrouping output (override with `.groups` argument)
```
As the chapter notes, the difference between a date\-time and the same date\-time rounded down with `floor_date()` gives the elapsed time within that period.
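A minimal sketch of that idea (my addition, not from the original): the number of minutes since midnight for each departure.

```
dep_within_day <- flights_dt %>%
  mutate(
    # round each departure down to the start of its day
    day_start = floor_date(dep_time, unit = "day"),
    # the difference is how far into the day the flight departed
    mins_since_midnight = as.numeric(difftime(dep_time, day_start, units = "mins"))
  )
```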
### Exercise 16\.3\.1
How does the distribution of flight times
within a day change over the course of the year?
Let’s try plotting this by month:
```
flights_dt %>%
filter(!is.na(dep_time)) %>%
mutate(dep_hour = update(dep_time, yday = 1)) %>%
mutate(month = factor(month(dep_time))) %>%
ggplot(aes(dep_hour, color = month)) +
geom_freqpoly(binwidth = 60 * 60)
```
This will look better if everything is normalized within groups. The reason
that February is lower is that there are fewer days and thus fewer flights.
```
flights_dt %>%
filter(!is.na(dep_time)) %>%
mutate(dep_hour = update(dep_time, yday = 1)) %>%
mutate(month = factor(month(dep_time))) %>%
ggplot(aes(dep_hour, color = month)) +
geom_freqpoly(aes(y = ..density..), binwidth = 60 * 60)
```
At least to me there doesn’t appear to be much difference in the within\-day distribution over the year, but I may be thinking about it incorrectly.
### Exercise 16\.3\.2
Compare `dep_time`, `sched_dep_time` and `dep_delay`. Are they consistent? Explain your findings.
If they are consistent, then `dep_time = sched_dep_time + dep_delay`.
```
flights_dt %>%
mutate(dep_time_ = sched_dep_time + dep_delay * 60) %>%
filter(dep_time_ != dep_time) %>%
select(dep_time_, dep_time, sched_dep_time, dep_delay)
#> # A tibble: 1,205 x 4
#> dep_time_ dep_time sched_dep_time dep_delay
#> <dttm> <dttm> <dttm> <dbl>
#> 1 2013-01-02 08:48:00 2013-01-01 08:48:00 2013-01-01 18:35:00 853
#> 2 2013-01-03 00:42:00 2013-01-02 00:42:00 2013-01-02 23:59:00 43
#> 3 2013-01-03 01:26:00 2013-01-02 01:26:00 2013-01-02 22:50:00 156
#> 4 2013-01-04 00:32:00 2013-01-03 00:32:00 2013-01-03 23:59:00 33
#> 5 2013-01-04 00:50:00 2013-01-03 00:50:00 2013-01-03 21:45:00 185
#> 6 2013-01-04 02:35:00 2013-01-03 02:35:00 2013-01-03 23:59:00 156
#> # … with 1,199 more rows
```
There exist discrepancies. It looks like there are mistakes in the dates. These
are flights in which the actual departure time is on the *next* day relative to
the scheduled departure time. We forgot to account for this when creating the
date\-times using the `make_datetime_100()` function in [16\.2\.2 From individual components](https://r4ds.had.co.nz/dates-and-times.html#from-individual-components). The code would have had to check whether the constructed departure time is less than
the scheduled departure time plus the departure delay (in minutes). Alternatively, simply adding the departure delay to the scheduled departure time is a more robust way to construct the departure time, because it automatically accounts for crossing into the next day.
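A minimal fix along the lines of the last suggestion (a sketch, my addition): rebuild the departure time from the schedule plus the delay, which rolls past midnight automatically.

```
flights_dt_fixed <- flights_dt %>%
  # dep_delay is in minutes, and POSIXct arithmetic is in seconds
  mutate(dep_time = sched_dep_time + dep_delay * 60)
```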
### Exercise 16\.3\.3
Compare `air_time` with the duration between the departure and arrival.
Explain your findings.
```
flights_dt %>%
mutate(
flight_duration = as.numeric(arr_time - dep_time),
air_time_mins = air_time,
diff = flight_duration - air_time_mins
) %>%
select(origin, dest, flight_duration, air_time_mins, diff)
#> # A tibble: 328,063 x 5
#> origin dest flight_duration air_time_mins diff
#> <chr> <chr> <dbl> <dbl> <dbl>
#> 1 EWR IAH 193 227 -34
#> 2 LGA IAH 197 227 -30
#> 3 JFK MIA 221 160 61
#> 4 JFK BQN 260 183 77
#> 5 LGA ATL 138 116 22
#> 6 EWR ORD 106 150 -44
#> # … with 328,057 more rows
```
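The exercise also asks for an explanation. A plausible one (my reading, not stated above): `air_time` excludes taxiing, and in nycflights13 the arrival time is recorded in the destination airport’s local time while the departure time uses New York time, so the naive difference is shifted by the time\-zone offset. Summarizing the gap by destination is consistent with that:

```
# average (gate-to-gate minus air_time) gap per destination; a sketch,
# assuming the time-zone explanation above
gap_by_dest <- flights_dt %>%
  mutate(gap = as.numeric(difftime(arr_time, dep_time, units = "mins")) - air_time) %>%
  group_by(dest) %>%
  summarise(avg_gap = mean(gap, na.rm = TRUE)) %>%
  arrange(avg_gap)
```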
### Exercise 16\.3\.4
How does the average delay time change over the course of a day? Should you use `dep_time` or `sched_dep_time`? Why?
Use `sched_dep_time` because that is the relevant metric for someone scheduling a flight. Also, using `dep_time` will always bias delays to later in the day since delays will push flights later.
```
flights_dt %>%
mutate(sched_dep_hour = hour(sched_dep_time)) %>%
group_by(sched_dep_hour) %>%
summarise(dep_delay = mean(dep_delay)) %>%
ggplot(aes(y = dep_delay, x = sched_dep_hour)) +
geom_point() +
geom_smooth()
#> `summarise()` ungrouping output (override with `.groups` argument)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
### Exercise 16\.3\.5
On what day of the week should you leave if you want to minimize the chance of a delay?
Saturday has the lowest average departure delay time and the lowest average arrival delay time.
```
flights_dt %>%
mutate(dow = wday(sched_dep_time)) %>%
group_by(dow) %>%
summarise(
dep_delay = mean(dep_delay),
arr_delay = mean(arr_delay, na.rm = TRUE)
) %>%
print(n = Inf)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 7 x 3
#> dow dep_delay arr_delay
#> <dbl> <dbl> <dbl>
#> 1 1 11.5 4.82
#> 2 2 14.7 9.65
#> 3 3 10.6 5.39
#> 4 4 11.7 7.05
#> 5 5 16.1 11.7
#> 6 6 14.7 9.07
#> 7 7 7.62 -1.45
```
```
flights_dt %>%
mutate(wday = wday(dep_time, label = TRUE)) %>%
group_by(wday) %>%
summarize(ave_dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
ggplot(aes(x = wday, y = ave_dep_delay)) +
geom_bar(stat = "identity")
#> `summarise()` ungrouping output (override with `.groups` argument)
```
```
flights_dt %>%
mutate(wday = wday(dep_time, label = TRUE)) %>%
group_by(wday) %>%
summarize(ave_arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
ggplot(aes(x = wday, y = ave_arr_delay)) +
geom_bar(stat = "identity")
#> `summarise()` ungrouping output (override with `.groups` argument)
```
### Exercise 16\.3\.6
What makes the distribution of `diamonds$carat` and `flights$sched_dep_time` similar?
```
ggplot(diamonds, aes(x = carat)) +
geom_density()
```
In both `carat` and `sched_dep_time` there are abnormally large numbers of values at nice “human” numbers. In `sched_dep_time` it is at 00 and 30 minutes. In carats, it is at 0, 1/3, 1/2, 2/3, and so on.
```
ggplot(diamonds, aes(x = carat %% 1 * 100)) +
geom_histogram(binwidth = 1)
```
In scheduled departure times it is 00 and 30 minutes, and minutes
ending in 0 and 5\.
```
ggplot(flights_dt, aes(x = minute(sched_dep_time))) +
geom_histogram(binwidth = 1)
```
### Exercise 16\.3\.7
Confirm my hypothesis that the early departures of flights in minutes 20\-30 and 50\-60 are caused by scheduled flights that leave early.
Hint: create a binary variable that tells you whether or not a flight was delayed.
First, I create a binary variable `early` that is equal to 1 if a flight leaves early, and 0 if it does not.
Then, I group flights by the minute of departure.
This shows that the proportion of flights that are early departures is highest between minutes 20–30 and 50–60\.
```
flights_dt %>%
mutate(minute = minute(dep_time),
early = dep_delay < 0) %>%
group_by(minute) %>%
summarise(
early = mean(early, na.rm = TRUE),
n = n()) %>%
ggplot(aes(minute, early)) +
geom_line()
#> `summarise()` ungrouping output (override with `.groups` argument)
```
16\.4 Time spans
----------------
### Exercise 16\.4\.1
Why is there `months()` but no `dmonths()`?
There is no unambiguous value of months in terms of seconds since months have differing numbers of days.
* 31 days: January, March, May, July, August, October, December
* 30 days: April, June, September, November
* 28 or 29 days: February
The month is not a duration of time defined independently of when it occurs, but a special interval between two dates.
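A small illustration (my addition): because a month is a calendar concept rather than a fixed number of seconds, adding `months(1)` respects month lengths and can even be undefined.

```
ymd("2019-01-31") + months(1) # there is no February 31st
#> [1] NA
ymd("2019-02-01") + months(1)
#> [1] "2019-03-01"
```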
### Exercise 16\.4\.2
Explain `days(overnight * 1)` to someone who has just started learning R.
How does it work?
The variable `overnight` is equal to `TRUE` or `FALSE`.
In arithmetic, `TRUE` is coerced to 1 and `FALSE` to 0, so `overnight * 1` equals 1 for an overnight flight and 0 otherwise. `days(overnight * 1)` therefore adds one day to the date for overnight flights and adds nothing (zero days) for all other flights.
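A small illustration of the coercion (my addition):

```
c(TRUE * 1, FALSE * 1)
#> [1] 1 0
ymd("2013-01-01") + days(1) # overnight flight: pushed to the next day
#> [1] "2013-01-02"
ymd("2013-01-01") + days(0) # same-day flight: unchanged
#> [1] "2013-01-01"
```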
### Exercise 16\.4\.3
Create a vector of dates giving the first day of every month in 2015\.
Create a vector of dates giving the first day of every month in the current year.
A vector of the first day of the month for every month in 2015:
```
ymd("2015-01-01") + months(0:11)
#> [1] "2015-01-01" "2015-02-01" "2015-03-01" "2015-04-01" "2015-05-01"
#> [6] "2015-06-01" "2015-07-01" "2015-08-01" "2015-09-01" "2015-10-01"
#> [11] "2015-11-01" "2015-12-01"
```
To get the vector of the first day of the month for *this* year, we first need to figure out what this year is, and get January 1st of it.
I can do that by taking `today()` and truncating it to the year using `floor_date()`:
```
floor_date(today(), unit = "year") + months(0:11)
#> [1] "2020-01-01" "2020-02-01" "2020-03-01" "2020-04-01" "2020-05-01"
#> [6] "2020-06-01" "2020-07-01" "2020-08-01" "2020-09-01" "2020-10-01"
#> [11] "2020-11-01" "2020-12-01"
```
### Exercise 16\.4\.4
Write a function that given your birthday (as a date), returns how old you are in years.
```
age <- function(bday) {
(bday %--% today()) %/% years(1)
}
age(ymd("1990-10-12"))
#> [1] 29
```
### Exercise 16\.4\.5
Why can’t `(today() %--% (today() + years(1)) / months(1)` work?
The code in the question is missing a parenthesis.
So, I will assume that the correct code is,
```
(today() %--% (today() + years(1))) / months(1)
#> [1] 12
```
While this code will not display a warning or message, it does not work exactly as
expected. The problem is discussed in the [Intervals](https://r4ds.had.co.nz/dates-and-times.html#intervals) section.
The numerator of the expression, `today() %--% (today() + years(1))`, is an *interval*, which includes both a duration of time and a starting point. The interval has an exact number of seconds.
The denominator of the expression, `months(1)`, is a period, which is meaningful to humans but not defined in terms of an exact number of seconds.
Months can be 28, 29, 30, or 31 days, so it is not clear what dividing by `months(1)` should mean.
The code does not produce a warning message, but it will not always produce the correct result.
To find the number of months within an interval use `%/%` instead of `/`,
```
(today() %--% (today() + years(1))) %/% months(1)
#> [1] 12
```
Alternatively, we could define a “month” as 30 days, and run
```
(today() %--% (today() + years(1))) / days(30)
#> [1] 12.2
```
Note also that `today() + years(1)` is not always defined: adding a period of one year to February 29th of a leap year returns `NA`:
```
as.Date("2016-02-29") + years(1)
#> [1] NA
```
16\.5 Time zones
----------------
No exercises
16\.1 Introduction
------------------
```
library("tidyverse")
library("lubridate")
library("nycflights13")
```
16\.2 Creating date/times
-------------------------
This code is needed by exercises.
```
make_datetime_100 <- function(year, month, day, time) {
make_datetime(year, month, day, time %/% 100, time %% 100)
}
flights_dt <- flights %>%
filter(!is.na(dep_time), !is.na(arr_time)) %>%
mutate(
dep_time = make_datetime_100(year, month, day, dep_time),
arr_time = make_datetime_100(year, month, day, arr_time),
sched_dep_time = make_datetime_100(year, month, day, sched_dep_time),
sched_arr_time = make_datetime_100(year, month, day, sched_arr_time)
) %>%
select(origin, dest, ends_with("delay"), ends_with("time"))
```
### Exercise 16\.2\.1
What happens if you parse a string that
contains invalid dates?
```
ret <- ymd(c("2010-10-10", "bananas"))
#> Warning: 1 failed to parse.
print(class(ret))
#> [1] "Date"
ret
#> [1] "2010-10-10" NA
```
It produces an `NA` and a warning message.
### Exercise 16\.2\.2
What does the `tzone` argument to `today()` do?
Why is it important?
It determines the time\-zone of the date.
Since different time\-zones can have different dates, the value of `today()` can vary depending on the time\-zone specified.
### Exercise 16\.2\.3
Use the appropriate lubridate function to parse each of the following dates:
```
d1 <- "January 1, 2010"
d2 <- "2015-Mar-07"
d3 <- "06-Jun-2017"
d4 <- c("August 19 (2015)", "July 1 (2015)")
d5 <- "12/30/14"
```
```
mdy(d1)
#> [1] "2010-01-01"
ymd(d2)
#> [1] "2015-03-07"
dmy(d3)
#> [1] "2017-06-06"
mdy(d4)
#> [1] "2015-08-19" "2015-07-01"
mdy(d5)
#> [1] "2014-12-30"
```
### Exercise 16\.2\.1
What happens if you parse a string that
contains invalid dates?
```
ret <- ymd(c("2010-10-10", "bananas"))
#> Warning: 1 failed to parse.
print(class(ret))
#> [1] "Date"
ret
#> [1] "2010-10-10" NA
```
It produces an `NA` and a warning message.
### Exercise 16\.2\.2
What does the `tzone` argument to `today()` do?
Why is it important?
It determines the time\-zone of the date.
Since different time\-zones can have different dates, the value of `today()` can vary depending on the time\-zone specified.
### Exercise 16\.2\.3
Use the appropriate lubridate function to parse each of the following dates:
```
d1 <- "January 1, 2010"
d2 <- "2015-Mar-07"
d3 <- "06-Jun-2017"
d4 <- c("August 19 (2015)", "July 1 (2015)")
d5 <- "12/30/14"
```
```
mdy(d1)
#> [1] "2010-01-01"
ymd(d2)
#> [1] "2015-03-07"
dmy(d3)
#> [1] "2017-06-06"
mdy(d4)
#> [1] "2015-08-19" "2015-07-01"
mdy(d5)
#> [1] "2014-12-30"
```
16\.3 Date\-time components
---------------------------
The following code from the chapter is used
```
sched_dep <- flights_dt %>%
mutate(minute = minute(sched_dep_time)) %>%
group_by(minute) %>%
summarise(
avg_delay = mean(arr_delay, na.rm = TRUE),
n = n()
)
#> `summarise()` ungrouping output (override with `.groups` argument)
```
In the previous code, the difference between rounded and un\-rounded dates provides the within\-period time.
### Exercise 16\.3\.1
How does the distribution of flight times
within a day change over the course of the year?
Let’s try plotting this by month:
```
flights_dt %>%
filter(!is.na(dep_time)) %>%
mutate(dep_hour = update(dep_time, yday = 1)) %>%
mutate(month = factor(month(dep_time))) %>%
ggplot(aes(dep_hour, color = month)) +
geom_freqpoly(binwidth = 60 * 60)
```
This will look better if everything is normalized within groups. The reason
that February is lower is that there are fewer days and thus fewer flights.
```
flights_dt %>%
filter(!is.na(dep_time)) %>%
mutate(dep_hour = update(dep_time, yday = 1)) %>%
mutate(month = factor(month(dep_time))) %>%
ggplot(aes(dep_hour, color = month)) +
geom_freqpoly(aes(y = ..density..), binwidth = 60 * 60)
```
At least to me there doesn’t appear to much difference in within\-day distribution over the year, but I maybe thinking about it incorrectly.
### Exercise 16\.3\.2
Compare `dep_time`, `sched_dep_time` and `dep_delay`. Are they consistent? Explain your findings.
If they are consistent, then `dep_time = sched_dep_time + dep_delay`.
```
flights_dt %>%
mutate(dep_time_ = sched_dep_time + dep_delay * 60) %>%
filter(dep_time_ != dep_time) %>%
select(dep_time_, dep_time, sched_dep_time, dep_delay)
#> # A tibble: 1,205 x 4
#> dep_time_ dep_time sched_dep_time dep_delay
#> <dttm> <dttm> <dttm> <dbl>
#> 1 2013-01-02 08:48:00 2013-01-01 08:48:00 2013-01-01 18:35:00 853
#> 2 2013-01-03 00:42:00 2013-01-02 00:42:00 2013-01-02 23:59:00 43
#> 3 2013-01-03 01:26:00 2013-01-02 01:26:00 2013-01-02 22:50:00 156
#> 4 2013-01-04 00:32:00 2013-01-03 00:32:00 2013-01-03 23:59:00 33
#> 5 2013-01-04 00:50:00 2013-01-03 00:50:00 2013-01-03 21:45:00 185
#> 6 2013-01-04 02:35:00 2013-01-03 02:35:00 2013-01-03 23:59:00 156
#> # … with 1,199 more rows
```
There exist discrepancies. It looks like there are mistakes in the dates. These
are flights in which the actual departure time is on the *next* day relative to
the scheduled departure time. We forgot to account for this when creating the
date\-times using `make_datetime_100()` function in [16\.2\.2 From individual components](https://r4ds.had.co.nz/dates-and-times.html#from-individual-components). The code would have had to check if the departure time is less than
the scheduled departure time plus departure delay (in minutes). Alternatively, simply adding the departure delay to the scheduled departure time is a more robust way to construct the departure time because it will automatically account for crossing into the next day.
### Exercise 16\.3\.3
Compare `air_time` with the duration between the departure and arrival.
Explain your findings.
```
flights_dt %>%
mutate(
flight_duration = as.numeric(arr_time - dep_time),
air_time_mins = air_time,
diff = flight_duration - air_time_mins
) %>%
select(origin, dest, flight_duration, air_time_mins, diff)
#> # A tibble: 328,063 x 5
#> origin dest flight_duration air_time_mins diff
#> <chr> <chr> <dbl> <dbl> <dbl>
#> 1 EWR IAH 193 227 -34
#> 2 LGA IAH 197 227 -30
#> 3 JFK MIA 221 160 61
#> 4 JFK BQN 260 183 77
#> 5 LGA ATL 138 116 22
#> 6 EWR ORD 106 150 -44
#> # … with 328,057 more rows
```
### Exercise 16\.3\.4
How does the average delay time change over the course of a day? Should you use `dep_time` or `sched_dep_time`? Why?
Use `sched_dep_time` because that is the relevant metric for someone scheduling a flight. Also, using `dep_time` will always bias delays to later in the day since delays will push flights later.
```
flights_dt %>%
mutate(sched_dep_hour = hour(sched_dep_time)) %>%
group_by(sched_dep_hour) %>%
summarise(dep_delay = mean(dep_delay)) %>%
ggplot(aes(y = dep_delay, x = sched_dep_hour)) +
geom_point() +
geom_smooth()
#> `summarise()` ungrouping output (override with `.groups` argument)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
### Exercise 16\.3\.5
On what day of the week should you leave if you want to minimize the chance of a delay?
Saturday has the lowest average departure delay time and the lowest average arrival delay time.
```
flights_dt %>%
mutate(dow = wday(sched_dep_time)) %>%
group_by(dow) %>%
summarise(
dep_delay = mean(dep_delay),
arr_delay = mean(arr_delay, na.rm = TRUE)
) %>%
print(n = Inf)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 7 x 3
#> dow dep_delay arr_delay
#> <dbl> <dbl> <dbl>
#> 1 1 11.5 4.82
#> 2 2 14.7 9.65
#> 3 3 10.6 5.39
#> 4 4 11.7 7.05
#> 5 5 16.1 11.7
#> 6 6 14.7 9.07
#> 7 7 7.62 -1.45
```
```
flights_dt %>%
mutate(wday = wday(dep_time, label = TRUE)) %>%
group_by(wday) %>%
summarize(ave_dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
ggplot(aes(x = wday, y = ave_dep_delay)) +
geom_bar(stat = "identity")
#> `summarise()` ungrouping output (override with `.groups` argument)
```
```
flights_dt %>%
mutate(wday = wday(dep_time, label = TRUE)) %>%
group_by(wday) %>%
summarize(ave_arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
ggplot(aes(x = wday, y = ave_arr_delay)) +
geom_bar(stat = "identity")
#> `summarise()` ungrouping output (override with `.groups` argument)
```
### Exercise 16\.3\.6
What makes the distribution of `diamonds$carat` and `flights$sched_dep_time` similar?
```
ggplot(diamonds, aes(x = carat)) +
geom_density()
```
In both `carat` and `sched_dep_time` there are abnormally large numbers of values are at nice “human” numbers. In `sched_dep_time` it is at 00 and 30 minutes. In carats, it is at 0, 1/3, 1/2, 2/3,
```
ggplot(diamonds, aes(x = carat %% 1 * 100)) +
geom_histogram(binwidth = 1)
```
In scheduled departure times it is 00 and 30 minutes, and minutes
ending in 0 and 5\.
```
ggplot(flights_dt, aes(x = minute(sched_dep_time))) +
geom_histogram(binwidth = 1)
```
### Exercise 16\.3\.7
Confirm my hypothesis that the early departures of flights in minutes 20\-30 and 50\-60 are caused by scheduled flights that leave early.
Hint: create a binary variable that tells you whether or not a flight was delayed.
First, I create a binary variable `early` that is equal to 1 if a flight leaves early, and 0 if it does not.
Then, I group flights by the minute of departure.
This shows that the proportion of flights that are early departures is highest between minutes 20–30 and 50–60\.
```
flights_dt %>%
mutate(minute = minute(dep_time),
early = dep_delay < 0) %>%
group_by(minute) %>%
summarise(
early = mean(early, na.rm = TRUE),
n = n()) %>%
ggplot(aes(minute, early)) +
geom_line()
#> `summarise()` ungrouping output (override with `.groups` argument)
```
### Exercise 16\.3\.1
How does the distribution of flight times
within a day change over the course of the year?
Let’s try plotting this by month:
```
flights_dt %>%
filter(!is.na(dep_time)) %>%
mutate(dep_hour = update(dep_time, yday = 1)) %>%
mutate(month = factor(month(dep_time))) %>%
ggplot(aes(dep_hour, color = month)) +
geom_freqpoly(binwidth = 60 * 60)
```
This will look better if everything is normalized within groups. The reason
that February is lower is that there are fewer days and thus fewer flights.
```
flights_dt %>%
filter(!is.na(dep_time)) %>%
mutate(dep_hour = update(dep_time, yday = 1)) %>%
mutate(month = factor(month(dep_time))) %>%
ggplot(aes(dep_hour, color = month)) +
geom_freqpoly(aes(y = ..density..), binwidth = 60 * 60)
```
At least to me there doesn’t appear to much difference in within\-day distribution over the year, but I maybe thinking about it incorrectly.
### Exercise 16\.3\.2
Compare `dep_time`, `sched_dep_time` and `dep_delay`. Are they consistent? Explain your findings.
If they are consistent, then `dep_time = sched_dep_time + dep_delay`.
```
flights_dt %>%
mutate(dep_time_ = sched_dep_time + dep_delay * 60) %>%
filter(dep_time_ != dep_time) %>%
select(dep_time_, dep_time, sched_dep_time, dep_delay)
#> # A tibble: 1,205 x 4
#> dep_time_ dep_time sched_dep_time dep_delay
#> <dttm> <dttm> <dttm> <dbl>
#> 1 2013-01-02 08:48:00 2013-01-01 08:48:00 2013-01-01 18:35:00 853
#> 2 2013-01-03 00:42:00 2013-01-02 00:42:00 2013-01-02 23:59:00 43
#> 3 2013-01-03 01:26:00 2013-01-02 01:26:00 2013-01-02 22:50:00 156
#> 4 2013-01-04 00:32:00 2013-01-03 00:32:00 2013-01-03 23:59:00 33
#> 5 2013-01-04 00:50:00 2013-01-03 00:50:00 2013-01-03 21:45:00 185
#> 6 2013-01-04 02:35:00 2013-01-03 02:35:00 2013-01-03 23:59:00 156
#> # … with 1,199 more rows
```
There exist discrepancies. It looks like there are mistakes in the dates. These
are flights in which the actual departure time is on the *next* day relative to
the scheduled departure time. We forgot to account for this when creating the
date\-times using `make_datetime_100()` function in [16\.2\.2 From individual components](https://r4ds.had.co.nz/dates-and-times.html#from-individual-components). The code would have had to check if the departure time is less than
the scheduled departure time plus departure delay (in minutes). Alternatively, simply adding the departure delay to the scheduled departure time is a more robust way to construct the departure time because it will automatically account for crossing into the next day.
### Exercise 16\.3\.3
Compare `air_time` with the duration between the departure and arrival.
Explain your findings.
```
flights_dt %>%
mutate(
flight_duration = as.numeric(arr_time - dep_time),
air_time_mins = air_time,
diff = flight_duration - air_time_mins
) %>%
select(origin, dest, flight_duration, air_time_mins, diff)
#> # A tibble: 328,063 x 5
#> origin dest flight_duration air_time_mins diff
#> <chr> <chr> <dbl> <dbl> <dbl>
#> 1 EWR IAH 193 227 -34
#> 2 LGA IAH 197 227 -30
#> 3 JFK MIA 221 160 61
#> 4 JFK BQN 260 183 77
#> 5 LGA ATL 138 116 22
#> 6 EWR ORD 106 150 -44
#> # … with 328,057 more rows
```
### Exercise 16\.3\.4
How does the average delay time change over the course of a day? Should you use `dep_time` or `sched_dep_time`? Why?
Use `sched_dep_time` because that is the relevant metric for someone scheduling a flight. Also, using `dep_time` will always bias delays to later in the day since delays will push flights later.
```
flights_dt %>%
mutate(sched_dep_hour = hour(sched_dep_time)) %>%
group_by(sched_dep_hour) %>%
summarise(dep_delay = mean(dep_delay)) %>%
ggplot(aes(y = dep_delay, x = sched_dep_hour)) +
geom_point() +
geom_smooth()
#> `summarise()` ungrouping output (override with `.groups` argument)
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
### Exercise 16\.3\.5
On what day of the week should you leave if you want to minimize the chance of a delay?
Saturday has the lowest average departure delay time and the lowest average arrival delay time.
```
flights_dt %>%
mutate(dow = wday(sched_dep_time)) %>%
group_by(dow) %>%
summarise(
dep_delay = mean(dep_delay),
arr_delay = mean(arr_delay, na.rm = TRUE)
) %>%
print(n = Inf)
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 7 x 3
#> dow dep_delay arr_delay
#> <dbl> <dbl> <dbl>
#> 1 1 11.5 4.82
#> 2 2 14.7 9.65
#> 3 3 10.6 5.39
#> 4 4 11.7 7.05
#> 5 5 16.1 11.7
#> 6 6 14.7 9.07
#> 7 7 7.62 -1.45
```
```
flights_dt %>%
mutate(wday = wday(dep_time, label = TRUE)) %>%
group_by(wday) %>%
summarize(ave_dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
ggplot(aes(x = wday, y = ave_dep_delay)) +
geom_bar(stat = "identity")
#> `summarise()` ungrouping output (override with `.groups` argument)
```
```
flights_dt %>%
mutate(wday = wday(dep_time, label = TRUE)) %>%
group_by(wday) %>%
summarize(ave_arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
ggplot(aes(x = wday, y = ave_arr_delay)) +
geom_bar(stat = "identity")
#> `summarise()` ungrouping output (override with `.groups` argument)
```
### Exercise 16\.3\.6
What makes the distribution of `diamonds$carat` and `flights$sched_dep_time` similar?
```
ggplot(diamonds, aes(x = carat)) +
geom_density()
```
In both `carat` and `sched_dep_time` there are abnormally large numbers of values are at nice “human” numbers. In `sched_dep_time` it is at 00 and 30 minutes. In carats, it is at 0, 1/3, 1/2, 2/3,
```
ggplot(diamonds, aes(x = carat %% 1 * 100)) +
geom_histogram(binwidth = 1)
```
In scheduled departure times it is 00 and 30 minutes, and minutes
ending in 0 and 5\.
```
ggplot(flights_dt, aes(x = minute(sched_dep_time))) +
geom_histogram(binwidth = 1)
```
### Exercise 16\.3\.7
Confirm my hypothesis that the early departures of flights in minutes 20\-30 and 50\-60 are caused by scheduled flights that leave early.
Hint: create a binary variable that tells you whether or not a flight was delayed.
First, I create a binary variable `early` that is equal to 1 if a flight leaves early, and 0 if it does not.
Then, I group flights by the minute of departure.
This shows that the proportion of flights that are early departures is highest between minutes 20–30 and 50–60\.
```
flights_dt %>%
mutate(minute = minute(dep_time),
early = dep_delay < 0) %>%
group_by(minute) %>%
summarise(
early = mean(early, na.rm = TRUE),
n = n()) %>%
ggplot(aes(minute, early)) +
geom_line()
#> `summarise()` ungrouping output (override with `.groups` argument)
```
16\.4 Time spans
----------------
### Exercise 16\.4\.1
Why is there `months()` but no `dmonths()`?
There is no unambiguous value of months in terms of seconds since months have differing numbers of days.
* 31 days: January, March, May, July, August, October, December
* 30 days: April, June, September, November
* 28 or 29 days: February
The month is not a duration of time defined independently of when it occurs, but a special interval between two dates.
### Exercise 16\.4\.2
Explain `days(overnight * 1)` to someone who has just started learning R.
How does it work?
The variable `overnight` is equal to `TRUE` or `FALSE`.
If it is an overnight flight, this becomes 1 day, and if not, then overnight \= 0, and no days are added to the date.
### Exercise 16\.4\.3
Create a vector of dates giving the first day of every month in 2015\.
Create a vector of dates giving the first day of every month in the current year.
A vector of the first day of the month for every month in 2015:
```
ymd("2015-01-01") + months(0:11)
#> [1] "2015-01-01" "2015-02-01" "2015-03-01" "2015-04-01" "2015-05-01"
#> [6] "2015-06-01" "2015-07-01" "2015-08-01" "2015-09-01" "2015-10-01"
#> [11] "2015-11-01" "2015-12-01"
```
To get the vector of the first day of the month for *this* year, we first need to figure out what this year is, and get January 1st of it.
I can do that by taking `today()` and truncating it to the year using `floor_date()`:
```
floor_date(today(), unit = "year") + months(0:11)
#> [1] "2020-01-01" "2020-02-01" "2020-03-01" "2020-04-01" "2020-05-01"
#> [6] "2020-06-01" "2020-07-01" "2020-08-01" "2020-09-01" "2020-10-01"
#> [11] "2020-11-01" "2020-12-01"
```
### Exercise 16\.4\.4
Write a function that given your birthday (as a date), returns how old you are in years.
```
age <- function(bday) {
(bday %--% today()) %/% years(1)
}
age(ymd("1990-10-12"))
#> [1] 29
```
### Exercise 16\.4\.5
Why can’t `(today() %--% (today() + years(1)) / months(1)` work?
The code in the question is missing a parenthesis.
So, I will assume that the correct code is,
```
(today() %--% (today() + years(1))) / months(1)
#> [1] 12
```
While this code will not display a warning or message, it does not work exactly as
expected. The problem is discussed in the [Intervals](https://r4ds.had.co.nz/dates-and-times.html#intervals) section.
The numerator of the expression, `today() %--% (today() + years(1))`, is an *interval*, which includes both a duration of time and a starting point. The interval has an exact number of seconds.
The denominator of the expression, `months(1)`, is a period, which is meaningful to humans but not defined in terms of an exact number of seconds.
Months can be 28, 29, 30, or 31 days long, so it is not clear what length of time `months(1)` should correspond to, and therefore what to divide by.
The code does not produce a warning message, but it will not always produce the correct result.
To find the number of months within an interval use `%/%` instead of `/`,
```
(today() %--% (today() + years(1))) %/% months(1)
#> [1] 12
```
Alternatively, we could define a “month” as 30 days, and run
```
(today() %--% (today() + years(1))) / days(30)
#> [1] 12.2
```
Also note that adding `years(1)` to a date is not always defined: starting from February 29th of a leap year, the corresponding date in the following year does not exist, so the result is `NA`:
```
as.Date("2016-02-29") + years(1)
#> [1] NA
```
16\.5 Time zones
----------------
No exercises
| Data Science |
jrnold.github.io | https://jrnold.github.io/r4ds-exercise-solutions/functions.html |
19 Functions
============
19\.1 Introduction
------------------
```
library("tidyverse")
library("lubridate")
```
19\.2 When should you write a function?
---------------------------------------
### Exercise 19\.2\.1
Why is `TRUE` not a parameter to `rescale01()`?
What would happen if `x` contained a single missing value, and `na.rm` was `FALSE`?
The code for `rescale01()` is reproduced below.
```
rescale01 <- function(x) {
rng <- range(x, na.rm = TRUE, finite = TRUE)
(x - rng[1]) / (rng[2] - rng[1])
}
```
The value `TRUE` is hard-coded rather than exposed as a parameter because ignoring missing and non-finite values when computing the range is part of what `rescale01()` is meant to do; there is no useful alternative behavior for the caller to choose. If `x` contains a single missing value and `na.rm = FALSE`, then this function still returns non-missing values for the other elements.
```
rescale01_alt <- function(x, na.rm = FALSE) {
rng <- range(x, na.rm = na.rm, finite = TRUE)
(x - rng[1]) / (rng[2] - rng[1])
}
rescale01_alt(c(NA, 1:5), na.rm = FALSE)
#> [1] NA 0.00 0.25 0.50 0.75 1.00
rescale01_alt(c(NA, 1:5), na.rm = TRUE)
#> [1] NA 0.00 0.25 0.50 0.75 1.00
```
The option `finite = TRUE` to `range()` will drop all non\-finite elements, and `NA` is a non\-finite element.
However, if both `finite = FALSE` and `na.rm = FALSE`, then this function will return a vector of `NA` values.
Recall, arithmetic operations involving `NA` values return `NA`.
```
rescale01_alt2 <- function(x, na.rm = FALSE, finite = FALSE) {
rng <- range(x, na.rm = na.rm, finite = finite)
(x - rng[1]) / (rng[2] - rng[1])
}
rescale01_alt2(c(NA, 1:5), na.rm = FALSE, finite = FALSE)
#> [1] NA NA NA NA NA NA
```
### Exercise 19\.2\.2
In the second variant of `rescale01()`, infinite values are left unchanged.
Rewrite `rescale01()` so that `-Inf` is mapped to `0`, and `Inf` is mapped to `1`.
```
rescale01 <- function(x) {
rng <- range(x, na.rm = TRUE, finite = TRUE)
y <- (x - rng[1]) / (rng[2] - rng[1])
y[y == -Inf] <- 0
y[y == Inf] <- 1
y
}
rescale01(c(Inf, -Inf, 0:5, NA))
#> [1] 1.0 0.0 0.0 0.2 0.4 0.6 0.8 1.0 NA
```
### Exercise 19\.2\.3
Practice turning the following code snippets into functions. Think about what each function does. What would you call it? How many arguments does it need? Can you rewrite it to be more expressive or less duplicative?
```
mean(is.na(x))
x / sum(x, na.rm = TRUE)
sd(x, na.rm = TRUE) / mean(x, na.rm = TRUE)
```
This code calculates the proportion of `NA` values in a vector.
```
mean(is.na(x))
```
I will write it as a function named `prop_na()` that takes a single argument `x`,
and returns a single numeric value between 0 and 1\.
```
prop_na <- function(x) {
mean(is.na(x))
}
prop_na(c(0, 1, 2, NA, 4, NA))
#> [1] 0.333
```
This code standardizes a vector so that it sums to one.
```
x / sum(x, na.rm = TRUE)
```
I’ll write a function named `sum_to_one()`, which is a function of a single argument, `x`, the vector to standardize, and an optional argument `na.rm`.
The optional argument, `na.rm`, makes the function more expressive, since it can
handle `NA` values in two ways (returning `NA` or dropping them).
Additionally, this makes `sum_to_one()` consistent with `sum()`, `mean()`, and many
other R functions which have a `na.rm` argument.
While the example code had `na.rm = TRUE`, I set `na.rm = FALSE` by default
in order to make the function behave the same as the built\-in functions like `sum()` and `mean()` in its handling of missing values.
```
sum_to_one <- function(x, na.rm = FALSE) {
x / sum(x, na.rm = na.rm)
}
```
```
# no missing values
sum_to_one(1:5)
#> [1] 0.0667 0.1333 0.2000 0.2667 0.3333
# if any missing, return all missing
sum_to_one(c(1:5, NA))
#> [1] NA NA NA NA NA NA
# drop missing values when standardizing
sum_to_one(c(1:5, NA), na.rm = TRUE)
#> [1] 0.0667 0.1333 0.2000 0.2667 0.3333 NA
```
This code calculates the [coefficient of variation](https://en.wikipedia.org/wiki/Coefficient_of_variation) (assuming that `x` can only take non\-negative values), which is the standard deviation divided by the mean.
```
sd(x, na.rm = TRUE) / mean(x, na.rm = TRUE)
```
I’ll write a function named `coef_variation()`, which takes a single argument `x`,
and an optional `na.rm` argument.
```
coef_variation <- function(x, na.rm = FALSE) {
sd(x, na.rm = na.rm) / mean(x, na.rm = na.rm)
}
coef_variation(1:5)
#> [1] 0.527
coef_variation(c(1:5, NA))
#> [1] NA
coef_variation(c(1:5, NA), na.rm = TRUE)
#> [1] 0.527
```
### Exercise 19\.2\.4
Follow [https://nicercode.github.io/intro/writing\-functions.html](https://nicercode.github.io/intro/writing-functions.html) to write your own functions to compute the variance and skew of a numeric vector.
**Note** The math in [https://nicercode.github.io/intro/writing\-functions.html](https://nicercode.github.io/intro/writing-functions.html) seems not to be rendering.
The sample variance is defined as,
\\\[
\\mathrm{Var}(x) \= \\frac{1}{n \- 1} \\sum\_{i\=1}^n (x\_i \- \\bar{x}) ^2 \\text{,}
\\]
where \\(\\bar{x} \= (\\sum\_i^n x\_i) / n\\) is the sample mean.
The corresponding function is:
```
variance <- function(x, na.rm = TRUE) {
  # drop missing values first so that n, the mean, and the sum all agree
  if (na.rm) {
    x <- x[!is.na(x)]
  }
  n <- length(x)
  m <- mean(x)
  sq_err <- (x - m)^2
  sum(sq_err) / (n - 1)
}
```
```
var(1:10)
#> [1] 9.17
variance(1:10)
#> [1] 9.17
```
There are multiple definitions for [skewness](https://en.wikipedia.org/wiki/Skewness), but we will use the following one,
\\\[
\\mathrm{Skew}(x) \= \\frac{\\frac{1}{n \- 2}\\left(\\sum\_{i\=1}^{n}(x\_{i} \- \\bar x)^3\\right)}{\\mathrm{Var}(x)^{3 / 2}} \\text{.}
\\]
The corresponding function is:
```
skewness <- function(x, na.rm = FALSE) {
  # drop missing values first so that n, the sum, and the variance all agree
  if (na.rm) {
    x <- x[!is.na(x)]
  }
  n <- length(x)
  m <- mean(x)
  v <- var(x)
  (sum((x - m) ^ 3) / (n - 2)) / v ^ (3 / 2)
}
```
```
skewness(c(1, 2, 5, 100))
#> [1] 1.49
```
### Exercise 19\.2\.5
Write `both_na()`, a function that takes two vectors of the same length and returns the number of positions that have an `NA` in both vectors.
```
both_na <- function(x, y) {
sum(is.na(x) & is.na(y))
}
both_na(
c(NA, NA, 1, 2),
c(NA, 1, NA, 2)
)
#> [1] 1
both_na(
c(NA, NA, 1, 2, NA, NA, 1),
c(NA, 1, NA, 2, NA, NA, 1)
)
#> [1] 3
```
### Exercise 19\.2\.6
What do the following functions do? Why are they useful even though they are so short?
```
is_directory <- function(x) file.info(x)$isdir
is_readable <- function(x) file.access(x, 4) == 0
```
The function `is_directory()` checks whether the path in `x` is a directory.
The function `is_readable()` checks whether the path in `x` is readable, meaning that the file exists and the user has permission to open it.
These functions are useful even though they are short because their names make it much clearer what the code is doing.
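As a quick check (a minimal sketch; the exact result depends on your file system, but the working directory is normally a readable directory):
```
# "." refers to the current working directory
is_directory(".")
#> [1] TRUE
is_readable(".")
#> [1] TRUE
```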
### Exercise 19\.2\.7
Read the complete lyrics to “Little Bunny Foo Foo”. There’s a lot of duplication in this song. Extend the initial piping example to recreate the complete song, and use functions to reduce the duplication.
The lyrics of one of the [most common versions](https://en.wikipedia.org/wiki/Little_Bunny_Foo_Foo) of this song are
> Little bunny Foo Foo
>
> Hopping through the forest
>
> Scooping up the field mice
>
> And bopping them on the head
>
>
> Down came the Good Fairy, and she said
>
> "Little bunny Foo Foo
>
> I don’t want to see you
> Scooping up the field mice
>
>
> And bopping them on the head.
>
> I’ll give you three chances,
>
> And if you don’t stop, I’ll turn you into a GOON!"
>
> And the next day…
The verses repeat with one chance fewer each time.
When there are no chances left, the Good Fairy says
> “I gave you three chances, and you didn’t stop; so….”
>
> POOF. She turned him into a GOON!
>
> And the moral of this story is: *hare today, goon tomorrow.*
Here’s one way of writing this
```
threat <- function(chances) {
give_chances(
from = Good_Fairy,
to = foo_foo,
number = chances,
condition = "Don't behave",
consequence = turn_into_goon
)
}
lyric <- function() {
foo_foo %>%
hop(through = forest) %>%
scoop(up = field_mouse) %>%
bop(on = head)
down_came(Good_Fairy)
said(
Good_Fairy,
c(
"Little bunny Foo Foo",
"I don't want to see you",
"Scooping up the field mice",
"And bopping them on the head."
)
)
}
lyric()
threat(3)
lyric()
threat(2)
lyric()
threat(1)
lyric()
turn_into_goon(Good_Fairy, foo_foo)
```
19\.3 Functions are for humans and computers
--------------------------------------------
### Exercise 19\.3\.1
Read the source code for each of the following three functions, puzzle out what they do, and then brainstorm better names.
```
f1 <- function(string, prefix) {
substr(string, 1, nchar(prefix)) == prefix
}
f2 <- function(x) {
if (length(x) <= 1) return(NULL)
x[-length(x)]
}
f3 <- function(x, y) {
rep(y, length.out = length(x))
}
```
The function `f1` tests whether each element of the character vector `string`
starts with the string `prefix`. For example,
```
f1(c("abc", "abcde", "ad"), "ab")
#> [1] TRUE TRUE FALSE
```
A better name for `f1` is `has_prefix()`.
The function `f2` drops the last element of the vector `x`.
```
f2(1:3)
#> [1] 1 2
f2(1:2)
#> [1] 1
f2(1)
#> NULL
```
A better name for `f2` is `drop_last()`.
The function `f3` repeats `y` once for each element of `x`.
```
f3(1:3, 4)
#> [1] 4 4 4
```
Good names would include `recycle()` (R’s name for this behavior) or `expand()`.
### Exercise 19\.3\.2
Take a function that you’ve written recently and spend 5 minutes brainstorming a better name for it and its arguments.
Answer left to the reader.
### Exercise 19\.3\.3
Compare and contrast `rnorm()` and `MASS::mvrnorm()`. How could you make them more consistent?
`rnorm()` samples from the univariate normal distribution, while `MASS::mvrnorm()`
samples from the multivariate normal distribution. The main arguments of
`rnorm()` are `n`, `mean`, and `sd`; the main arguments of `MASS::mvrnorm()` are `n`,
`mu`, and `Sigma`. To be consistent they should use the same names, but this is
difficult. In general, it is better to be consistent with more widely used
functions, e.g. `rmvnorm()` should follow the conventions of `rnorm()`. However,
while `mean` is still correct in the multivariate case, `sd` does not make sense
there, since the multivariate analogue is a covariance matrix. At least each function is internally consistent:
it would be bad practice to mix `mu` with `sd`, or `mean` with `Sigma`, as arguments.
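For reference, a side-by-side sketch of the two call signatures (the draws are random, so output is omitted):
```
# three draws from a univariate standard normal
rnorm(3, mean = 0, sd = 1)
# three draws from a bivariate normal with mean (0, 0) and identity covariance
MASS::mvrnorm(3, mu = c(0, 0), Sigma = diag(2))
```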
### Exercise 19\.3\.4
Make a case for why `norm_r()`, `norm_d()` etc would be better than `rnorm()`, `dnorm()`. Make a case for the opposite.
If named `norm_r()` and `norm_d()`, the naming convention groups functions by their
distribution.
If named `rnorm()` and `dnorm()`, the naming convention groups functions
by the action they perform.
* `r*` functions always sample from distributions: for example,
`rnorm()`, `rbinom()`, `runif()`, and `rexp()`.
* `d*` functions calculate the probability density or mass of a distribution:
For example, `dnorm()`, `dbinom()`, `dunif()`, and `dexp()`.
R distributions use this latter naming convention.
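A small example of the `d*`/`r*` split for the normal distribution (a sketch; the `rnorm()` draw is random, so its output is omitted):
```
# d*: density of the standard normal at 0
dnorm(0)
#> [1] 0.399
# r*: two random draws from the standard normal
rnorm(2)
```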
19\.4 Conditional execution
---------------------------
### Exercise 19\.4\.1
What’s the difference between `if` and `ifelse()`? \> Carefully read the help and construct three examples that illustrate the key differences.
The keyword `if` tests a single logical condition and chooses which block of code to run, while `ifelse()` is vectorized: it tests each element of a logical vector and returns a vector built from the corresponding `yes` and `no` values.
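A minimal pair of examples of the difference:
```
# `if` looks at one TRUE/FALSE value and runs a single branch
x <- 5
if (x > 3) "big" else "small"
#> [1] "big"
# `ifelse()` checks every element and returns a vector
ifelse(c(1, 4, 7) > 3, "big", "small")
#> [1] "small" "big" "big"
```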
### Exercise 19\.4\.2
Write a greeting function that says “good morning”, “good afternoon”, or “good evening”, depending on the time of day. (Hint: use a time argument that defaults to `lubridate::now()`. That will make it easier to test your function.)
```
greet <- function(time = lubridate::now()) {
hr <- lubridate::hour(time)
# I don't know what to do about times after midnight,
# are they evening or morning?
if (hr < 12) {
print("good morning")
} else if (hr < 17) {
print("good afternoon")
} else {
print("good evening")
}
}
greet()
#> [1] "good morning"
greet(ymd_h("2017-01-08:05"))
#> [1] "good morning"
greet(ymd_h("2017-01-08:13"))
#> [1] "good afternoon"
greet(ymd_h("2017-01-08:20"))
#> [1] "good evening"
```
### Exercise 19\.4\.3
Implement a `fizzbuzz()` function. It takes a single number as input. If the
number is divisible by three, it returns “fizz”. If it’s divisible by five it
returns “buzz”. If it’s divisible by three and five, it returns “fizzbuzz”.
Otherwise, it returns the number. Make sure you first write working code before
you create the function.
We can use modulo operator, `%%`, to check divisibility.
The expression `x %% y` returns 0 if `y` divides `x`.
```
1:10 %% 3 == 0
#> [1] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE
```
A more concise way of checking for divisibility is to note that the not operator will return `TRUE` for 0, and `FALSE` for all non\-zero numbers.
Thus, `!(x %% y)`, will check whether `y` divides `x`.
```
!(1:10 %% 3)
#> [1] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE
```
There are four cases to consider:
1. If `x` is divisible by 3 and 5, then return “fizzbuzz”.
2. If `x` is divisible by 3 and not 5, then return “fizz”.
3. If `x` is divisible by 5 and not 3, then return “buzz”.
4. Otherwise, which is the case in which `x` is not divisible by either 3 or 5, return `x`.
The key to answering this question correctly, is to first check whether `x`
is divisible by both 3 and 5\.
If the function checks whether `x` is divisible by 3 or 5 before considering the case that the number is divisible by both, then the function will never return `"fizzbuzz"`.
```
fizzbuzz <- function(x) {
# these two lines check that x is a valid input
stopifnot(length(x) == 1)
stopifnot(is.numeric(x))
if (!(x %% 3) && !(x %% 5)) {
"fizzbuzz"
} else if (!(x %% 3)) {
"fizz"
} else if (!(x %% 5)) {
"buzz"
} else {
# ensure that the function returns a character vector
as.character(x)
}
}
fizzbuzz(6)
#> [1] "fizz"
fizzbuzz(10)
#> [1] "buzz"
fizzbuzz(15)
#> [1] "fizzbuzz"
fizzbuzz(2)
#> [1] "2"
```
This function can be slightly improved by nesting the conditions so that
we only check whether `x` is divisible by 3 once.
```
fizzbuzz2 <- function(x) {
# these two lines check that x is a valid input
stopifnot(length(x) == 1)
stopifnot(is.numeric(x))
if (!(x %% 3)) {
if (!(x %% 5)) {
"fizzbuzz"
} else {
"fizz"
}
} else if (!(x %% 5)) {
"buzz"
} else {
# ensure that the function returns a character vector
as.character(x)
}
}
fizzbuzz2(6)
#> [1] "fizz"
fizzbuzz2(10)
#> [1] "buzz"
fizzbuzz2(15)
#> [1] "fizzbuzz"
fizzbuzz2(2)
#> [1] "2"
```
Instead of only accepting one number as an input, we could write a FizzBuzz function that works on a vector.
The `case_when()` function vectorizes multiple if\-else conditions, so is perfect for this task.
In fact, fizz\-buzz is used in the examples in the documentation of `case_when()`.
```
fizzbuzz_vec <- function(x) {
case_when(!(x %% 3) & !(x %% 5) ~ "fizzbuzz",
!(x %% 3) ~ "fizz",
!(x %% 5) ~ "buzz",
TRUE ~ as.character(x)
)
}
fizzbuzz_vec(c(0, 1, 2, 3, 5, 9, 10, 12, 15))
#> [1] "fizzbuzz" "1" "2" "fizz" "buzz" "fizz" "buzz"
#> [8] "fizz" "fizzbuzz"
```
The following function is an example of a vectorized FizzBuzz function that
only uses bracket assignment.
```
fizzbuzz_vec2 <- function(x) {
  y <- as.character(x)
  # put the individual cases first - any elements divisible by both 3 and 5
  # will be overwritten with fizzbuzz later
  y[!(x %% 3)] <- "fizz"
  y[!(x %% 5)] <- "buzz"
  y[!(x %% 3) & !(x %% 5)] <- "fizzbuzz"
  y
}
fizzbuzz_vec2(c(0, 1, 2, 3, 5, 9, 10, 12, 15))
#> [1] "fizzbuzz" "1" "2" "fizz" "buzz" "fizz" "buzz"
#> [8] "fizz" "fizzbuzz"
```
This question, called the [“Fizz Buzz”](https://en.wikipedia.org/wiki/Fizz_buzz) question, is a common programming interview question used for screening out programmers who can’t program.\[^fizzbuzz]
### Exercise 19\.4\.4
How could you use `cut()` to simplify this set of nested if\-else statements?
```
if (temp <= 0) {
"freezing"
} else if (temp <= 10) {
"cold"
} else if (temp <= 20) {
"cool"
} else if (temp <= 30) {
"warm"
} else {
"hot"
}
```
How would you change the call to `cut()` if I’d used `<` instead of `<=`? What is the other chief advantage of cut() for this problem? (Hint: what happens if you have many values in temp?)
```
temp <- seq(-10, 50, by = 5)
cut(temp, c(-Inf, 0, 10, 20, 30, Inf),
right = TRUE,
labels = c("freezing", "cold", "cool", "warm", "hot")
)
#> [1] freezing freezing freezing cold cold cool cool warm
#> [9] warm hot hot hot hot
#> Levels: freezing cold cool warm hot
```
To have intervals open on the left (using `<`), I change the argument to `right = FALSE`,
```
temp <- seq(-10, 50, by = 5)
cut(temp, c(-Inf, 0, 10, 20, 30, Inf),
right = FALSE,
labels = c("freezing", "cold", "cool", "warm", "hot")
)
#> [1] freezing freezing cold cold cool cool warm warm
#> [9] hot hot hot hot hot
#> Levels: freezing cold cool warm hot
```
Two advantages of using `cut()` are that it works on vectors, whereas `if` only works on a single value (as demonstrated above),
and that to change the comparisons I only needed to change the `right` argument, whereas I would have had to change four operators in the `if` expression.
### Exercise 19\.4\.5
What happens if you use `switch()` with numeric values?
In `switch(n, ...)`, if `n` is numeric, it will return the `n`th argument from `...`.
This means that if `n = 1`, `switch()` will return the first argument in `...`,
if `n = 2`, the second, and so on.
For example,
```
switch(1, "apple", "banana", "cantaloupe")
#> [1] "apple"
switch(2, "apple", "banana", "cantaloupe")
#> [1] "banana"
```
If you use a non\-integer number for the first argument of `switch()`, it will
ignore the non\-integer part.
```
switch(1.2, "apple", "banana", "cantaloupe")
#> [1] "apple"
switch(2.8, "apple", "banana", "cantaloupe")
#> [1] "banana"
```
Note that `switch()` truncates the numeric value; it does not round to the nearest integer.
While it is possible to use non-integer numbers with `switch()`, you should avoid it.
### Exercise 19\.4\.6
What does this `switch()` call do? What happens if `x` is `"e"`?
```
x <- "e"
switch(x,
a = ,
b = "ab",
c = ,
d = "cd"
)
```
Experiment, then carefully read the documentation.
First, let’s write a function `switcheroo()`, and see what it returns for different values of `x`.
```
switcheroo <- function(x) {
switch(x,
a = ,
b = "ab",
c = ,
d = "cd"
)
}
switcheroo("a")
#> [1] "ab"
switcheroo("b")
#> [1] "ab"
switcheroo("c")
#> [1] "cd"
switcheroo("d")
#> [1] "cd"
switcheroo("e")
switcheroo("f")
```
The `switcheroo()` function returns `"ab"` for `x = "a"` or `x = "b"`,
`"cd"` for `x = "c"` or `x = "d"`, and
`NULL` for `x = "e"` or any other value of `x` not in `c("a", "b", "c", "d")`.
How does this work?
The `switch()` function returns the first non\-missing argument value for the first name it matches.
Thus, when `switch()` encounters an argument with a missing value, like `a = ,`,
it will return the value of the next argument with a non-missing value, which in this case is `b = "ab"`.
If `object` in `switch(object=)` is not equal to the names of any of its arguments,
`switch()` will return either the last (unnamed) argument if one is present or `NULL`.
Since `"e"` is not one of the named arguments in `switch()` (`a`, `b`, `c`, `d`),
and no other unnamed default value is present, this code will return `NULL`.
The code in the question is a shorter way of writing the following.
```
switch(x,
a = "ab",
b = "ab",
c = "cd",
d = "cd",
NULL # value to return if x not matched
)
```
19\.5 Function arguments
------------------------
### Exercise 19\.5\.1
What does `commas(letters, collapse = "-")` do? Why?
The `commas()` function in the chapter is defined as
```
commas <- function(...) {
str_c(..., collapse = ", ")
}
```
When `commas()` is given a collapse argument, it throws an error.
```
commas(letters, collapse = "-")
#> Error in str_c(..., collapse = ", "): formal argument "collapse" matched by multiple actual arguments
```
This is because when the argument `collapse` is given to `commas()`, it
is passed to `str_c()` as part of `...`.
In other words, the previous code is equivalent to
```
str_c(letters, collapse = "-", collapse = ", ")
```
However, it is an error to give the same named argument to a function twice.
One way to allow the user to override the separator in `commas()` is to add a `collapse`
argument to the function.
```
commas <- function(..., collapse = ", ") {
str_c(..., collapse = collapse)
}
```
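A quick check of the new argument (a minimal sketch):
```
commas(letters[1:5])
#> [1] "a, b, c, d, e"
commas(letters[1:5], collapse = "-")
#> [1] "a-b-c-d-e"
```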
### Exercise 19\.5\.2
It’d be nice if you could supply multiple characters to the `pad` argument, e.g. `rule("Title", pad = "-+")`.
Why doesn’t this currently work? How could you fix it?
This is the definition of the rule function from the [chapter](https://r4ds.had.co.nz/functions.html).
```
rule <- function(..., pad = "-") {
title <- paste0(...)
width <- getOption("width") - nchar(title) - 5
cat(title, " ", str_dup(pad, width), "\n", sep = "")
}
```
```
rule("Important output")
#> Important output -----------------------------------------------------------
```
You can currently supply multiple characters to the `pad` argument, but the output will not be the desired width.
The `rule()` function duplicates `pad` a number of times
equal to the desired width minus the length of the title and five extra characters.
This implicitly assumes that `pad` is only one character. If `pad` were two characters,
the output would be almost twice as long.
```
rule("Valuable output", pad = "-+")
#> Valuable output -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```
One way to handle this is to use `str_trunc()` to truncate the string,
and `str_length()` to calculate the number of characters in the `pad` argument.
```
rule <- function(..., pad = "-") {
title <- paste0(...)
width <- getOption("width") - nchar(title) - 5
padding <- str_dup(
pad,
ceiling(width / str_length(title))
) %>%
str_trunc(width)
cat(title, " ", padding, "\n", sep = "")
}
rule("Important output")
#> Important output ----
rule("Valuable output", pad = "-+")
#> Valuable output -+-+-+-+
rule("Vital output", pad = "-+-")
#> Vital output -+--+--+--+--+--+-
```
Note that in the second output, there is only a single `-` at the end.
### Exercise 19\.5\.3
What does the `trim` argument to `mean()` do? When might you use it?
The `trim` argument trims a fraction of observations from each end of the sorted vector (that is, the smallest and largest values) before calculating the mean.
This is useful for calculating a measure of central tendency that is robust to outliers.
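For example (a minimal sketch with a made-up vector containing one large outlier):
```
x <- c(1:10, 1000)
mean(x)
#> [1] 95.9
# drop the lowest 10% and highest 10% of values before averaging
mean(x, trim = 0.1)
#> [1] 6
```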
### Exercise 19\.5\.4
The default value for the `method` argument to `cor()` is `c("pearson", "kendall", "spearman")`.
What does that mean? What value is used by default?
It means that the `method` argument can take one of those three values.
The first value, `"pearson"`, is used by default.
19\.6 Return values
-------------------
No exercises
19\.7 Environment
-----------------
No exercises
\[^fizzbuzz]: Read [Why I’m still using “Fizz Buzz” to hire Software\-Developers](https://hackernoon.com/why-im-still-using-fizz-buzz-to-hire-software-developers-7e31a89a4bbf) for more discussion on the use of the Fizz\-Buzz question in programming interviews.
19\.1 Introduction
------------------
```
library("tidyverse")
library("lubridate")
```
19\.2 When should you write a function?
---------------------------------------
### Exercise 19\.2\.1
Why is `TRUE` not a parameter to `rescale01()`?
What would happen if `x` contained a single missing value, and `na.rm` was `FALSE`?
The code for `rescale01()` is reproduced below.
```
rescale01 <- function(x) {
rng <- range(x, na.rm = TRUE, finite = TRUE)
(x - rng[1]) / (rng[2] - rng[1])
}
```
If `x` contains a single missing value and `na.rm = FALSE`, then this function stills return a non\-missing value.
```
rescale01_alt <- function(x, na.rm = FALSE) {
rng <- range(x, na.rm = na.rm, finite = TRUE)
(x - rng[1]) / (rng[2] - rng[1])
}
rescale01_alt(c(NA, 1:5), na.rm = FALSE)
#> [1] NA 0.00 0.25 0.50 0.75 1.00
rescale01_alt(c(NA, 1:5), na.rm = TRUE)
#> [1] NA 0.00 0.25 0.50 0.75 1.00
```
The option `finite = TRUE` to `range()` will drop all non\-finite elements, and `NA` is a non\-finite element.
However, if both `finite = FALSE` and `na.rm = FALSE`, then this function will return a vector of `NA` values.
Recall, arithmetic operations involving `NA` values return `NA`.
```
rescale01_alt2 <- function(x, na.rm = FALSE, finite = FALSE) {
rng <- range(x, na.rm = na.rm, finite = finite)
(x - rng[1]) / (rng[2] - rng[1])
}
rescale01_alt2(c(NA, 1:5), na.rm = FALSE, finite = FALSE)
#> [1] NA NA NA NA NA NA
```
### Exercise 19\.2\.2
In the second variant of `rescale01()`, infinite values are left unchanged.
Rewrite `rescale01()` so that `-Inf` is mapped to `0`, and `Inf` is mapped to `1`.
```
rescale01 <- function(x) {
rng <- range(x, na.rm = TRUE, finite = TRUE)
y <- (x - rng[1]) / (rng[2] - rng[1])
y[y == -Inf] <- 0
y[y == Inf] <- 1
y
}
rescale01(c(Inf, -Inf, 0:5, NA))
#> [1] 1.0 0.0 0.0 0.2 0.4 0.6 0.8 1.0 NA
```
### Exercise 19\.2\.3
Practice turning the following code snippets into functions. Think about what each function does. What would you call it? How many arguments does it need? Can you rewrite it to be more expressive or less duplicative?
```
mean(is.na(x))
x / sum(x, na.rm = TRUE)
sd(x, na.rm = TRUE) / mean(x, na.rm = TRUE)
```
This code calculates the proportion of `NA` values in a vector.
```
mean(is.na(x))
```
I will write it as a function named `prop_na()` that takes a single argument `x`,
and returns a single numeric value between 0 and 1\.
```
prop_na <- function(x) {
mean(is.na(x))
}
prop_na(c(0, 1, 2, NA, 4, NA))
#> [1] 0.333
```
This code standardizes a vector so that it sums to one.
```
x / sum(x, na.rm = TRUE)
```
I’ll write a function named `sum_to_one()`, which is a function of a single argument, `x`, the vector to standardize, and an optional argument `na.rm`.
The optional argument, `na.rm`, makes the function more expressive, since it can
handle `NA` values in two ways (returning `NA` or dropping them).
Additionally, this makes `sum_to_one()` consistent with `sum()`, `mean()`, and many
other R functions which have a `na.rm` argument.
While the example code had `na.rm = TRUE`, I set `na.rm = FALSE` by default
in order to make the function behave the same as the built\-in functions like `sum()` and `mean()` in its handling of missing values.
```
sum_to_one <- function(x, na.rm = FALSE) {
x / sum(x, na.rm = na.rm)
}
```
```
# no missing values
sum_to_one(1:5)
#> [1] 0.0667 0.1333 0.2000 0.2667 0.3333
# if any missing, return all missing
sum_to_one(c(1:5, NA))
#> [1] NA NA NA NA NA NA
# drop missing values when standardizing
sum_to_one(c(1:5, NA), na.rm = TRUE)
#> [1] 0.0667 0.1333 0.2000 0.2667 0.3333 NA
```
This code calculates the [coefficient of variation](https://en.wikipedia.org/wiki/Coefficient_of_variation) (assuming that `x` can only take non\-negative values), which is the standard deviation divided by the mean.
```
sd(x, na.rm = TRUE) / mean(x, na.rm = TRUE)
```
I’ll write a function named `coef_variation()`, which takes a single argument `x`,
and an optional `na.rm` argument.
```
coef_variation <- function(x, na.rm = FALSE) {
sd(x, na.rm = na.rm) / mean(x, na.rm = na.rm)
}
coef_variation(1:5)
#> [1] 0.527
coef_variation(c(1:5, NA))
#> [1] NA
coef_variation(c(1:5, NA), na.rm = TRUE)
#> [1] 0.527
```
### Exercise 19\.2\.4
Follow [https://nicercode.github.io/intro/writing\-functions.html](https://nicercode.github.io/intro/writing-functions.html) to write your own functions to compute the variance and skew of a numeric vector.
**Note** The math in [https://nicercode.github.io/intro/writing\-functions.html](https://nicercode.github.io/intro/writing-functions.html) seems not to be rendering.
The sample variance is defined as,
\\\[
\\mathrm{Var}(x) \= \\frac{1}{n \- 1} \\sum\_{i\=1}^n (x\_i \- \\bar{x}) ^2 \\text{,}
\\]
where \\(\\bar{x} \= (\\sum\_i^n x\_i) / n\\) is the sample mean.
The corresponding function is:
```
variance <- function(x, na.rm = TRUE) {
n <- length(x)
m <- mean(x, na.rm = TRUE)
sq_err <- (x - m)^2
sum(sq_err) / (n - 1)
}
```
```
var(1:10)
#> [1] 9.17
variance(1:10)
#> [1] 9.17
```
There are multiple definitions for [skewness](https://en.wikipedia.org/wiki/Skewness), but we will use the following one,
\\\[
\\mathrm{Skew}(x) \= \\frac{\\frac{1}{n \- 2}\\left(\\sum\_{i\=1}^{n}(x\_{i} \- \\bar x)^3\\right)}{\\mathrm{Var}(x)^{3 / 2}} \\text{.}
\\]
The corresponding function is:
```
skewness <- function(x, na.rm = FALSE) {
n <- length(x)
m <- mean(x, na.rm = na.rm)
v <- var(x, na.rm = na.rm)
(sum((x - m) ^ 3) / (n - 2)) / v ^ (3 / 2)
}
```
```
skewness(c(1, 2, 5, 100))
#> [1] 1.49
```
### Exercise 19\.2\.5
Write `both_na()`, a function that takes two vectors of the same length and returns the number of positions that have an `NA` in both vectors.
```
both_na <- function(x, y) {
sum(is.na(x) & is.na(y))
}
both_na(
c(NA, NA, 1, 2),
c(NA, 1, NA, 2)
)
#> [1] 1
both_na(
c(NA, NA, 1, 2, NA, NA, 1),
c(NA, 1, NA, 2, NA, NA, 1)
)
#> [1] 3
```
### Exercise 19\.2\.6
What do the following functions do? Why are they useful even though they are so short?
```
is_directory <- function(x) file.info(x)$isdir
is_readable <- function(x) file.access(x, 4) == 0
```
The function `is_directory()` checks whether the path in `x` is a directory.
The function `is_readable()` checks whether the path in `x` is readable, meaning that the file exists and the user has permission to open it.
These functions are useful even though they are short because their names make it much clearer what the code is doing.
### Exercise 19\.2\.7
Read the complete lyrics to \`\`Little Bunny Foo Foo’’. There’s a lot of duplication in this song. Extend the initial piping example to recreate the complete song, and use functions to reduce the duplication.
The lyrics of one of the [most common versions](https://en.wikipedia.org/wiki/Little_Bunny_Foo_Foo) of this song are
> Little bunny Foo Foo
>
> Hopping through the forest
>
> Scooping up the field mice
>
> And bopping them on the head
>
>
> Down came the Good Fairy, and she said
>
> "Little bunny Foo Foo
>
> I don’t want to see you
> Scooping up the field mice
>
>
> And bopping them on the head.
>
> I’ll give you three chances,
>
> And if you don’t stop, I’ll turn you into a GOON!"
>
> And the next day…
The verses repeat with one chance fewer each time.
When there are no chances left, the Good Fairy says
> “I gave you three chances, and you didn’t stop; so….”
>
> POOF. She turned him into a GOON!
>
> And the moral of this story is: *hare today, goon tomorrow.*
Here’s one way of writing this
```
threat <- function(chances) {
give_chances(
from = Good_Fairy,
to = foo_foo,
number = chances,
condition = "Don't behave",
consequence = turn_into_goon
)
}
lyric <- function() {
foo_foo %>%
hop(through = forest) %>%
scoop(up = field_mouse) %>%
bop(on = head)
down_came(Good_Fairy)
said(
Good_Fairy,
c(
"Little bunny Foo Foo",
"I don't want to see you",
"Scooping up the field mice",
"And bopping them on the head."
)
)
}
lyric()
threat(3)
lyric()
threat(2)
lyric()
threat(1)
lyric()
turn_into_goon(Good_Fairy, foo_foo)
```
### Exercise 19\.2\.1
Why is `TRUE` not a parameter to `rescale01()`?
What would happen if `x` contained a single missing value, and `na.rm` was `FALSE`?
The code for `rescale01()` is reproduced below.
```
rescale01 <- function(x) {
rng <- range(x, na.rm = TRUE, finite = TRUE)
(x - rng[1]) / (rng[2] - rng[1])
}
```
If `x` contains a single missing value and `na.rm = FALSE`, then this function stills return a non\-missing value.
```
rescale01_alt <- function(x, na.rm = FALSE) {
rng <- range(x, na.rm = na.rm, finite = TRUE)
(x - rng[1]) / (rng[2] - rng[1])
}
rescale01_alt(c(NA, 1:5), na.rm = FALSE)
#> [1] NA 0.00 0.25 0.50 0.75 1.00
rescale01_alt(c(NA, 1:5), na.rm = TRUE)
#> [1] NA 0.00 0.25 0.50 0.75 1.00
```
The option `finite = TRUE` to `range()` will drop all non\-finite elements, and `NA` is a non\-finite element.
However, if both `finite = FALSE` and `na.rm = FALSE`, then this function will return a vector of `NA` values.
Recall, arithmetic operations involving `NA` values return `NA`.
```
rescale01_alt2 <- function(x, na.rm = FALSE, finite = FALSE) {
rng <- range(x, na.rm = na.rm, finite = finite)
(x - rng[1]) / (rng[2] - rng[1])
}
rescale01_alt2(c(NA, 1:5), na.rm = FALSE, finite = FALSE)
#> [1] NA NA NA NA NA NA
```
### Exercise 19\.2\.2
In the second variant of `rescale01()`, infinite values are left unchanged.
Rewrite `rescale01()` so that `-Inf` is mapped to `0`, and `Inf` is mapped to `1`.
```
rescale01 <- function(x) {
rng <- range(x, na.rm = TRUE, finite = TRUE)
y <- (x - rng[1]) / (rng[2] - rng[1])
y[y == -Inf] <- 0
y[y == Inf] <- 1
y
}
rescale01(c(Inf, -Inf, 0:5, NA))
#> [1] 1.0 0.0 0.0 0.2 0.4 0.6 0.8 1.0 NA
```
### Exercise 19\.2\.3
Practice turning the following code snippets into functions. Think about what each function does. What would you call it? How many arguments does it need? Can you rewrite it to be more expressive or less duplicative?
```
mean(is.na(x))
x / sum(x, na.rm = TRUE)
sd(x, na.rm = TRUE) / mean(x, na.rm = TRUE)
```
This code calculates the proportion of `NA` values in a vector.
```
mean(is.na(x))
```
I will write it as a function named `prop_na()` that takes a single argument `x`,
and returns a single numeric value between 0 and 1\.
```
prop_na <- function(x) {
mean(is.na(x))
}
prop_na(c(0, 1, 2, NA, 4, NA))
#> [1] 0.333
```
This code standardizes a vector so that it sums to one.
```
x / sum(x, na.rm = TRUE)
```
I’ll write a function named `sum_to_one()`, which is a function of a single argument, `x`, the vector to standardize, and an optional argument `na.rm`.
The optional argument, `na.rm`, makes the function more expressive, since it can
handle `NA` values in two ways (returning `NA` or dropping them).
Additionally, this makes `sum_to_one()` consistent with `sum()`, `mean()`, and many
other R functions which have a `na.rm` argument.
While the example code had `na.rm = TRUE`, I set `na.rm = FALSE` by default
in order to make the function behave the same as the built\-in functions like `sum()` and `mean()` in its handling of missing values.
```
sum_to_one <- function(x, na.rm = FALSE) {
x / sum(x, na.rm = na.rm)
}
```
```
# no missing values
sum_to_one(1:5)
#> [1] 0.0667 0.1333 0.2000 0.2667 0.3333
# if any missing, return all missing
sum_to_one(c(1:5, NA))
#> [1] NA NA NA NA NA NA
# drop missing values when standardizing
sum_to_one(c(1:5, NA), na.rm = TRUE)
#> [1] 0.0667 0.1333 0.2000 0.2667 0.3333 NA
```
This code calculates the [coefficient of variation](https://en.wikipedia.org/wiki/Coefficient_of_variation) (assuming that `x` can only take non\-negative values), which is the standard deviation divided by the mean.
```
sd(x, na.rm = TRUE) / mean(x, na.rm = TRUE)
```
I’ll write a function named `coef_variation()`, which takes a single argument `x`,
and an optional `na.rm` argument.
```
coef_variation <- function(x, na.rm = FALSE) {
sd(x, na.rm = na.rm) / mean(x, na.rm = na.rm)
}
coef_variation(1:5)
#> [1] 0.527
coef_variation(c(1:5, NA))
#> [1] NA
coef_variation(c(1:5, NA), na.rm = TRUE)
#> [1] 0.527
```
### Exercise 19\.2\.4
Follow [https://nicercode.github.io/intro/writing\-functions.html](https://nicercode.github.io/intro/writing-functions.html) to write your own functions to compute the variance and skew of a numeric vector.
**Note** The math in [https://nicercode.github.io/intro/writing\-functions.html](https://nicercode.github.io/intro/writing-functions.html) seems not to be rendering.
The sample variance is defined as,
\\\[
\\mathrm{Var}(x) \= \\frac{1}{n \- 1} \\sum\_{i\=1}^n (x\_i \- \\bar{x}) ^2 \\text{,}
\\]
where \\(\\bar{x} \= (\\sum\_i^n x\_i) / n\\) is the sample mean.
The corresponding function is:
```
variance <- function(x, na.rm = TRUE) {
n <- length(x)
m <- mean(x, na.rm = TRUE)
sq_err <- (x - m)^2
sum(sq_err) / (n - 1)
}
```
```
var(1:10)
#> [1] 9.17
variance(1:10)
#> [1] 9.17
```
There are multiple definitions for [skewness](https://en.wikipedia.org/wiki/Skewness), but we will use the following one,
\\\[
\\mathrm{Skew}(x) \= \\frac{\\frac{1}{n \- 2}\\left(\\sum\_{i\=1}^{n}(x\_{i} \- \\bar x)^3\\right)}{\\mathrm{Var}(x)^{3 / 2}} \\text{.}
\\]
The corresponding function is:
```
skewness <- function(x, na.rm = FALSE) {
n <- length(x)
m <- mean(x, na.rm = na.rm)
v <- var(x, na.rm = na.rm)
(sum((x - m) ^ 3) / (n - 2)) / v ^ (3 / 2)
}
```
```
skewness(c(1, 2, 5, 100))
#> [1] 1.49
```
### Exercise 19\.2\.5
Write `both_na()`, a function that takes two vectors of the same length and returns the number of positions that have an `NA` in both vectors.
```
both_na <- function(x, y) {
sum(is.na(x) & is.na(y))
}
both_na(
c(NA, NA, 1, 2),
c(NA, 1, NA, 2)
)
#> [1] 1
both_na(
c(NA, NA, 1, 2, NA, NA, 1),
c(NA, 1, NA, 2, NA, NA, 1)
)
#> [1] 3
```
### Exercise 19\.2\.6
What do the following functions do? Why are they useful even though they are so short?
```
is_directory <- function(x) file.info(x)$isdir
is_readable <- function(x) file.access(x, 4) == 0
```
The function `is_directory()` checks whether the path in `x` is a directory.
The function `is_readable()` checks whether the path in `x` is readable, meaning that the file exists and the user has permission to open it.
These functions are useful even though they are short because their names make it much clearer what the code is doing.
### Exercise 19\.2\.7
Read the complete lyrics to \`\`Little Bunny Foo Foo’’. There’s a lot of duplication in this song. Extend the initial piping example to recreate the complete song, and use functions to reduce the duplication.
The lyrics of one of the [most common versions](https://en.wikipedia.org/wiki/Little_Bunny_Foo_Foo) of this song are
> Little bunny Foo Foo
>
> Hopping through the forest
>
> Scooping up the field mice
>
> And bopping them on the head
>
>
> Down came the Good Fairy, and she said
>
> "Little bunny Foo Foo
>
> I don’t want to see you
> Scooping up the field mice
>
>
> And bopping them on the head.
>
> I’ll give you three chances,
>
> And if you don’t stop, I’ll turn you into a GOON!"
>
> And the next day…
The verses repeat with one chance fewer each time.
When there are no chances left, the Good Fairy says
> “I gave you three chances, and you didn’t stop; so….”
>
> POOF. She turned him into a GOON!
>
> And the moral of this story is: *hare today, goon tomorrow.*
Here’s one way of writing this
```
threat <- function(chances) {
give_chances(
from = Good_Fairy,
to = foo_foo,
number = chances,
condition = "Don't behave",
consequence = turn_into_goon
)
}
lyric <- function() {
foo_foo %>%
hop(through = forest) %>%
scoop(up = field_mouse) %>%
bop(on = head)
down_came(Good_Fairy)
said(
Good_Fairy,
c(
"Little bunny Foo Foo",
"I don't want to see you",
"Scooping up the field mice",
"And bopping them on the head."
)
)
}
lyric()
threat(3)
lyric()
threat(2)
lyric()
threat(1)
lyric()
turn_into_goon(Good_Fairy, foo_foo)
```
19\.3 Functions are for humans and computers
--------------------------------------------
### Exercise 19\.3\.1
Read the source code for each of the following three functions, puzzle out what they do, and then brainstorm better names.
```
f1 <- function(string, prefix) {
substr(string, 1, nchar(prefix)) == prefix
}
f2 <- function(x) {
if (length(x) <= 1) return(NULL)
x[-length(x)]
}
f3 <- function(x, y) {
rep(y, length.out = length(x))
}
```
The function `f1` tests whether each element of the character vector `nchar`
starts with the string `prefix`. For example,
```
f1(c("abc", "abcde", "ad"), "ab")
#> [1] TRUE TRUE FALSE
```
A better name for `f1` is `has_prefix()`
The function `f2` drops the last element of the vector `x`.
```
f2(1:3)
#> [1] 1 2
f2(1:2)
#> [1] 1
f2(1)
#> NULL
```
A better name for `f2` is `drop_last()`.
The function `f3` repeats `y` once for each element of `x`.
```
f3(1:3, 4)
#> [1] 4 4 4
```
Good names would include `recycle()` (R’s name for this behavior) or `expand()`.
### Exercise 19\.3\.2
Take a function that you’ve written recently and spend 5 minutes brainstorming a better name for it and its arguments.
Answer left to the reader.
### Exercise 19\.3\.3
Compare and contrast `rnorm()` and `MASS::mvrnorm()`. How could you make them more consistent?
`rnorm()` samples from the univariate normal distribution, while `MASS::mvrnorm`
samples from the multivariate normal distribution. The main arguments in
`rnorm()` are `n`, `mean`, `sd`. The main arguments is `MASS::mvrnorm` are `n`,
`mu`, `Sigma`. To be consistent they should have the same names. However, this
is difficult. In general, it is better to be consistent with more widely used
functions, e.g. `rmvnorm()` should follow the conventions of `rnorm()`. However,
while `mean` is correct in the multivariate case, `sd` does not make sense in
the multivariate case. However, both functions are internally consistent.
It would not be good practice to have `mu` and `sd` as arguments or `mean` and `Sigma` as arguments.
### Exercise 19\.3\.4
Make a case for why `norm_r()`, `norm_d()` etc would be better than `rnorm()`, `dnorm()`. Make a case for the opposite.
If named `norm_r()` and `norm_d()`, the naming convention groups functions by their
distribution.
If named `rnorm()`, and `dnorm()`, the naming convention groups functions
by the action they perform.
* `r*` functions always sample from distributions: for example,
`rnorm()`, `rbinom()`, `runif()`, and `rexp()`.
* `d*` functions calculate the probability density or mass of a distribution:
For example, `dnorm()`, `dbinom()`, `dunif()`, and `dexp()`.
R distributions use this latter naming convention.
### Exercise 19\.3\.1
Read the source code for each of the following three functions, puzzle out what they do, and then brainstorm better names.
```
f1 <- function(string, prefix) {
substr(string, 1, nchar(prefix)) == prefix
}
f2 <- function(x) {
if (length(x) <= 1) return(NULL)
x[-length(x)]
}
f3 <- function(x, y) {
rep(y, length.out = length(x))
}
```
The function `f1` tests whether each element of the character vector `nchar`
starts with the string `prefix`. For example,
```
f1(c("abc", "abcde", "ad"), "ab")
#> [1] TRUE TRUE FALSE
```
A better name for `f1` is `has_prefix()`
The function `f2` drops the last element of the vector `x`.
```
f2(1:3)
#> [1] 1 2
f2(1:2)
#> [1] 1
f2(1)
#> NULL
```
A better name for `f2` is `drop_last()`.
The function `f3` repeats `y` once for each element of `x`.
```
f3(1:3, 4)
#> [1] 4 4 4
```
Good names would include `recycle()` (R’s name for this behavior) or `expand()`.
### Exercise 19\.3\.2
Take a function that you’ve written recently and spend 5 minutes brainstorming a better name for it and its arguments.
Answer left to the reader.
### Exercise 19\.3\.3
Compare and contrast `rnorm()` and `MASS::mvrnorm()`. How could you make them more consistent?
`rnorm()` samples from the univariate normal distribution, while `MASS::mvrnorm`
samples from the multivariate normal distribution. The main arguments in
`rnorm()` are `n`, `mean`, `sd`. The main arguments is `MASS::mvrnorm` are `n`,
`mu`, `Sigma`. To be consistent they should have the same names. However, this
is difficult. In general, it is better to be consistent with more widely used
functions, e.g. `rmvnorm()` should follow the conventions of `rnorm()`. However,
while `mean` is correct in the multivariate case, `sd` does not make sense in
the multivariate case. However, both functions are internally consistent.
It would not be good practice to have `mu` and `sd` as arguments or `mean` and `Sigma` as arguments.
### Exercise 19\.3\.4
Make a case for why `norm_r()`, `norm_d()` etc would be better than `rnorm()`, `dnorm()`. Make a case for the opposite.
If named `norm_r()` and `norm_d()`, the naming convention groups functions by their
distribution.
If named `rnorm()`, and `dnorm()`, the naming convention groups functions
by the action they perform.
* `r*` functions always sample from distributions: for example,
`rnorm()`, `rbinom()`, `runif()`, and `rexp()`.
* `d*` functions calculate the probability density or mass of a distribution:
For example, `dnorm()`, `dbinom()`, `dunif()`, and `dexp()`.
R distributions use this latter naming convention.
19\.4 Conditional execution
---------------------------
### Exercise 19\.4\.1
What’s the difference between `if` and `ifelse()`? \> Carefully read the help and construct three examples that illustrate the key differences.
The keyword `if` tests a single condition, while `ifelse()` tests each element.
### Exercise 19\.4\.2
Write a greeting function that says “good morning”, “good afternoon”, or “good evening”, depending on the time of day. (Hint: use a time argument that defaults to `lubridate::now()`. That will make it easier to test your function.)
```
greet <- function(time = lubridate::now()) {
hr <- lubridate::hour(time)
# I don't know what to do about times after midnight,
# are they evening or morning?
if (hr < 12) {
print("good morning")
} else if (hr < 17) {
print("good afternoon")
} else {
print("good evening")
}
}
greet()
#> [1] "good morning"
greet(ymd_h("2017-01-08:05"))
#> [1] "good morning"
greet(ymd_h("2017-01-08:13"))
#> [1] "good afternoon"
greet(ymd_h("2017-01-08:20"))
#> [1] "good evening"
```
### Exercise 19\.4\.3
Implement a `fizzbuzz()` function. It takes a single number as input. If the
number is divisible by three, it returns “fizz”. If it’s divisible by five it
returns “buzz”. If it’s divisible by three and five, it returns “fizzbuzz”.
Otherwise, it returns the number. Make sure you first write working code before
you create the function.
We can use modulo operator, `%%`, to check divisibility.
The expression `x %% y` returns 0 if `y` divides `x`.
```
1:10 %% 3 == 0
#> [1] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE
```
A more concise way of checking for divisibility is to note that the not operator will return `TRUE` for 0, and `FALSE` for all non\-zero numbers.
Thus, `!(x %% y)`, will check whether `y` divides `x`.
```
!(1:10 %% 3)
#> [1] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE
```
There are four cases to consider:
1. If `x` is divisible by 3 and 5, then return “fizzbuzz”.
2. If `x` is divisible by 3 and not 5, then return “fizz”.
3. If `x` is divisible by 5 and not 3, then return “buzz”.
4. Otherwise, which is the case in which `x` is not divisible by either 3 or 5, return `x`.
The key to answering this question correctly, is to first check whether `x`
is divisible by both 3 and 5\.
If the function checks whether `x` is divisible by 3 or 5 before considering the case that the number is divisible by both, then the function will never return `"fizzbuzz"`.
```
fizzbuzz <- function(x) {
# these two lines check that x is a valid input
stopifnot(length(x) == 1)
stopifnot(is.numeric(x))
if (!(x %% 3) && !(x %% 5)) {
"fizzbuzz"
} else if (!(x %% 3)) {
"fizz"
} else if (!(x %% 5)) {
"buzz"
} else {
# ensure that the function returns a character vector
as.character(x)
}
}
fizzbuzz(6)
#> [1] "fizz"
fizzbuzz(10)
#> [1] "buzz"
fizzbuzz(15)
#> [1] "fizzbuzz"
fizzbuzz(2)
#> [1] "2"
```
This function can be slightly improved by combining the first two lines conditions so
we only check whether `x` is divisible by 3 once.
```
fizzbuzz2 <- function(x) {
# these two lines check that x is a valid input
stopifnot(length(x) == 1)
stopifnot(is.numeric(x))
if (!(x %% 3)) {
if (!(x %% 5)) {
"fizzbuzz"
} else {
"fizz"
}
} else if (!(x %% 5)) {
"buzz"
} else {
# ensure that the function returns a character vector
as.character(x)
}
}
fizzbuzz2(6)
#> [1] "fizz"
fizzbuzz2(10)
#> [1] "buzz"
fizzbuzz2(15)
#> [1] "fizzbuzz"
fizzbuzz2(2)
#> [1] "2"
```
Instead of only accepting one number as an input, we could a FizzBuzz function that works on a vector.
The `case_when()` function vectorizes multiple if\-else conditions, so is perfect for this task.
In fact, fizz\-buzz is used in the examples in the documentation of `case_when()`.
```
fizzbuzz_vec <- function(x) {
case_when(!(x %% 3) & !(x %% 5) ~ "fizzbuzz",
!(x %% 3) ~ "fizz",
!(x %% 5) ~ "buzz",
TRUE ~ as.character(x)
)
}
fizzbuzz_vec(c(0, 1, 2, 3, 5, 9, 10, 12, 15))
#> [1] "fizzbuzz" "1" "2" "fizz" "buzz" "fizz" "buzz"
#> [8] "fizz" "fizzbuzz"
```
The following function is an example of a vectorized FizzBuzz function that
only uses bracket assignment.
```
fizzbuzz_vec2 <- function(x) {
y <- as.character(x)
# put the individual cases first - any elements divisible by both 3 and 5
# will be overwritten with fizzbuzz later
y[!(x %% 3)] <- "fizz"
y[!(x %% 3)] <- "buzz"
y[!(x %% 3) & !(x %% 5)] <- "fizzbuzz"
y
}
fizzbuzz_vec2(c(0, 1, 2, 3, 5, 9, 10, 12, 15))
#> [1] "fizzbuzz" "1" "2" "buzz" "5" "buzz" "10"
#> [8] "buzz" "fizzbuzz"
```
This question, called the [“Fizz Buzz”](https://en.wikipedia.org/wiki/Fizz_buzz) question, is a common programming interview question used for screening out programmers who can’t program.\[^fizzbuzz]
### Exercise 19\.4\.4
How could you use `cut()` to simplify this set of nested if\-else statements?
```
if (temp <= 0) {
"freezing"
} else if (temp <= 10) {
"cold"
} else if (temp <= 20) {
"cool"
} else if (temp <= 30) {
"warm"
} else {
"hot"
}
```
How would you change the call to `cut()` if I’d used `<` instead of `<=`? What is the other chief advantage of cut() for this problem? (Hint: what happens if you have many values in temp?)
```
temp <- seq(-10, 50, by = 5)
cut(temp, c(-Inf, 0, 10, 20, 30, Inf),
right = TRUE,
labels = c("freezing", "cold", "cool", "warm", "hot")
)
#> [1] freezing freezing freezing cold cold cool cool warm
#> [9] warm hot hot hot hot
#> Levels: freezing cold cool warm hot
```
To have intervals open on the left (using `<`), I change the argument to `right = FALSE`,
```
temp <- seq(-10, 50, by = 5)
cut(temp, c(-Inf, 0, 10, 20, 30, Inf),
right = FALSE,
labels = c("freezing", "cold", "cool", "warm", "hot")
)
#> [1] freezing freezing cold cold cool cool warm warm
#> [9] hot hot hot hot hot
#> Levels: freezing cold cool warm hot
```
Two advantages of using `cut` is that it works on vectors, whereas `if` only works on a single value (I already demonstrated this above),
and that to change comparisons I only needed to change the argument to `right`, but I would have had to change four operators in the `if` expression.
### Exercise 19\.4\.5
What happens if you use `switch()` with numeric values?
In `switch(n, ...)`, if `n` is numeric, it will return the `n`th argument from `...`.
This means that if `n = 1`, `switch()` will return the first argument in `...`,
if `n = 2`, the second, and so on.
For example,
```
switch(1, "apple", "banana", "cantaloupe")
#> [1] "apple"
switch(2, "apple", "banana", "cantaloupe")
#> [1] "banana"
```
If you use a non\-integer number for the first argument of `switch()`, it will
ignore the non\-integer part.
```
switch(1.2, "apple", "banana", "cantaloupe")
#> [1] "apple"
switch(2.8, "apple", "banana", "cantaloupe")
#> [1] "banana"
```
Note that `switch()` truncates the numeric value, it does not round to the nearest integer.
While it is possible to use non\-integer numbers with `switch()`, you should avoid it
### Exercise 19\.4\.6
What does this `switch()` call do? What happens if `x` is `"e"`?
```
x <- "e"
switch(x,
a = ,
b = "ab",
c = ,
d = "cd"
)
```
Experiment, then carefully read the documentation.
First, let’s write a function `switcheroo()`, and see what it returns for different values of `x`.
```
switcheroo <- function(x) {
switch(x,
a = ,
b = "ab",
c = ,
d = "cd"
)
}
switcheroo("a")
#> [1] "ab"
switcheroo("b")
#> [1] "ab"
switcheroo("c")
#> [1] "cd"
switcheroo("d")
#> [1] "cd"
switcheroo("e")
switcheroo("f")
```
The `switcheroo()` function returns `"ab"` for `x = "a"` or `x = "b"`,
`"cd"` for `x = "c"` or `x = "d"`, and
`NULL` for `x = "e"` or any other value of `x` not in `c("a", "b", "c", "d")`.
How does this work?
The `switch()` function returns the first non\-missing argument value for the first name it matches.
Thus, when `switch()` encounters an argument with a missing value, like `a = ,`,
it will return the value of the next argument with a non-missing value, which in this case is `b = "ab"`.
If `object` in `switch(object=)` is not equal to the names of any of its arguments,
`switch()` will return either the last (unnamed) argument if one is present or `NULL`.
Since `"e"` is not one of the named arguments in `switch()` (`a`, `b`, `c`, `d`),
and no other unnamed default value is present, this code will return `NULL`.
The code in the question is a shorter way of writing the following.
```
switch(x,
a = "ab",
b = "ab",
c = "cd",
d = "cd",
NULL # value to return if x not matched
)
```
19\.5 Function arguments
------------------------
### Exercise 19\.5\.1
What does `commas(letters, collapse = "-")` do? Why?
The `commas()` function in the chapter is defined as
```
commas <- function(...) {
str_c(..., collapse = ", ")
}
```
When `commas()` is given a collapse argument, it throws an error.
```
commas(letters, collapse = "-")
#> Error in str_c(..., collapse = ", "): formal argument "collapse" matched by multiple actual arguments
```
This is because when the argument `collapse` is given to `commas()`, it
is passed to `str_c()` as part of `...`.
In other words, the previous code is equivalent to
```
str_c(letters, collapse = "-", collapse = ", ")
```
However, it is an error to give the same named argument to a function twice.
One way to allow the user to override the separator in `commas()` is to add a `collapse`
argument to the function.
```
commas <- function(..., collapse = ", ") {
str_c(..., collapse = collapse)
}
```
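A quick check of the revised function (assuming stringr is attached, as elsewhere in this chapter) shows that the default separator still works and can now be overridden:
```
commas(letters[1:5])
#> [1] "a, b, c, d, e"
commas(letters[1:5], collapse = "-")
#> [1] "a-b-c-d-e"
```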
### Exercise 19\.5\.2
It’d be nice if you could supply multiple characters to the `pad` argument, e.g. `rule("Title", pad = "-+")`.
Why doesn’t this currently work? How could you fix it?
This is the definition of the rule function from the [chapter](https://r4ds.had.co.nz/functions.html).
```
rule <- function(..., pad = "-") {
title <- paste0(...)
width <- getOption("width") - nchar(title) - 5
cat(title, " ", str_dup(pad, width), "\n", sep = "")
}
```
```
rule("Important output")
#> Important output -----------------------------------------------------------
```
You can currently supply multiple characters to the `pad` argument, but the output will not be the desired width.
The `rule()` function duplicates `pad` a number of times
equal to the desired width minus the length of the title and five extra characters.
This implicitly assumes that `pad` is only one character. If `pad` were two characters,
the padding would be almost twice as long as intended.
```
rule("Valuable output", pad = "-+")
#> Valuable output -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```
One way to handle this is to use `str_trunc()` to truncate the string,
and `str_length()` to calculate the number of characters in the `pad` argument.
```
rule <- function(..., pad = "-") {
title <- paste0(...)
width <- getOption("width") - nchar(title) - 5
padding <- str_dup(
pad,
ceiling(width / str_length(title))
) %>%
str_trunc(width)
cat(title, " ", padding, "\n", sep = "")
}
rule("Important output")
#> Important output ----
rule("Valuable output", pad = "-+")
#> Valuable output -+-+-+-+
rule("Vital output", pad = "-+-")
#> Vital output -+--+--+--+--+--+-
```
Note that in the last output, the padding ends with a single `-`.
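As written, the repetition count above divides the width by the length of the title rather than the length of `pad`, so the padding still comes up short of the full line width (as the outputs show). A variant that divides by `str_length(pad)` instead would pad the full width; this is only a sketch of an alternative, and it assumes stringr is attached and that `pad` is a non-empty string:
```
rule2 <- function(..., pad = "-") {
  title <- paste0(...)
  width <- getOption("width") - nchar(title) - 5
  # repeat the pad often enough to cover the width, then cut to exactly `width` characters
  reps <- ceiling(width / str_length(pad))
  padding <- str_sub(str_dup(pad, reps), 1, width)
  cat(title, " ", padding, "\n", sep = "")
}
```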
### Exercise 19\.5\.3
What does the `trim` argument to `mean()` do? When might you use it?
The `trim` argument drops a fraction of observations from each end of the sorted vector (that is, the smallest and largest values) before calculating the mean.
This is useful for calculating a measure of central tendency that is robust to outliers.
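For example (a quick sketch), a single large outlier pulls the ordinary mean far away from the bulk of the data, while a trimmed mean stays close to it:
```
x <- c(1, 2, 3, 4, 5, 1000)
mean(x)
#> [1] 169.1667
mean(x, trim = 0.2) # drop the lowest and highest 20% of observations
#> [1] 3.5
```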
### Exercise 19\.5\.4
The default value for the `method` argument to `cor()` is `c("pearson", "kendall", "spearman")`.
What does that mean? What value is used by default?
It means that the `method` argument can take one of those three values.
The first value, `"pearson"`, is used by default.
19\.6 Return values
-------------------
No exercises
19\.7 Environment
-----------------
No exercises
[^fizzbuzz]: Read [Why I’m still using “Fizz Buzz” to hire Software\-Developers](https://hackernoon.com/why-im-still-using-fizz-buzz-to-hire-software-developers-7e31a89a4bbf) for more discussion on the use of the Fizz\-Buzz question in programming interviews.
| Data Science |
jrnold.github.io | https://jrnold.github.io/r4ds-exercise-solutions/vectors.html |
20 Vectors
==========
20\.1 Introduction
------------------
```
library("tidyverse")
```
20\.2 Vector basics
-------------------
No exercises
20\.3 Important types of atomic vector
--------------------------------------
### Exercise 20\.3\.1
Describe the difference between `is.finite(x)` and `!is.infinite(x)`.
To find out, try the functions on a numeric vector that includes at least one number and the four special values (`NA`, `NaN`, `Inf`, `-Inf`).
```
x <- c(0, NA, NaN, Inf, -Inf)
is.finite(x)
#> [1] TRUE FALSE FALSE FALSE FALSE
!is.infinite(x)
#> [1] TRUE TRUE TRUE FALSE FALSE
```
The `is.finite()` function considers non\-missing numeric values to be finite,
and missing (`NA`), not a number (`NaN`), and positive (`Inf`) and negative infinity (`-Inf`) to not be finite. The `is.infinite()` function behaves slightly differently.
It considers `Inf` and `-Inf` to be infinite, and everything else, including non\-missing numbers, `NA`, and `NaN` to not be infinite. See Table [20\.1](vectors.html#tab:finite-infinite).
Table 20\.1: Results of `is.finite()` and `is.infinite()` for
numeric and special values.
| | `is.finite()` | `is.infinite()` |
| --- | --- | --- |
| `1` | `TRUE` | `FALSE` |
| `NA` | `FALSE` | `FALSE` |
| `NaN` | `FALSE` | `FALSE` |
| `Inf` | `FALSE` | `TRUE` |
### Exercise 20\.3\.2
Read the source code for `dplyr::near()` (Hint: to see the source code, drop the `()`). How does it work?
The source for `dplyr::near` is:
```
dplyr::near
#> function (x, y, tol = .Machine$double.eps^0.5)
#> {
#> abs(x - y) < tol
#> }
#> <bytecode: 0x5d8e8a8>
#> <environment: namespace:dplyr>
```
Instead of checking for exact equality, it checks that two numbers are within a certain tolerance, `tol`.
By default the tolerance is set to the square root of `.Machine$double.eps`, the machine epsilon, which is the smallest positive floating point number `x` such that `1 + x != 1`.
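For instance (a quick check), exact comparison of the results of floating point arithmetic can fail where `near()` succeeds:
```
(0.1 + 0.2) == 0.3
#> [1] FALSE
dplyr::near(0.1 + 0.2, 0.3)
#> [1] TRUE
```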
### Exercise 20\.3\.3
A logical vector can take 3 possible values. How many possible values can an integer vector take? How many possible values can a double take? Use Google to do some research.
For integer vectors, R uses a 32\-bit representation. This means that it can represent up to \\(2^{32}\\) different values with integers. One of these values is set aside for `NA_integer_`.
From the help for `integer`.
> Note that current implementations of R use 32\-bit integers for integer vectors,
> so the range of representable integers is restricted to about \+/\-2\*10^9: doubles
> can hold much larger integers exactly.
The range of integer values that R can represent in an integer vector is \\(\\pm 2^{31} \- 1\\),
```
.Machine$integer.max
#> [1] 2147483647
```
The maximum integer is \\(2^{31} \- 1\\) rather than \\(2^{32} \- 1\\) because one bit is used to
represent the sign (\\(\+\\), \\(\-\\)) and one value is used to represent `NA_integer_`.
If you try to represent an integer greater than that value, R will return `NA` values.
```
.Machine$integer.max + 1L
#> Warning in .Machine$integer.max + 1L: NAs produced by integer overflow
#> [1] NA
```
However, you can represent that value (exactly) with a numeric vector at the cost of
about two times the memory.
```
as.numeric(.Machine$integer.max) + 1
#> [1] 2.15e+09
```
The same is true for the negative of the integer max.
```
-.Machine$integer.max - 1L
#> Warning in -.Machine$integer.max - 1L: NAs produced by integer overflow
#> [1] NA
```
For double vectors, R uses a 64\-bit representation. This means that they can represent at most
\\(2^{64}\\) distinct values. However, some of those values are allocated to special values
such as `-Inf`, `Inf`, `NA_real_`, and `NaN`. From the help for `double`:
> All R platforms are required to work with values conforming to the IEC 60559
> (also known as IEEE 754\) standard. This basically works with a precision of
> 53 bits, and represents to that precision a range of absolute values from
> about 2e\-308 to 2e\+308\. It also has special values `NaN` (many of them),
> plus and minus infinity
> and plus and minus zero (although R acts as if these are the same). There are
> also denormal(ized) (or subnormal) numbers with absolute values above or below
> the range given above but represented to less precision.
The details of floating point representation and arithmetic are complicated, beyond
the scope of this question, and better discussed in the references provided below.
The double can represent numbers in the range of about \\(\\pm 2 \\times 10^{308}\\), which is
provided in
```
.Machine$double.xmax
#> [1] 1.8e+308
```
Many other details for the implementation of the double vectors are given in the `.Machine` variable (and its documentation).
These include the base (radix) of doubles,
```
.Machine$double.base
#> [1] 2
```
the number of bits used for the significand (mantissa),
```
.Machine$double.digits
#> [1] 53
```
the number of bits used in the exponent,
```
.Machine$double.exponent
#> [1] 11
```
and the machine epsilon values, the smallest positive numbers `x` such that `1 + x != 1` and `1 - x != 1` respectively,
```
.Machine$double.eps
#> [1] 2.22e-16
.Machine$double.neg.eps
#> [1] 1.11e-16
```
* Computerphile, “[Floating Point Numbers](https://www.youtube.com/watch?v=PZRI1IfStY0)”
* <https://en.wikipedia.org/wiki/IEEE_754>
* [https://en.wikipedia.org/wiki/Double\-precision\_floating\-point\_format](https://en.wikipedia.org/wiki/Double-precision_floating-point_format)
* “[Floating Point Numbers: Why floating\-point numbers are needed](https://floating-point-gui.de/formats/fp/)”
* Fabien Sanglard, “[Floating Point Numbers: Visually Explained](http://fabiensanglard.net/floating_point_visually_explained/)”
* James Howard, “[How Many Floating Point Numbers are There?](https://jameshoward.us/2015/09/09/how-many-floating-point-numbers-are-there/)”
* GeeksforGeeks, “[Floating Point Representation Basics](https://www.geeksforgeeks.org/floating-point-representation-basics/)”
* Chris Hecker, “[Lets Go to the (Floating) Point](http://chrishecker.com/images/f/fb/Gdmfp.pdf)”, *Game Developer*
* Chua Hock\-Chuan, [A Tutorial on Data Representation Integers, Floating\-point Numbers, and Characters](http://www.ntu.edu.sg/home/ehchua/programming/java/datarepresentation.html)
* John D. Cook, “[Anatomy of a floating point number](https://www.johndcook.com/blog/2009/04/06/anatomy-of-a-floating-point-number/)”
* John D. Cook, “[Five Tips for Floating Point Programming](https://www.codeproject.com/Articles/29637/Five-Tips-for-Floating-Point-Programming)”
### Exercise 20\.3\.4
Brainstorm at least four functions that allow you to convert a double to an integer. How do they differ? Be precise.
The functions that convert a double to an integer differ in how they deal with the fractional part of the double.
There are a variety of rules that could be used to do this.
* Round down, towards \\(\-\\infty\\). This is also called taking the `floor` of a number. This is the method the `floor()` function uses.
* Round up, towards \\(\+\\infty\\). This is also called taking the `ceiling`. This is the method the `ceiling()` function uses.
* Round towards zero. This is the method that the `trunc()` and `as.integer()` functions use.
* Round away from zero.
* Round to the nearest integer. There are several different methods for handling ties, defined as numbers with a fractional part of 0\.5\.
+ Round half down, towards \\(\-\\infty\\).
+ Round half up, towards \\(\+\\infty\\).
+ Round half towards zero
+ Round half away from zero
+ Round half towards the even integer. This is the method that the `round()` function uses.
+ Round half towards the odd integer.
```
function(x, method) {
if (method == "round down") {
floor(x)
} else if (method == "round up") {
ceiling(x)
} else if (method == "round towards zero") {
trunc(x)
} else if (method == "round away from zero") {
sign(x) * ceiling(abs(x))
} else if (method == "nearest, round half up") {
floor(x + 0.5)
} else if (method == "nearest, round half down") {
ceiling(x - 0.5)
} else if (method == "nearest, round half towards zero") {
sign(x) * ceiling(abs(x) - 0.5)
} else if (method == "nearest, round half away from zero") {
sign(x) * floor(abs(x) + 0.5)
} else if (method == "nearest, round half to even") {
round(x, digits = 0)
} else if (method == "nearest, round half to odd") {
case_when(
# smaller integer is odd - round half down
floor(x) %% 2 ~ ceiling(x - 0.5),
# otherwise, round half up
TRUE ~ floor(x + 0.5)
)
} else if (method == "nearest, round half randomly") {
round_half_up <- sample(c(TRUE, FALSE), length(x), replace = TRUE)
y <- x
y[round_half_up] <- ceiling(x[round_half_up] - 0.5)
y[!round_half_up] <- floor(x[!round_half_up] + 0.5)
y
}
}
#> function(x, method) {
#> if (method == "round down") {
#> floor(x)
#> } else if (method == "round up") {
#> ceiling(x)
#> } else if (method == "round towards zero") {
#> trunc(x)
#> } else if (method == "round away from zero") {
#> sign(x) * ceiling(abs(x))
#> } else if (method == "nearest, round half up") {
#> floor(x + 0.5)
#> } else if (method == "nearest, round half down") {
#> ceiling(x - 0.5)
#> } else if (method == "nearest, round half towards zero") {
#> sign(x) * ceiling(abs(x) - 0.5)
#> } else if (method == "nearest, round half away from zero") {
#> sign(x) * floor(abs(x) + 0.5)
#> } else if (method == "nearest, round half to even") {
#> round(x, digits = 0)
#> } else if (method == "nearest, round half to odd") {
#> case_when(
#> # smaller integer is odd - round half down
#> floor(x) %% 2 ~ ceiling(x - 0.5),
#> # otherwise, round half up
#> TRUE ~ floor(x + 0.5)
#> )
#> } else if (method == "nearest, round half randomly") {
#> round_half_up <- sample(c(TRUE, FALSE), length(x), replace = TRUE)
#> y <- x
#> y[round_half_up] <- ceiling(x[round_half_up] - 0.5)
#> y[!round_half_up] <- floor(x[!round_half_up] + 0.5)
#> y
#> }
#> }
#> <environment: 0x2b114b8>
```
```
tibble(
x = c(1.8, 1.5, 1.2, 0.8, 0.5, 0.2,
-0.2, -0.5, -0.8, -1.2, -1.5, -1.8),
`Round down` = floor(x),
`Round up` = ceiling(x),
`Round towards zero` = trunc(x),
`Nearest, round half to even` = round(x)
)
#> # A tibble: 12 x 5
#> x `Round down` `Round up` `Round towards zero` `Nearest, round half to ev…
#> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1.8 1 2 1 2
#> 2 1.5 1 2 1 2
#> 3 1.2 1 2 1 1
#> 4 0.8 0 1 0 1
#> 5 0.5 0 1 0 0
#> 6 0.2 0 1 0 0
#> # … with 6 more rows
```
See the Wikipedia articles, [Rounding](https://en.wikipedia.org/wiki/Rounding) and [IEEE floating point](https://en.wikipedia.org/wiki/IEEE_floating_point) for more discussion of these rounding rules.
For rounding, R and many programming languages use the IEEE standard. This method is called “round to nearest, ties to even.”[8](#fn8)
This rule rounds ties, numbers with a remainder of 0\.5, to the nearest even number.
In this rule, half the ties are rounded up, and half are rounded down.
The following function, `round2()`, manually implements the “round half up” method, so we can compare it with R’s “round half to even” behavior.
```
round2 <- function(x) {
  q <- x %/% 1
  r <- x %% 1
  # ties (a fractional part of exactly 0.5) are always rounded up
  q + (r >= 0.5)
}
x <- c(-12.5, -11.5, 11.5, 12.5)
round(x)
#> [1] -12 -12 12 12
round2(x)
#> [1] -12 -11 12 13
```
```
This rounding method may be different from the one you learned in grade school, which, at least for me, was to always round ties upwards or, alternatively, away from zero.
This rule is called the “round half up” rule.
The problem with the “round half up” rule is that it is biased upwards for positive numbers.
Rounding to nearest with ties towards even is not.
Consider this sequence which sums to zero.
```
x <- seq(-100.5, 100.5, by = 1)
x
#> [1] -100.5 -99.5 -98.5 -97.5 -96.5 -95.5 -94.5 -93.5 -92.5 -91.5
#> [11] -90.5 -89.5 -88.5 -87.5 -86.5 -85.5 -84.5 -83.5 -82.5 -81.5
#> [21] -80.5 -79.5 -78.5 -77.5 -76.5 -75.5 -74.5 -73.5 -72.5 -71.5
#> [31] -70.5 -69.5 -68.5 -67.5 -66.5 -65.5 -64.5 -63.5 -62.5 -61.5
#> [41] -60.5 -59.5 -58.5 -57.5 -56.5 -55.5 -54.5 -53.5 -52.5 -51.5
#> [51] -50.5 -49.5 -48.5 -47.5 -46.5 -45.5 -44.5 -43.5 -42.5 -41.5
#> [61] -40.5 -39.5 -38.5 -37.5 -36.5 -35.5 -34.5 -33.5 -32.5 -31.5
#> [71] -30.5 -29.5 -28.5 -27.5 -26.5 -25.5 -24.5 -23.5 -22.5 -21.5
#> [81] -20.5 -19.5 -18.5 -17.5 -16.5 -15.5 -14.5 -13.5 -12.5 -11.5
#> [91] -10.5 -9.5 -8.5 -7.5 -6.5 -5.5 -4.5 -3.5 -2.5 -1.5
#> [101] -0.5 0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5
#> [111] 9.5 10.5 11.5 12.5 13.5 14.5 15.5 16.5 17.5 18.5
#> [121] 19.5 20.5 21.5 22.5 23.5 24.5 25.5 26.5 27.5 28.5
#> [131] 29.5 30.5 31.5 32.5 33.5 34.5 35.5 36.5 37.5 38.5
#> [141] 39.5 40.5 41.5 42.5 43.5 44.5 45.5 46.5 47.5 48.5
#> [151] 49.5 50.5 51.5 52.5 53.5 54.5 55.5 56.5 57.5 58.5
#> [161] 59.5 60.5 61.5 62.5 63.5 64.5 65.5 66.5 67.5 68.5
#> [171] 69.5 70.5 71.5 72.5 73.5 74.5 75.5 76.5 77.5 78.5
#> [181] 79.5 80.5 81.5 82.5 83.5 84.5 85.5 86.5 87.5 88.5
#> [191] 89.5 90.5 91.5 92.5 93.5 94.5 95.5 96.5 97.5 98.5
#> [201] 99.5 100.5
sum(x)
#> [1] 0
```
A nice property of a rounding rule is that it preserves that sum.
Using “ties towards even”, the sum of the rounded values is still zero.
However, “ties towards \\(\+\\infty\\)” produces a non\-zero sum.
```
sum(x)
#> [1] 0
sum(round(x))
#> [1] 0
sum(round2(x))
#> [1] 101
```
Rounding rules can have real world impacts.
One notable example was that in 1983, the Vancouver stock exchange adjusted its index from 524\.811 to 1098\.892 to correct for accumulated error due to rounding to three decimal points (see [Vancouver Stock Exchange](https://en.wikipedia.org/wiki/Vancouver_Stock_Exchange)).
This [site](https://web.ma.utexas.edu/users/arbogast/misc/disasters.html) lists several more examples of the dangers of rounding rules.
### Exercise 20\.3\.5
What functions from the readr package allow you to turn a string into logical, integer, and double vector?
The function `parse_logical()` parses logical values, which can appear
as variations of TRUE/FALSE or 1/0\.
```
parse_logical(c("TRUE", "FALSE", "1", "0", "true", "t", "NA"))
#> [1] TRUE FALSE TRUE FALSE TRUE TRUE NA
```
The function `parse_integer()` parses integer values.
```
parse_integer(c("1235", "0134", "NA"))
#> [1] 1235 134 NA
```
However, if there are any non\-numeric characters in the string, including
currency symbols, commas, and decimal points, `parse_integer()` will fail to parse it, with a warning and `NA` values.
```
parse_integer(c("1000", "$1,000", "10.00"))
#> Warning: 2 parsing failures.
#> row col expected actual
#> 2 -- an integer $1,000
#> 3 -- no trailing characters .00
#> [1] 1000 NA NA
#> attr(,"problems")
#> # A tibble: 2 x 4
#> row col expected actual
#> <int> <int> <chr> <chr>
#> 1 2 NA an integer $1,000
#> 2 3 NA no trailing characters .00
```
The function `parse_number()` parses numeric values.
Unlike `parse_integer()`, the function `parse_number()` is more forgiving about the format of the numbers.
It ignores all non\-numeric characters before or after the first number, as with `"$1,000.00"` in the example.
Within the number, `parse_number()` will only ignore grouping marks such as `","`.
This allows it to easily parse numeric fields that include currency symbols and comma separators in number strings without any intervention by the user.
```
parse_number(c("1.0", "3.5", "$1,000.00", "NA", "ABCD12234.90", "1234ABC", "A123B", "A1B2C"))
#> [1] 1.0 3.5 1000.0 NA 12234.9 1234.0 123.0 1.0
```
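For doubles specifically, readr also provides `parse_double()`, which is stricter than `parse_number()`: the string must contain only a number (apart from missing-value strings), otherwise it reports a parsing failure. A quick sketch:
```
parse_double(c("1.0", "-2.5", "NA"))
#> [1] 1.0 -2.5 NA
```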
20\.4 Using atomic vectors
--------------------------
### Exercise 20\.4\.1
What does `mean(is.na(x))` tell you about a vector `x`? What about `sum(!is.finite(x))`?
I’ll use the numeric vector `x` to compare the behaviors of `is.na()`
and `is.finite()`. It contains numbers (`-1`, `0`, `1`) as
well as all the special numeric values: infinity (`Inf`),
missing (`NA`), and not\-a\-number (`NaN`).
```
x <- c(-Inf, -1, 0, 1, Inf, NA, NaN)
```
The expression `mean(is.na(x))` calculates the proportion of missing (`NA`) and not\-a\-number (`NaN`) values in a vector:
```
mean(is.na(x))
#> [1] 0.286
```
The result of 0\.286 is equal to `2 / 7` as expected.
There are seven elements in the vector `x`, and two elements that are either `NA` or `NaN`.
The expression `sum(!is.finite(x))` calculates the number of elements in the vector that are equal to missing (`NA`), not\-a\-number (`NaN`), or infinity (`Inf`).
```
sum(!is.finite(x))
#> [1] 4
```
Review the [Numeric](https://r4ds.had.co.nz/vectors.html#numeric) section for the differences between `is.na()` and `is.finite()`.
### Exercise 20\.4\.2
Carefully read the documentation of `is.vector()`. What does it actually test for? Why does `is.atomic()` not agree with the definition of atomic vectors above?
The function `is.vector()` only checks whether the object has no attributes other than names. Thus a `list` is a vector:
```
is.vector(list(a = 1, b = 2))
#> [1] TRUE
```
But any object that has an attribute (other than names) is not:
```
x <- 1:10
attr(x, "something") <- TRUE
is.vector(x)
#> [1] FALSE
```
The idea behind this is that object\-oriented classes will include attributes, including, but not limited to, `"class"`.
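For example, a factor carries `"levels"` and `"class"` attributes, so `is.vector()` returns `FALSE` for it:
```
is.vector(factor(c("a", "b")))
#> [1] FALSE
```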
The function `is.atomic()` explicitly checks whether an object is one of the atomic types (“logical”, “integer”, “numeric”, “complex”, “character”, and “raw”) or NULL.
```
is.atomic(1:10)
#> [1] TRUE
is.atomic(list(a = 1))
#> [1] FALSE
```
The function `is.atomic()` will consider objects to be atomic even if they have extra attributes.
```
is.atomic(x)
#> [1] TRUE
```
### Exercise 20\.4\.3
Compare and contrast `setNames()` with `purrr::set_names()`.
The function `setNames()` takes two arguments, a vector to be named and a vector
of names to apply to its elements.
```
setNames(1:4, c("a", "b", "c", "d"))
#> a b c d
#> 1 2 3 4
```
You can use the values of the vector as its names if the `nm` argument is used.
```
setNames(nm = c("a", "b", "c", "d"))
#> a b c d
#> "a" "b" "c" "d"
```
The function `set_names()` has more ways to set the names than `setNames()`.
The names can be specified in the same manner as `setNames()`.
```
purrr::set_names(1:4, c("a", "b", "c", "d"))
#> a b c d
#> 1 2 3 4
```
The names can also be specified as unnamed arguments,
```
purrr::set_names(1:4, "a", "b", "c", "d")
#> a b c d
#> 1 2 3 4
```
The function `set_names()` will name an object with itself if no `nm` argument is
provided, whereas `setNames()` requires the `nm` argument to be given explicitly.
```
purrr::set_names(c("a", "b", "c", "d"))
#> a b c d
#> "a" "b" "c" "d"
```
The biggest difference between `set_names()` and `setNames()` is that `set_names()` allows for using a function or formula to transform the existing names.
```
purrr::set_names(c(a = 1, b = 2, c = 3), toupper)
#> A B C
#> 1 2 3
purrr::set_names(c(a = 1, b = 2, c = 3), ~toupper(.))
#> A B C
#> 1 2 3
```
The `set_names()` function also checks that the names argument is the
same length as the vector that is being named, and will raise an error if it is not.
```
purrr::set_names(1:4, c("a", "b"))
#> Error: `nm` must be `NULL` or a character vector the same length as `x`
```
The `setNames()` function will allow the names to be shorter than the vector being
named, and will set the missing names to `NA`.
```
setNames(1:4, c("a", "b"))
#> a b <NA> <NA>
#> 1 2 3 4
```
### Exercise 20\.4\.4
Create functions that take a vector as input and returns:
1. The last value. Should you use `[` or `[[`?
2. The elements at even numbered positions.
3. Every element except the last value.
4. Only even numbers (and no missing values).
The answers to the parts follow.
1. This function finds the last value in a vector.
```
last_value <- function(x) {
# check for case with no length
if (length(x)) {
x[[length(x)]]
} else {
x
}
}
last_value(numeric())
#> numeric(0)
last_value(1)
#> [1] 1
last_value(1:10)
#> [1] 10
```
The function uses `[[` in order to extract a single element.
2. This function returns the elements at even number positions.
```
even_indices <- function(x) {
if (length(x)) {
x[seq_along(x) %% 2 == 0]
} else {
x
}
}
even_indices(numeric())
#> numeric(0)
even_indices(1)
#> numeric(0)
even_indices(1:10)
#> [1] 2 4 6 8 10
# test using case to ensure that values not indices
# are being returned
even_indices(letters)
#> [1] "b" "d" "f" "h" "j" "l" "n" "p" "r" "t" "v" "x" "z"
```
3. This function returns a vector with every element except the last.
```
not_last <- function(x) {
n <- length(x)
if (n) {
x[-n]
} else {
# n == 0
x
}
}
not_last(1:3)
#> [1] 1 2
```
We should also confirm that the function works with some edge cases, like
a vector with one element, and a vector with zero elements.
```
not_last(1)
#> numeric(0)
not_last(numeric())
#> numeric(0)
```
In both these cases, `not_last()` correctly returns an empty vector.
4. This function returns the elements of a vector that are even numbers.
```
even_numbers <- function(x) {
x[x %% 2 == 0]
}
even_numbers(-4:4)
#> [1] -4 -2 0 2 4
```
We could improve this function by handling the special numeric values:
`NA`, `NaN`, `Inf`. However, first we need to decide how to handle them.
Neither `NaN` nor `Inf` is a number, and so they are neither even nor odd.
In other words, since `NaN` and `Inf` aren’t *even* numbers, they aren’t *even numbers*.
What about `NA`? Well, we don’t know. `NA` is a number, but we don’t know its
value. The missing number could be even or odd, but we don’t know.
Another reason to return `NA` is that it is consistent with the behavior of other R functions,
which generally return `NA` values instead of dropping them.
```
even_numbers2 <- function(x) {
x[!is.infinite(x) & !is.nan(x) & (x %% 2 == 0)]
}
even_numbers2(c(0:4, NA, NaN, Inf, -Inf))
#> [1] 0 2 4 NA
```
### Exercise 20\.4\.5
Why is `x[-which(x > 0)]` not the same as `x[x <= 0]`?
These expressions differ in the way that they treat missing values.
Let’s test how they work by creating a vector with positive and negative integers,
and special values (`NA`, `NaN`, and `Inf`). These values should encompass
all relevant types of values that these expressions would encounter.
```
x <- c(-1:1, Inf, -Inf, NaN, NA)
x[-which(x > 0)]
#> [1] -1 0 -Inf NaN NA
x[x <= 0]
#> [1] -1 0 -Inf NA NA
```
The expressions `x[-which(x > 0)]` and `x[x <= 0]` return the same values except
for a `NaN` instead of an `NA` in the expression using which.
So what is going on here? Let’s work through each part of these expressions and
see where the difference occurs.
Let’s start with the expression `x[x <= 0]`.
```
x <= 0
#> [1] TRUE TRUE FALSE FALSE TRUE NA NA
```
Recall how the logical relational operators (`<`, `<=`, `==`, `!=`, `>`, `>=`) treat `NA` values.
Any relational operation that includes a `NA` returns an `NA`.
Is `NA <= 0`? We don’t know because it depends on the unknown value of `NA`, so the answer is `NA`.
This same argument applies to `NaN`. Asking whether `NaN <= 0` does not make sense because you can’t compare a number to “Not a Number”.
Now recall how indexing treats `NA` values.
Indexing can take a logical vector as an input.
When the indexing vector is logical, the output vector includes those elements where the logical vector is `TRUE`, and excludes those elements where the logical vector is `FALSE`.
Logical vectors can also include `NA` values, and it is not clear how they should be treated.
Well, since the value is `NA`, it could be `TRUE` or `FALSE`, we don’t know.
Keeping elements with `NA` would treat the `NA` as `TRUE`, and dropping them would treat the `NA` as `FALSE`.
The way R decides to handle the `NA` values so that they are treated differently than `TRUE` or `FALSE` values is to include elements where the indexing vector is `NA`, but set their values to `NA`.
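A small standalone example (a quick sketch) makes this concrete: indexing with a logical vector that contains an `NA` keeps the corresponding position but fills it with `NA`.
```
c(10, 20, 30)[c(TRUE, NA, FALSE)]
#> [1] 10 NA
```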
Now consider the expression `x[-which(x > 0)]`.
As before, to understand this expression we’ll work from the inside out.
Consider `x > 0`.
```
x > 0
#> [1] FALSE FALSE TRUE TRUE FALSE NA NA
```
As with `x <= 0`, it returns `NA` for comparisons involving `NA` and `NaN`.
What does `which()` do?
```
which(x > 0)
#> [1] 3 4
```
The `which()` function returns the indexes for which the argument is `TRUE`.
This means that it is not including the indexes for which the argument is `FALSE` or `NA`.
Now consider the full expression `x[-which(x > 0)]`?
The `which()` function returned a vector of integers.
How does indexing treat negative integers?
```
x[1:2]
#> [1] -1 0
x[-(1:2)]
#> [1] 1 Inf -Inf NaN NA
```
If indexing gets a vector of positive integers, it will select those indexes;
if it receives a vector of negative integers, it will drop those indexes.
Thus, `x[-which(x > 0)]` ends up dropping the elements for which `x > 0` is true,
and keeps all the other elements and their original values, including `NA` and `NaN`.
There’s one other special case that we should consider. How do these two expressions work with
an empty vector?
```
x <- numeric()
x[x <= 0]
#> numeric(0)
x[-which(x > 0)]
#> numeric(0)
```
Thankfully, they both handle empty vectors the same.
This exercise is a reminder to always test your code. Even though these two expressions looked
equivalent, they are not in practice. And when you do test code, consider both
how it works on typical values as well as special values and edge cases, like a
vector with `NA` or `NaN` or `Inf` values, or an empty vector. These are where
unexpected behavior is most likely to occur.
### Exercise 20\.4\.6
What happens when you subset with a positive integer that’s bigger than the length of the vector? What happens when you subset with a name that doesn’t exist?
Let’s consider the named vector,
```
x <- c(a = 10, b = 20)
```
If we subset it by an integer larger than its length, it returns a vector of missing values.
```
x[3]
#> <NA>
#> NA
```
This also applies to ranges.
```
x[3:5]
#> <NA> <NA> <NA>
#> NA NA NA
```
If some indexes are larger than the length of the vector, those elements are `NA`.
```
x[1:5]
#> a b <NA> <NA> <NA>
#> 10 20 NA NA NA
```
Likewise, when `[` is provided names not in the vector’s names, it will return
`NA` for those elements.
```
x["c"]
#> <NA>
#> NA
x[c("c", "d", "e")]
#> <NA> <NA> <NA>
#> NA NA NA
x[c("a", "b", "c")]
#> a b <NA>
#> 10 20 NA
```
Though not yet discussed much in this chapter, the `[[` behaves differently.
With an atomic vector, if `[[` is given an index outside the range of the vector or an invalid name, it raises an error.
```
x[["c"]]
#> Error in x[["c"]]: subscript out of bounds
```
```
x[[5]]
#> Error in x[[5]]: subscript out of bounds
```
20\.5 Recursive vectors (lists)
-------------------------------
### Exercise 20\.5\.1
Draw the following lists as nested sets:
1. `list(a, b, list(c, d), list(e, f))`
2. `list(list(list(list(list(list(a))))))`
There are a variety of ways to draw these graphs.
The original diagrams in *R for Data Science* were produced with [Graffle](https://www.omnigroup.com/omnigraffle).
You could also use various diagramming, drawing, or presentation software, including Adobe Illustrator, Inkscape, PowerPoint, Keynote, and Google Slides.
For these examples, I generated these diagrams programmatically using the
[DiagrammeR](http://rich-iannone.github.io/DiagrammeR/graphviz_and_mermaid.html) R package to render [Graphviz](https://www.graphviz.org/) diagrams.
1. The nested set diagram for
`list(a, b, list(c, d), list(e, f))`
is:[9](#fn9)
2. The nested set diagram for
`list(list(list(list(list(list(a))))))`
is:
### Exercise 20\.5\.2
What happens if you subset a `tibble` as if you’re subsetting a list? What are the key differences between a list and a `tibble`?
Subsetting a `tibble` works the same way as a list; a data frame can be thought of as a list of columns.
The key difference between a list and a `tibble` is that all the elements (columns) of a tibble must have the same length (number of rows).
Lists can have vectors with different lengths as elements.
```
x <- tibble(a = 1:2, b = 3:4)
x[["a"]]
#> [1] 1 2
x["a"]
#> # A tibble: 2 x 1
#> a
#> <int>
#> 1 1
#> 2 2
x[1]
#> # A tibble: 2 x 1
#> a
#> <int>
#> 1 1
#> 2 2
x[1, ]
#> # A tibble: 1 x 2
#> a b
#> <int> <int>
#> 1 1 3
```
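For contrast (a quick sketch), a list has no such restriction; its elements can have any lengths:
```
y <- list(a = 1:2, b = letters[1:5])
lengths(y)
#> a b
#> 2 5
```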
20\.6 Attributes
----------------
No exercises
20\.7 Augmented vectors
-----------------------
### Exercise 20\.7\.1
What does `hms::hms(3600)` return? How does it print? What primitive type is the augmented vector built on top of? What attributes does it use?
```
x <- hms::hms(3600)
class(x)
#> [1] "hms" "difftime"
x
#> 01:00:00
```
`hms::hms()` returns an object with classes `"hms"` and `"difftime"`, and prints the time in “%H:%M:%S” format.
The primitive type it is built on is a double:
```
typeof(x)
#> [1] "double"
```
The attributes it uses are `"units"` and `"class"`.
```
attributes(x)
#> $units
#> [1] "secs"
#>
#> $class
#> [1] "hms" "difftime"
```
### Exercise 20\.7\.2
Try and make a tibble that has columns with different lengths. What happens?
If I try to create a tibble with a scalar and a column of a different length, there are no issues: the scalar is recycled to the length of the longer vector.
```
tibble(x = 1, y = 1:5)
#> # A tibble: 5 x 2
#> x y
#> <dbl> <int>
#> 1 1 1
#> 2 1 2
#> 3 1 3
#> 4 1 4
#> 5 1 5
```
However, if I try to create a tibble with two vectors of different lengths (other than one), the `tibble` function throws an error.
```
tibble(x = 1:3, y = 1:4)
#> Error: Tibble columns must have compatible sizes.
#> * Size 3: Existing data.
#> * Size 4: Column `y`.
#> ℹ Only values of size one are recycled.
```
### Exercise 20\.7\.3
Based on the definition above, is it OK to have a list as a column of a tibble?
If I didn’t already know the answer, what I would do is try it out.
From the above, the error message was about vectors having different lengths.
But there is nothing that prevents a tibble from having vectors of different types: doubles, character, integers, logical, factor, date.
The latter two are built on atomic vectors, but they have additional attributes.
So, maybe there won’t be an issue with a list vector as long as it is the same length.
```
tibble(x = 1:3, y = list("a", 1, list(1:3)))
#> # A tibble: 3 x 2
#> x y
#> <int> <list>
#> 1 1 <chr [1]>
#> 2 2 <dbl [1]>
#> 3 3 <list [1]>
```
It works! I even used a list with heterogeneous types and there wasn’t an issue.
In following chapters we’ll see that list vectors can be very useful: for example, when processing many different models.
20\.1 Introduction
------------------
```
library("tidyverse")
```
20\.2 Vector basics
-------------------
No exercises
20\.3 Important types of atomic vector
--------------------------------------
### Exercise 20\.3\.1
Describe the difference between `is.finite(x)` and `!is.infinite(x)`.
To find out, try the functions on a numeric vector that includes at least one number and the four special values (`NA`, `NaN`, `Inf`, `-Inf`).
```
x <- c(0, NA, NaN, Inf, -Inf)
is.finite(x)
#> [1] TRUE FALSE FALSE FALSE FALSE
!is.infinite(x)
#> [1] TRUE TRUE TRUE FALSE FALSE
```
The `is.finite()` function considers non\-missing numeric values to be finite,
and missing (`NA`), not a number (`NaN`), and positive (`Inf`) and negative infinity (`-Inf`) to not be finite. The `is.infinite()` behaves slightly differently.
It considers `Inf` and `-Inf` to be infinite, and everything else, including non\-missing numbers, `NA`, and `NaN` to not be infinite. See Table [20\.1](vectors.html#tab:finite-infinite).
Table 20\.1: Results of `is.finite()` and `is.infinite()` for
numeric and special values.
| | `is.finite()` | `is.infinite()` |
| --- | --- | --- |
| `1` | `TRUE` | `FALSE` |
| `NA` | `FALSE` | `FALSE` |
| `NaN` | `FALSE` | `FALSE` |
| `Inf` | `FALSE` | `TRUE` |
### Exercise 20\.3\.2
Read the source code for `dplyr::near()` (Hint: to see the source code, drop the `()`). How does it work?
The source for `dplyr::near` is:
```
dplyr::near
#> function (x, y, tol = .Machine$double.eps^0.5)
#> {
#> abs(x - y) < tol
#> }
#> <bytecode: 0x5d8e8a8>
#> <environment: namespace:dplyr>
```
Instead of checking for exact equality, it checks that two numbers are within a certain tolerance, `tol`.
By default the tolerance is set to the square root of `.Machine$double.eps`, which is the smallest floating point number that the computer can represent.
### Exercise 20\.3\.3
A logical vector can take 3 possible values. How many possible values can an integer vector take? How many possible values can a double take? Use Google to do some research.
For integers vectors, R uses a 32\-bit representation. This means that it can represent up to \\(2^{32}\\) different values with integers. One of these values is set aside for `NA_integer_`.
From the help for `integer`.
> Note that current implementations of R use 32\-bit integers for integer vectors,
> so the range of representable integers is restricted to about \+/\-2\*10^9: doubles
> can hold much larger integers exactly.
The range of integers values that R can represent in an integer vector is \\(\\pm 2^{31} \- 1\\),
```
.Machine$integer.max
#> [1] 2147483647
```
The maximum integer is \\(2^{31} \- 1\\) rather than \\(2^{32}\\) because 1 bit is used to
represent the sign (\\(\+\\), \\(\-\\)) and one value is used to represent `NA_integer_`.
If you try to represent an integer greater than that value, R will return `NA` values.
```
.Machine$integer.max + 1L
#> Warning in .Machine$integer.max + 1L: NAs produced by integer overflow
#> [1] NA
```
However, you can represent that value (exactly) with a numeric vector at the cost of
about two times the memory.
```
as.numeric(.Machine$integer.max) + 1
#> [1] 2.15e+09
```
The same is true for the negative of the integer max.
```
-.Machine$integer.max - 1L
#> Warning in -.Machine$integer.max - 1L: NAs produced by integer overflow
#> [1] NA
```
For double vectors, R uses a 64\-bit representation. This means that they can hold up
to \\(2^{64}\\) values exactly. However, some of those values are allocated to special values
such as `-Inf`, `Inf`, `NA_real_`, and `NaN`. From the help for `double`:
> All R platforms are required to work with values conforming to the IEC 60559
> (also known as IEEE 754\) standard. This basically works with a precision of
> 53 bits, and represents to that precision a range of absolute values from
> about 2e\-308 to 2e\+308\. It also has special values `NaN` (many of them),
> plus and minus infinity
> and plus and minus zero (although R acts as if these are the same). There are
> also denormal(ized) (or subnormal) numbers with absolute values above or below
> the range given above but represented to less precision.
The details of floating point representation and arithmetic are complicated, beyond
the scope of this question, and better discussed in the references provided below.
The double can represent numbers in the range of about \\(\\pm 2 \\times 10^{308}\\), which is
provided in
```
.Machine$double.xmax
#> [1] 1.8e+308
```
Many other details for the implementation of the double vectors are given in the `.Machine` variable (and its documentation).
These include the base (radix) of doubles,
```
.Machine$double.base
#> [1] 2
```
the number of bits used for the significand (mantissa),
```
.Machine$double.digits
#> [1] 53
```
the number of bits used in the exponent,
```
.Machine$double.exponent
#> [1] 11
```
and the smallest positive and negative numbers not equal to zero,
```
.Machine$double.eps
#> [1] 2.22e-16
.Machine$double.neg.eps
#> [1] 1.11e-16
```
* Computerphile, “[Floating Point Numbers](https://www.youtube.com/watch?v=PZRI1IfStY0)”
* <https://en.wikipedia.org/wiki/IEEE_754>
* [https://en.wikipedia.org/wiki/Double\-precision\_floating\-point\_format](https://en.wikipedia.org/wiki/Double-precision_floating-point_format)
* “[Floating Point Numbers: Why floating\-point numbers are needed](https://floating-point-gui.de/formats/fp/)”
* Fabien Sanglard, “[Floating Point Numbers: Visually Explained](http://fabiensanglard.net/floating_point_visually_explained/)”
* James Howard, “[How Many Floating Point Numbers are There?](https://jameshoward.us/2015/09/09/how-many-floating-point-numbers-are-there/)”
* GeeksforGeeks, “[Floating Point Representation Basics](https://www.geeksforgeeks.org/floating-point-representation-basics/)”
* Chris Hecker, “[Lets Go to the (Floating) Point](http://chrishecker.com/images/f/fb/Gdmfp.pdf)”, *Game Developer*
* Chua Hock\-Chuan, [A Tutorial on Data Representation Integers, Floating\-point Numbers, and Characters](http://www.ntu.edu.sg/home/ehchua/programming/java/datarepresentation.html)
* John D. Cook, “[Anatomy of a floating point number](https://www.johndcook.com/blog/2009/04/06/anatomy-of-a-floating-point-number/)”
* John D. Cook, “[Five Tips for Floating Point Programming](https://www.codeproject.com/Articles/29637/Five-Tips-for-Floating-Point-Programming)”
### Exercise 20\.3\.4
Brainstorm at least four functions that allow you to convert a double to an integer. How do they differ? Be precise.
The difference between to convert a double to an integer differ in how they deal with the fractional part of the double.
There are are a variety of rules that could be used to do this.
* Round down, towards \\(\-\\infty\\). This is also called taking the `floor` of a number. This is the method the `floor()` function uses.
* Round up, towards \\(\+\\infty\\). This is also called taking the `ceiling`. This is the method the `ceiling()` function uses.
* Round towards zero. This is the method that the `trunc()` and `as.integer()` functions use.
* Round away from zero.
* Round to the nearest integer. There several different methods for handling ties, defined as numbers with a fractional part of 0\.5\.
+ Round half down, towards \\(\-\\infty\\).
+ Round half up, towards \\(\+\\infty\\).
+ Round half towards zero
+ Round half away from zero
+ Round half towards the even integer. This is the method that the `round()` function uses.
+ Round half towards the odd integer.
```
function(x, method) {
if (method == "round down") {
floor(x)
} else if (method == "round up") {
ceiling(x)
} else if (method == "round towards zero") {
trunc(x)
} else if (method == "round away from zero") {
sign(x) * ceiling(abs(x))
} else if (method == "nearest, round half up") {
floor(x + 0.5)
} else if (method == "nearest, round half down") {
ceiling(x - 0.5)
} else if (method == "nearest, round half towards zero") {
sign(x) * ceiling(abs(x) - 0.5)
} else if (method == "nearest, round half away from zero") {
sign(x) * floor(abs(x) + 0.5)
} else if (method == "nearest, round half to even") {
round(x, digits = 0)
} else if (method == "nearest, round half to odd") {
case_when(
# smaller integer is odd - round half down
floor(x) %% 2 ~ ceiling(x - 0.5),
# otherwise, round half up
TRUE ~ floor(x + 0.5)
)
} else if (method == "nearest, round half randomly") {
round_half_up <- sample(c(TRUE, FALSE), length(x), replace = TRUE)
y <- x
y[round_half_up] <- ceiling(x[round_half_up] - 0.5)
y[!round_half_up] <- floor(x[!round_half_up] + 0.5)
y
}
}
#> function(x, method) {
#> if (method == "round down") {
#> floor(x)
#> } else if (method == "round up") {
#> ceiling(x)
#> } else if (method == "round towards zero") {
#> trunc(x)
#> } else if (method == "round away from zero") {
#> sign(x) * ceiling(abs(x))
#> } else if (method == "nearest, round half up") {
#> floor(x + 0.5)
#> } else if (method == "nearest, round half down") {
#> ceiling(x - 0.5)
#> } else if (method == "nearest, round half towards zero") {
#> sign(x) * ceiling(abs(x) - 0.5)
#> } else if (method == "nearest, round half away from zero") {
#> sign(x) * floor(abs(x) + 0.5)
#> } else if (method == "nearest, round half to even") {
#> round(x, digits = 0)
#> } else if (method == "nearest, round half to odd") {
#> case_when(
#> # smaller integer is odd - round half down
#> floor(x) %% 2 ~ ceiling(x - 0.5),
#> # otherwise, round half up
#> TRUE ~ floor(x + 0.5)
#> )
#> } else if (method == "nearest, round half randomly") {
#> round_half_up <- sample(c(TRUE, FALSE), length(x), replace = TRUE)
#> y <- x
#> y[round_half_up] <- ceiling(x[round_half_up] - 0.5)
#> y[!round_half_up] <- floor(x[!round_half_up] + 0.5)
#> y
#> }
#> }
#> <environment: 0x2b114b8>
```
```
tibble(
x = c(1.8, 1.5, 1.2, 0.8, 0.5, 0.2,
-0.2, -0.5, -0.8, -1.2, -1.5, -1.8),
`Round down` = floor(x),
`Round up` = ceiling(x),
`Round towards zero` = trunc(x),
`Nearest, round half to even` = round(x)
)
#> # A tibble: 12 x 5
#> x `Round down` `Round up` `Round towards zero` `Nearest, round half to ev…
#> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1.8 1 2 1 2
#> 2 1.5 1 2 1 2
#> 3 1.2 1 2 1 1
#> 4 0.8 0 1 0 1
#> 5 0.5 0 1 0 0
#> 6 0.2 0 1 0 0
#> # … with 6 more rows
```
See the Wikipedia articles, [Rounding](https://en.wikipedia.org/wiki/Rounding) and [IEEE floating point](https://en.wikipedia.org/wiki/IEEE_floating_point) for more discussion of these rounding rules.
For rounding, R and many programming languages use the IEEE standard. This method is called “round to nearest, ties to even.”[8](#fn8)
This rule rounds ties, numbers with a remainder of 0\.5, to the nearest even number.
In this rule, half the ties are rounded up, and half are rounded down.
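We can see the ties\-to\-even behaviour of `round()` directly by applying it to a few exact halves (the particular values are arbitrary):
```
round(c(0.5, 1.5, 2.5, 3.5))
#> [1] 0 2 2 4
round(c(-0.5, -1.5, -2.5))
#> [1]  0 -2 -2
```
Each tie goes to the nearest even integer, so 0\.5 and 2\.5 round down while 1\.5 and 3\.5 round up.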
The following function, `round2()`, implements the "round half up" rule, so that we can compare its behaviour on ties with that of `round()`.
```
# round2() always rounds halves up (towards +Inf);
# the to_even argument is accepted but not used
round2 <- function(x, to_even = TRUE) {
  q <- x %/% 1
  r <- x %% 1
  q + (r >= 0.5)
}
x <- c(-12.5, -11.5, 11.5, 12.5)
round(x)
#> [1] -12 -12 12 12
round2(x, to_even = FALSE)
#> [1] -12 -11 12 13
```
This rounding method may be different from the one you learned in grade school, which, at least for me, was to always round ties upwards or, alternatively, away from zero.
This rule is called the “round half up” rule.
The problem with the “round half up” rule is that it is biased upwards for positive numbers.
Rounding to nearest with ties towards even is not.
Consider this sequence which sums to zero.
```
x <- seq(-100.5, 100.5, by = 1)
x
#> [1] -100.5 -99.5 -98.5 -97.5 -96.5 -95.5 -94.5 -93.5 -92.5 -91.5
#> [11] -90.5 -89.5 -88.5 -87.5 -86.5 -85.5 -84.5 -83.5 -82.5 -81.5
#> [21] -80.5 -79.5 -78.5 -77.5 -76.5 -75.5 -74.5 -73.5 -72.5 -71.5
#> [31] -70.5 -69.5 -68.5 -67.5 -66.5 -65.5 -64.5 -63.5 -62.5 -61.5
#> [41] -60.5 -59.5 -58.5 -57.5 -56.5 -55.5 -54.5 -53.5 -52.5 -51.5
#> [51] -50.5 -49.5 -48.5 -47.5 -46.5 -45.5 -44.5 -43.5 -42.5 -41.5
#> [61] -40.5 -39.5 -38.5 -37.5 -36.5 -35.5 -34.5 -33.5 -32.5 -31.5
#> [71] -30.5 -29.5 -28.5 -27.5 -26.5 -25.5 -24.5 -23.5 -22.5 -21.5
#> [81] -20.5 -19.5 -18.5 -17.5 -16.5 -15.5 -14.5 -13.5 -12.5 -11.5
#> [91] -10.5 -9.5 -8.5 -7.5 -6.5 -5.5 -4.5 -3.5 -2.5 -1.5
#> [101] -0.5 0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5
#> [111] 9.5 10.5 11.5 12.5 13.5 14.5 15.5 16.5 17.5 18.5
#> [121] 19.5 20.5 21.5 22.5 23.5 24.5 25.5 26.5 27.5 28.5
#> [131] 29.5 30.5 31.5 32.5 33.5 34.5 35.5 36.5 37.5 38.5
#> [141] 39.5 40.5 41.5 42.5 43.5 44.5 45.5 46.5 47.5 48.5
#> [151] 49.5 50.5 51.5 52.5 53.5 54.5 55.5 56.5 57.5 58.5
#> [161] 59.5 60.5 61.5 62.5 63.5 64.5 65.5 66.5 67.5 68.5
#> [171] 69.5 70.5 71.5 72.5 73.5 74.5 75.5 76.5 77.5 78.5
#> [181] 79.5 80.5 81.5 82.5 83.5 84.5 85.5 86.5 87.5 88.5
#> [191] 89.5 90.5 91.5 92.5 93.5 94.5 95.5 96.5 97.5 98.5
#> [201] 99.5 100.5
sum(x)
#> [1] 0
```
A nice property for a rounding rule is that it preserves that sum.
Using "ties to even", the sum of the rounded values is still zero.
However, "ties towards \\(\+\\infty\\)" produces a non\-zero sum.
```
sum(x)
#> [1] 0
sum(round(x))
#> [1] 0
sum(round2(x))
#> [1] 101
```
Rounding rules can have real\-world impacts.
One notable example: in 1983, the Vancouver Stock Exchange adjusted its index from 524\.811 to 1098\.892 to correct for accumulated error due to rounding the index to three decimal places (see [Vancouver Stock Exchange](https://en.wikipedia.org/wiki/Vancouver_Stock_Exchange)).
This [site](https://web.ma.utexas.edu/users/arbogast/misc/disasters.html) lists several more examples of the dangers of rounding rules.
### Exercise 20\.3\.5
What functions from the readr package allow you to turn a string into logical, integer, and double vector?
The function `parse_logical()` parses logical values, which can appear
as variations of TRUE/FALSE or 1/0\.
```
parse_logical(c("TRUE", "FALSE", "1", "0", "true", "t", "NA"))
#> [1] TRUE FALSE TRUE FALSE TRUE TRUE NA
```
The function `parse_integer()` parses integer values.
```
parse_integer(c("1235", "0134", "NA"))
#> [1] 1235 134 NA
```
However, if there are any non\-numeric characters in the string, including
currency symbols, commas, and decimal points, `parse_integer()` fails to parse the value, issuing a parsing\-failure warning and returning `NA` for that element.
```
parse_integer(c("1000", "$1,000", "10.00"))
#> Warning: 2 parsing failures.
#> row col expected actual
#> 2 -- an integer $1,000
#> 3 -- no trailing characters .00
#> [1] 1000 NA NA
#> attr(,"problems")
#> # A tibble: 2 x 4
#> row col expected actual
#> <int> <int> <chr> <chr>
#> 1 2 NA an integer $1,000
#> 2 3 NA no trailing characters .00
```
The function `parse_number()` parses numeric values.
Unlike `parse_integer()`, the function `parse_number()` is more forgiving about the format of the numbers.
It ignores all non\-numeric characters before or after the first number, as with `"$1,000.00"` in the example.
Within the number, `parse_number()` will only ignore grouping marks such as `","`.
This allows it to easily parse numeric fields that include currency symbols and comma separators in number strings without any intervention by the user.
```
parse_number(c("1.0", "3.5", "$1,000.00", "NA", "ABCD12234.90", "1234ABC", "A123B", "A1B2C"))
#> [1] 1.0 3.5 1000.0 NA 12234.9 1234.0 123.0 1.0
```
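The question also asks about doubles. For those, readr provides `parse_double()`, which is stricter than `parse_number()` and expects a plain numeric string (the inputs below are just examples):
```
parse_double(c("1.23", "4.56", "NA"))
#> [1] 1.23 4.56   NA
```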
20\.4 Using atomic vectors
--------------------------
### Exercise 20\.4\.1
What does `mean(is.na(x))` tell you about a vector `x`? What about `sum(!is.finite(x))`?
I’ll use the numeric vector `x` to compare the behaviors of `is.na()`
and `is.finite()`. It contains numbers (`-1`, `0`, `1`) as
well as all the special numeric values: infinity (`Inf`),
missing (`NA`), and not\-a\-number (`NaN`).
```
x <- c(-Inf, -1, 0, 1, Inf, NA, NaN)
```
The expression `mean(is.na(x))` calculates the proportion of missing (`NA`) and not\-a\-number `NaN` values in a vector:
```
mean(is.na(x))
#> [1] 0.286
```
The result of 0\.286 is equal to `2 / 7` as expected.
There are seven elements in the vector `x`, and two elements that are either `NA` or `NaN`.
The expression `sum(!is.finite(x))` calculates the number of elements in the vector that are equal to missing (`NA`), not\-a\-number (`NaN`), or infinity (`Inf`).
```
sum(!is.finite(x))
#> [1] 4
```
Review the [Numeric](https://r4ds.had.co.nz/vectors.html#numeric) section for the differences between `is.na()` and `is.finite()`.
### Exercise 20\.4\.2
Carefully read the documentation of `is.vector()`. What does it actually test for? Why does `is.atomic()` not agree with the definition of atomic vectors above?
The function `is.vector()` only checks whether the object has no attributes other than names. Thus a `list` is a vector:
```
is.vector(list(a = 1, b = 2))
#> [1] TRUE
```
But any object that has an attribute (other than names) is not:
```
x <- 1:10
attr(x, "something") <- TRUE
is.vector(x)
#> [1] FALSE
```
The idea behind this is that object oriented classes will include attributes, including, but not limited to `"class"`.
The function `is.atomic()` explicitly checks whether an object is one of the atomic types (“logical”, “integer”, “numeric”, “complex”, “character”, and “raw”) or NULL.
```
is.atomic(1:10)
#> [1] TRUE
is.atomic(list(a = 1))
#> [1] FALSE
```
The function `is.atomic()` will consider objects to be atomic even if they have extra attributes.
```
is.atomic(x)
#> [1] TRUE
```
### Exercise 20\.4\.3
Compare and contrast `setNames()` with `purrr::set_names()`.
The function `setNames()` takes two arguments, a vector to be named and a vector
of names to apply to its elements.
```
setNames(1:4, c("a", "b", "c", "d"))
#> a b c d
#> 1 2 3 4
```
You can use the values of the vector as its names if the `nm` argument is used.
```
setNames(nm = c("a", "b", "c", "d"))
#> a b c d
#> "a" "b" "c" "d"
```
The function `set_names()` has more ways to set the names than `setNames()`.
The names can be specified in the same manner as `setNames()`.
```
purrr::set_names(1:4, c("a", "b", "c", "d"))
#> a b c d
#> 1 2 3 4
```
The names can also be specified as unnamed arguments,
```
purrr::set_names(1:4, "a", "b", "c", "d")
#> a b c d
#> 1 2 3 4
```
The function `set_names()` will name an object with itself if no `nm` argument is
provided (the opposite of `setNames()` behavior).
```
purrr::set_names(c("a", "b", "c", "d"))
#> a b c d
#> "a" "b" "c" "d"
```
The biggest difference between `set_names()` and `setNames()` is that `set_names()` allows for using a function or formula to transform the existing names.
```
purrr::set_names(c(a = 1, b = 2, c = 3), toupper)
#> A B C
#> 1 2 3
purrr::set_names(c(a = 1, b = 2, c = 3), ~toupper(.))
#> A B C
#> 1 2 3
```
The `set_names()` function also checks that the length of the names argument is the
same length as the vector that is being named, and will raise an error if it is not.
```
purrr::set_names(1:4, c("a", "b"))
#> Error: `nm` must be `NULL` or a character vector the same length as `x`
```
The `setNames()` function will allow the names to be shorter than the vector being
named, and will set the missing names to `NA`.
```
setNames(1:4, c("a", "b"))
#> a b <NA> <NA>
#> 1 2 3 4
```
### Exercise 20\.4\.4
Create functions that take a vector as input and returns:
1. The last value. Should you use `[` or `[[`?
2. The elements at even numbered positions.
3. Every element except the last value.
4. Only even numbers (and no missing values).
The answers to the parts follow.
1. This function finds the last value in a vector.
```
last_value <- function(x) {
# check for case with no length
if (length(x)) {
x[[length(x)]]
} else {
x
}
}
last_value(numeric())
#> numeric(0)
last_value(1)
#> [1] 1
last_value(1:10)
#> [1] 10
```
The function uses `[[` in order to extract a single element; a short comparison of `[` and `[[` follows after this list.
2. This function returns the elements at even\-numbered positions.
```
even_indices <- function(x) {
if (length(x)) {
x[seq_along(x) %% 2 == 0]
} else {
x
}
}
even_indices(numeric())
#> numeric(0)
even_indices(1)
#> numeric(0)
even_indices(1:10)
#> [1] 2 4 6 8 10
# test with a character vector to ensure that values,
# not indices, are being returned
even_indices(letters)
#> [1] "b" "d" "f" "h" "j" "l" "n" "p" "r" "t" "v" "x" "z"
```
3. This function returns a vector with every element except the last.
```
not_last <- function(x) {
n <- length(x)
if (n) {
x[-n]
} else {
# n == 0
x
}
}
not_last(1:3)
#> [1] 1 2
```
We should also confirm that the function works with some edge cases, like
a vector with one element, and a vector with zero elements.
```
not_last(1)
#> numeric(0)
not_last(numeric())
#> numeric(0)
```
In both these cases, `not_last()` correctly returns an empty vector.
4. This function returns the elements of a vector that are even numbers.
```
even_numbers <- function(x) {
x[x %% 2 == 0]
}
even_numbers(-4:4)
#> [1] -4 -2 0 2 4
```
We could improve this function by handling the special numeric values:
`NA`, `NaN`, `Inf`. However, first we need to decide how to handle them.
Neither `NaN` nor `Inf` are numbers, and so they are neither even nor odd.
In other words, since `NaN` and `Inf` aren’t even *numbers*, they aren’t *even numbers*.
What about `NA`? Well, we don’t know. `NA` stands in for a number whose value we don’t know, so
the missing value could be even or odd, and the natural answer is `NA`.
Another reason to return `NA` is that it is consistent with the behavior of other R functions,
which generally return `NA` values instead of dropping them.
```
even_numbers2 <- function(x) {
x[!is.infinite(x) & !is.nan(x) & (x %% 2 == 0)]
}
even_numbers2(c(0:4, NA, NaN, Inf, -Inf))
#> [1] 0 2 4 NA
```
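Returning to the question in part 1 of whether to use `[` or `[[`: for an atomic vector the visible difference is that `[` keeps the element’s name while `[[` drops it (a short illustration; the vector `y` is made up for this example):
```
y <- c(a = 1, b = 2)
y[length(y)]
#> b 
#> 2
y[[length(y)]]
#> [1] 2
```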
### Exercise 20\.4\.5
Why is `x[-which(x > 0)]` not the same as `x[x <= 0]`?
These expressions differ in the way that they treat missing values.
Let’s test how they work by creating a vector with positive and negative integers,
and special values (`NA`, `NaN`, and `Inf`). These values should encompass
all relevant types of values that these expressions would encounter.
```
x <- c(-1:1, Inf, -Inf, NaN, NA)
x[-which(x > 0)]
#> [1] -1 0 -Inf NaN NA
x[x <= 0]
#> [1] -1 0 -Inf NA NA
```
The expressions `x[-which(x > 0)]` and `x[x <= 0]` return the same values except
that the expression using `which()` keeps `NaN`, whereas `x[x <= 0]` returns `NA` in that position.
So what is going on here? Let’s work through each part of these expressions and
see where the difference occurs.
Let’s start with the expression `x[x <= 0]`.
```
x <= 0
#> [1] TRUE TRUE FALSE FALSE TRUE NA NA
```
Recall how the logical relational operators (`<`, `<=`, `==`, `!=`, `>`, `>=`) treat `NA` values.
Any relational operation that includes a `NA` returns an `NA`.
Is `NA <= 0`? We don’t know because it depends on the unknown value of `NA`, so the answer is `NA`.
This same argument applies to `NaN`. Asking whether `NaN <= 0` does not make sense because you can’t compare a number to “Not a Number”.
Now recall how indexing treats `NA` values.
Indexing can take a logical vector as an input.
When the indexing vector is logical, the output vector includes those elements where the logical vector is `TRUE`, and excludes those elements where the logical vector is `FALSE`.
Logical vectors can also include `NA` values, and it is not clear how they should be treated.
Well, since the value is `NA`, it could be `TRUE` or `FALSE`, we don’t know.
Keeping elements with `NA` would treat the `NA` as `TRUE`, and dropping them would treat the `NA` as `FALSE`.
The way R decides to handle the `NA` values so that they are treated differently than `TRUE` or `FALSE` values is to include elements where the indexing vector is `NA`, but set their values to `NA`.
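A minimal illustration of this behaviour, using a small made\-up vector:
```
c(10, 20, 30)[c(TRUE, NA, FALSE)]
#> [1] 10 NA
```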
Now consider the expression `x[-which(x > 0)]`.
As before, to understand this expression we’ll work from the inside out.
Consider `x > 0`.
```
x > 0
#> [1] FALSE FALSE TRUE TRUE FALSE NA NA
```
As with `x <= 0`, it returns `NA` for comparisons involving `NA` and `NaN`.
What does `which()` do?
```
which(x > 0)
#> [1] 3 4
```
The `which()` function returns the indexes for which the argument is `TRUE`.
This means that it is not including the indexes for which the argument is `FALSE` or `NA`.
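For example, `which()` drops both the `FALSE` and the `NA` entries:
```
which(c(TRUE, FALSE, NA))
#> [1] 1
```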
Now consider the full expression `x[-which(x > 0)]`.
The `which()` function returned a vector of integers.
How does indexing treat negative integers?
```
x[1:2]
#> [1] -1 0
x[-(1:2)]
#> [1] 1 Inf -Inf NaN NA
```
If indexing gets a vector of positive integers, it will select those indexes;
if it receives a vector of negative integers, it will drop those indexes.
Thus, `x[-which(x > 0)]` ends up dropping the elements for which `x > 0` is true,
and keeps all the other elements and their original values, including `NA` and `NaN`.
There’s one other special case that we should consider. How do these two expressions work with
an empty vector?
```
x <- numeric()
x[x <= 0]
#> numeric(0)
x[-which(x > 0)]
#> numeric(0)
```
Thankfully, they both handle empty vectors the same.
This exercise is a reminder to always test your code. Even though these two expressions looked
equivalent, they are not in practice. And when you do test code, consider both
how it works on typical values as well as special values and edge cases, like a
vector with `NA` or `NaN` or `Inf` values, or an empty vector. These are where
unexpected behavior is most likely to occur.
### Exercise 20\.4\.6
What happens when you subset with a positive integer that’s bigger than the length of the vector? What happens when you subset with a name that doesn’t exist?
Let’s consider the named vector,
```
x <- c(a = 10, b = 20)
```
If we subset it by an integer larger than its length, it returns a vector of missing values.
```
x[3]
#> <NA>
#> NA
```
This also applies to ranges.
```
x[3:5]
#> <NA> <NA> <NA>
#> NA NA NA
```
If some indexes are larger than the length of the vector, those elements are `NA`.
```
x[1:5]
#> a b <NA> <NA> <NA>
#> 10 20 NA NA NA
```
Likewise, when `[` is provided names not in the vector’s names, it will return
`NA` for those elements.
```
x["c"]
#> <NA>
#> NA
x[c("c", "d", "e")]
#> <NA> <NA> <NA>
#> NA NA NA
x[c("a", "b", "c")]
#> a b <NA>
#> 10 20 NA
```
Though not yet discussed much in this chapter, `[[` behaves differently.
With an atomic vector, if `[[` is given an index outside the range of the vector or an invalid name, it raises an error.
```
x[["c"]]
#> Error in x[["c"]]: subscript out of bounds
```
```
x[[5]]
#> Error in x[[5]]: subscript out of bounds
```
20\.5 Recursive vectors (lists)
-------------------------------
### Exercise 20\.5\.1
Draw the following lists as nested sets:
1. `list(a, b, list(c, d), list(e, f))`
2. `list(list(list(list(list(list(a))))))`
There are a variety of ways to draw these graphs.
The original diagrams in *R for Data Science* were produced with [Graffle](https://www.omnigroup.com/omnigraffle).
You could also use various diagramming, drawing, or presentation software, including Adobe Illustrator, Inkscape, PowerPoint, Keynote, and Google Slides.
For these examples, I generated these diagrams programmatically using the
[DiagrammeR](http://rich-iannone.github.io/DiagrammeR/graphviz_and_mermaid.html) R package to render [Graphviz](https://www.graphviz.org/) diagrams.
1. The nested set diagram for `list(a, b, list(c, d), list(e, f))` is an outer set containing `a`, `b`, and two inner sets: one containing `c` and `d`, and one containing `e` and `f`.[9](#fn9)
2. The nested set diagram for `list(list(list(list(list(list(a))))))` is six sets nested one inside the next, with `a` inside the innermost set.
### Exercise 20\.5\.2
What happens if you subset a `tibble` as if you’re subsetting a list? What are the key differences between a list and a `tibble`?
Subsetting a `tibble` works the same way as a list; a data frame can be thought of as a list of columns.
The key difference between a list and a `tibble` is that all the elements (columns) of a tibble must have the same length (number of rows).
Lists can have vectors with different lengths as elements.
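For example, a plain list happily holds elements of different lengths (a small made\-up example):
```
l <- list(a = 1:3, b = letters[1:5])
lengths(l)
#> a b 
#> 3 5
```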
```
x <- tibble(a = 1:2, b = 3:4)
x[["a"]]
#> [1] 1 2
x["a"]
#> # A tibble: 2 x 1
#> a
#> <int>
#> 1 1
#> 2 2
x[1]
#> # A tibble: 2 x 1
#> a
#> <int>
#> 1 1
#> 2 2
x[1, ]
#> # A tibble: 1 x 2
#> a b
#> <int> <int>
#> 1 1 3
```
20\.6 Attributes
----------------
No exercises
20\.7 Augmented vectors
-----------------------
### Exercise 20\.7\.1
What does `hms::hms(3600)` return? How does it print? What primitive type is the augmented vector built on top of? What attributes does it use?
```
x <- hms::hms(3600)
class(x)
#> [1] "hms" "difftime"
x
#> 01:00:00
```
`hms::hms()` returns an object with classes `hms` and `difftime`, and prints the time in “%H:%M:%S” format.
The primitive type it is built on top of is a double.
```
typeof(x)
#> [1] "double"
```
The attributes it uses are `"units"` and `"class"`.
```
attributes(x)
#> $units
#> [1] "secs"
#>
#> $class
#> [1] "hms" "difftime"
```
### Exercise 20\.7\.2
Try and make a tibble that has columns with different lengths. What happens?
If I try to create a tibble with a scalar and a column of a different length, there are no issues, and the scalar is recycled to the length of the longer vector.
```
tibble(x = 1, y = 1:5)
#> # A tibble: 5 x 2
#> x y
#> <dbl> <int>
#> 1 1 1
#> 2 1 2
#> 3 1 3
#> 4 1 4
#> 5 1 5
```
However, if I try to create a tibble with two vectors of different lengths (other than one), the `tibble` function throws an error.
```
tibble(x = 1:3, y = 1:4)
#> Error: Tibble columns must have compatible sizes.
#> * Size 3: Existing data.
#> * Size 4: Column `y`.
#> ℹ Only values of size one are recycled.
```
### Exercise 20\.7\.3
Based on the definition above, is it OK to have a list as a column of a tibble?
If I didn’t already know the answer, what I would do is try it out.
From the above, the error message was about vectors having different lengths.
But there is nothing that prevents a tibble from having vectors of different types: doubles, character, integers, logical, factor, date.
The latter two are still atomic vectors underneath, but they have additional attributes.
So, maybe there won’t be an issue with a list vector as long as it is the same length.
```
tibble(x = 1:3, y = list("a", 1, list(1:3)))
#> # A tibble: 3 x 2
#> x y
#> <int> <list>
#> 1 1 <chr [1]>
#> 2 2 <dbl [1]>
#> 3 3 <list [1]>
```
It works! I even used a list with heterogeneous types and there wasn’t an issue.
In the following chapters, we’ll see that list columns can be very useful: for example, when processing many different models.
| Data Science |